Time Dilated Computing

2025, March 23    

On a recent work trip into the city I was playing a game of 'count-the-cameras' and it got me thinking about how much computational power would be required to process all of that video footage in real time, from facial recognition to contextual awareness (e.g., what or whom someone on a train was smiling at). Even with the creation of specialised processing units (TPUs, for example), attempting to process all feeds within a country (fast enough to make real-time determinations) would require an insane amount of computational resources, and an insane amount of time.
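
To get a feel for the scale, here is a back-of-envelope sketch in Python. Every number in it is an assumption made up for illustration (feed count, frame rate, per-frame inference cost, accelerator throughput), so treat the output as an order-of-magnitude picture rather than a real estimate.

```python
# Back-of-envelope compute estimate. All figures below are illustrative assumptions.
feeds = 5_000_000          # assumed number of camera feeds in the country
fps = 25                   # assumed frames per second per feed
flops_per_frame = 10e9     # assumed inference cost per frame (detection + recognition + context)

total_flops = feeds * fps * flops_per_frame   # sustained FLOP/s needed for real time

accelerator_flops = 275e12                    # assumed throughput of one TPU-class accelerator
accelerators = total_flops / accelerator_flops

print(f"Sustained compute needed: {total_flops:.2e} FLOP/s")
print(f"Accelerators required (at 100% utilisation): {accelerators:,.0f}")
```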

In computing there are two basic types of scaling for improving performance (a minimal sketch of both follows the list):

  • Horizontal scaling (adding more compute resources to process the workload in parallel)
  • Vertical scaling (improving the performance/speed of your computational unit(s) so tasks complete faster)
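
To make the distinction concrete, here is a minimal sketch (the frame count, per-frame workload, and worker count are all arbitrary). Horizontal scaling is the second block: the same per-frame work spread across more cores. Vertical scaling would instead make analyse_frame itself run faster on a single core.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def analyse_frame(frame_id: int) -> int:
    """Stand-in for per-frame analysis: just burns a little CPU."""
    return sum(i * i for i in range(200_000)) + frame_id

frames = list(range(64))

if __name__ == "__main__":
    # Single worker: the baseline that a vertical upgrade (faster core) would speed up.
    start = time.perf_counter()
    results = [analyse_frame(f) for f in frames]
    print(f"1 worker : {time.perf_counter() - start:.2f}s")

    # Horizontal scaling: the same workload split across multiple cores.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(analyse_frame, frames))
    print(f"8 workers: {time.perf_counter() - start:.2f}s")
```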

In the last five years horizontal scaling has become more prominent, with multi-core CPUs (and software that can take advantage of them) becoming more prevalent. Octa-core mobile phones are now commonplace, and desktop computers with 16 cores (or more) can be found in even mid-range setups. Servers can now be purchased with over 128 cores per socket, and even workstations offer significant performance with the likes of the AMD Threadripper PRO 7995WX (which sadly my bank balance cannot accommodate).

Vertical scaling is still ongoing, but has slowed down somewhat (looking at you, Intel) in favour of horizontal. The days of gigahertz-level increases are long gone (at least for now) as physics is tricky, especially when it comes to continuously trying to shrink semiconductors. That said, IPC improvements are still being made (the latest AMD CPUs are a good example of this), and the improvements aren't limited to x86_64. The latest ARM-based CPUs are also showing significant improvements in both functionality and performance, with the Apple M4 being a serious powerhouse.

There is also a third option (though technically it isn't scaling), whereby you completely change the architecture / design to something better suited to the task at hand. A good example of this is how Bitcoin mining went from CPU-based, to GPU-based, to custom-designed ASICs that are orders of magnitude faster (but can do little else). Newer CPUs have started incorporating this in the form of AI accelerators, and most GPUs now include dedicated media decoders / encoders.

Taking the above one step further (and adding a fourth option) is to move to quantum computing (or more specifically quantum parallelism, due to the use of qubits). This technology is still in its infancy and has many major obstacles to overcome. Additionally, at present quantum computing is only suitable for certain types of calculations (aligning it more with ASICs), and so even if a fully functional quantum computer were available today (without all of the caveats), it wouldn't be suitable for many commonplace tasks (including the one detailed here).

The problem with all of the above approaches is that they only get you so far. At some point you run out of performance, space, and likely electricity to power it all. You could look at reducing the fidelity of the footage you are processing, however that introduces inaccurate / incorrect determinations due to lost context / missing information. All of this is before considering the amount of data storage required to hold the footage (even just for processing, before archival to a medium like tape).
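
The same kind of rough arithmetic applies to storage; again, the feed count, bitrate, and retention period below are assumptions for illustration only.

```python
# Rough storage arithmetic. All figures below are illustrative assumptions.
feeds = 5_000_000        # assumed number of camera feeds (matching the earlier sketch)
mbps_per_feed = 2        # assumed bitrate of one compressed feed, in Mbit/s
retention_days = 30      # assumed retention window before archival

bytes_per_day = feeds * (mbps_per_feed * 1e6 / 8) * 86_400
total_bytes = bytes_per_day * retention_days

print(f"Per day          : {bytes_per_day / 1e15:.0f} PB")
print(f"{retention_days}-day retention : {total_bytes / 1e18:.2f} EB")
```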

With all of the above approaches, the one unit that was only mentioned in the first paragraph is time. To us, time is a constant that passes at a measurable rate (yet always passes faster when it involves our alarm clock). In practice, however, time is relative to the observer and is affected by gravity. The closer you get to a black hole, the slower time passes, or more accurately: to the person near the black hole it appears that time far away is moving incredibly fast, while to people far away the person next to the black hole appears to be moving incredibly slowly. Same universe, same rules, yet due to relativity time can appear to pass at different rates.
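
For the ordinary (non-inverse) case this is well understood; a small sketch of the gravitational time dilation factor for a non-rotating mass, using the Schwarzschild relation (the black hole mass and hover distance in the example are arbitrary):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458      # speed of light, m/s

def dilation_factor(mass_kg: float, radius_m: float) -> float:
    """Far-away seconds that pass for each local second at distance radius_m from a
    non-rotating mass: dt_far / dt_local = 1 / sqrt(1 - r_s / r), with r_s = 2GM / c^2."""
    r_s = 2 * G * mass_kg / C**2
    return 1 / math.sqrt(1 - r_s / radius_m)

# Example: hovering at 1.01x the Schwarzschild radius of a 10-solar-mass black hole.
mass = 10 * 1.989e30
r_s = 2 * G * mass / C**2
print(f"Dilation factor: {dilation_factor(mass, 1.01 * r_s):.1f}x")   # roughly 10x
```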

This is where we wander into the realm of science fiction, and take inspiration from a Star Trek: Voyager episode titled 'Blink of an Eye'. The premise is that the crew encounter a planet where time passes incredibly fast (in comparison to time in space / on the ship), allowing them to watch a civilisation grow through hundreds of years in a matter of hours. While science fiction in nature, it does raise an interesting question.

If you could create an area of space-time that leveraged inverse relative-velocity time dilation, or inverse gravitational time dilation, time in that area would pass faster than the time around it. If you could couple this with a form of communication (ideally quantum entanglement, but without wavefunction collapse), you could send the footage into the area for processing (where, relatively speaking, it would be ingested as very slow playback), process it, and then send the results back. In theory, this adjusts the compute balance and allows masses of data to be processed in 'real time', as while the compute resources stay the same, the time available to them (relatively speaking) does not.
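
If such an inverse region did exist, the bookkeeping from the outside would be simple; a sketch with an entirely hypothetical dilation factor:

```python
def outside_hours(compute_hours_needed: float, dilation_factor: float) -> float:
    """Hours that pass outside the region while compute_hours_needed hours of
    processing happen inside a region whose clocks run dilation_factor times faster."""
    return compute_hours_needed / dilation_factor

# Example: a job needing 1,000 hours of compute, inside a region running 500x faster
# (both numbers are made up).
print(f"{outside_hours(1_000, 500):.0f} hours pass outside")   # -> 2 hours
```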

Of course, all of the above is just theory: inverse time dilation doesn't exist (that we know of), wavefunction collapse is unavoidable (at present), and we still don't have a habitat on the moon (baby steps and all that). That said, if we manage to achieve all of this some day, the level of surveillance capability (or data processing in general) would be a significant leap.