O(log(n)) confuses me. I could have sworn that the "Center-of-spherical library" thought experiment proves that it's O(cube-root(n)) in 3-dimensional space, and O(square-root(n)) in 2-dimensional space.
That is: all memory has a physical shape, and the worst-case speed is related to the distance away from the center (aka: CPU Core) and whatever "book" or memory-cell you're looking for.
For a 2d space, the optimal organization of books (or memory cells) is a perfect circle: each book is exactly R (the radius) away from the center, so every book can be fetched in equal time. This leads to O(sqrt(n)) performance: the more memory you have (assuming each cell keeps the same fixed size), the larger the circle _must_ become, and since area grows with R^2, R grows with sqrt(n).
For 3d space, the optimal organization of books is a perfect sphere. Same-same really, except we can fly now.
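The scaling above can be sketched numerically. This is a toy model, not a hardware simulation: the assumption that each cell occupies exactly one unit of area (2d) or volume (3d) is mine, just to make the root-n growth visible.

```python
import math

# Toy model of the "spherical library": n cells of unit area/volume
# packed as tightly as possible around a central CPU. The radius (worst-
# case fetch distance) then grows like a root of n.

def radius_2d(n):
    # n cells of unit area fill a disc: pi * r^2 = n  =>  r = sqrt(n / pi)
    return math.sqrt(n / math.pi)

def radius_3d(n):
    # n cells of unit volume fill a ball: (4/3) * pi * r^3 = n
    return (3 * n / (4 * math.pi)) ** (1 / 3)

for n in (1_000, 1_000_000):
    print(n, round(radius_2d(n), 1), round(radius_3d(n), 1))
```

Quadrupling n doubles the 2d radius; in 3d you need 8x the cells to double the radius, which is exactly the sqrt(n) versus cube-root(n) difference.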
Real-world chips are largely 2d technologies, but given the sheer number of layers available on modern chips (hundreds of layers for Flash and other tech), maybe spherical / 3d perspectives are a better theory to work from. Transitioning between layers is always fraught with problems (vias in physical chips / boards have inductances and other parasitics, after all), so in practice we're somewhere between 2 dimensions and 3 dimensions (with more and more tricks pushing us towards the 3d side).
------------
In the real world: chips are limited by these parasitic elements (inductances and capacitances), which slow down the electrical signal. But these are primarily related to distance (more inductance the longer a trace is, and more capacitance the more ground plane runs parallel to that trace). So this theory ends up working as a physical model of the electrons moving around the chip to fetch data and communicate.
Let me clarify: This wasn't intended to contradict the root-n limit. Both limits exist.
* The root-n limit is based on physics of a 3d universe.
* A log n limit is based on computation, and would apply even in an infinite-dimensional universe where everything is one unit away.
The root-n is a stronger limiting factor, obviously, but both are helpful to know since they talk about different limits from different premises.
Both disprove O(1). Both are (1) quite theoretical (2) not the limiting factor in most scenarios in practical terms (3) still apply in a limited set of cases.
The log n argument is helpful, for example, for network addressing for the internet and network topologies (and issues like 32 bit versus 128 bit addresses, NAT, etc.), where distances are well above the limits of physics and the root-n argument doesn't apply. It's also helpful because it applies to storage, not just speed: if I have twice as much storage, my pointers all get longer by a bit.
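That last point is easy to check directly: the number of bits a pointer needs grows by exactly one each time the address space doubles (a minimal sketch; `pointer_bits` is my own helper name, not a standard function):

```python
import math

def pointer_bits(n_cells):
    # Bits needed to give each of n_cells a distinct address.
    return max(1, math.ceil(math.log2(n_cells)))

print(pointer_bits(2**32))  # 32
print(pointer_bits(2**33))  # 33 -- twice the storage, one more bit
```

This is the log n storage cost: addresses themselves take O(log n) space, independent of any physical geometry.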
(I'm intentionally omitting square root versus cube root, since that's tangential to the above, and a rabbit hole I don't want to go down right now; perhaps another post)
As it turns out, the information density of a region of space is bounded by its surface area, not its volume. If you try to exceed that bound, you collapse into a black hole. This observation led to the idea of the Holographic principle in physics. Having said that, this is one of those areas that depends on a full understanding of quantum gravity, which we do not yet have, so I wouldn't call it settled science.
In practical terms, if your argument ends with "otherwise, your computer would collapse into a black hole", you are considering technology far beyond anything we are currently capable of, so r^3 information density is still on the table for everyone except theoretical physicists.
The article makes an argument about black holes to get to r^2 instead of r^3. It seems a bit surprising, but then I don’t know that type of physics.
Thinking of the more classical geometrical point of view, I agree that between 2D and 3D seems right. If nothing else, I’d imagine there aren’t many wires going diagonally through the layers, haha. Maybe a cylinder would be the right model.