A: Four dimensional space is just as real as three dimensional space. What we think of as the 'real world' is our brain processing the sensory input it receives. If our brain received the exact same input it receives from the 'real world' but from another source, would it be any less real?
Virtual reality designed to mimic base reality is distinguishable from it because it doesn't live up to the expectations set by our experience of base reality. Dreams, on the other hand, are often indistinguishable from base reality.
If the brain is able to generate meaning out of input that corresponds to four dimensional space, then it is real, or at least as real as anything else.
A: Anything that can be explored/observed independently of the other established dimensions can be counted as an additional dimension. For example, on a flat LCD screen, color can be thought of as a third dimension, since the whole range of colors is available to each pixel on the screen. In the case of time, the same space existed and will continue to exist regardless of which time it is observed in; conversely, given a time, the full range of space is available to observe, so time too can be thought of as an additional dimension to space.
The three dimensions are conceptual; they map conveniently to the experience of what we call the "physical world", which is why they were conceived of in the first place.
For the purposes of our technology, when we talk about 4D or 4D space, we are talking about four spatial dimensions plus time.
A: Stereoscopic vision is one of the mechanisms that enables us to see depth. It is particularly useful in scenarios where both observer and object are still, and it enables greater accuracy of depth perception within a range of 5 to 20 meters. We are focusing on motion parallax as the primary mechanism for spatial depth perception.
This issue is further complicated by the 'hacking' of stereo vision that VR technology employs as one of its fundamental mechanisms.
A: There is nothing inherently two dimensional about the input the eyes supply to the brain. In fact, the input is not even one dimensional, because it comes from individual photoreceptors on the retina: these act like point light sensors, and the brain has to make sense of their signals to convert them into vision.
Granted, it is a leap to expect the brain to understand 4D space. But the brain is already taking comparable leaps, such as producing 3D vision from a collection of point light inputs, with 2D vision as a likely intermediate step. It is reasonable to expect our approach to work because we are trying to build 4D vision through the same process, only with 3D vision as the intermediate step.
A: This is not really a visualization technique; it is brain training technology. Most 4D visualization techniques rely on dimensional analogy and attempt to cram a logically conceptualized 4D object, such as a tesseract, into 3D space. Others display a disparate series of 3D slices of 4D objects in the hope that the brain will somehow stack the slices into a 4D whole. These approaches miss the time series data inherent in continuous rotation, and they fail to consider how the brain managed to visualize/intuit 3D space in the first place. Our technology instead exploits a continuum of images derived from smoothly rotating 4D objects.
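For concreteness, here is a minimal sketch in Python/NumPy of what such a continuum of images could look like; the rotation plane, camera offset, and frame count are illustrative assumptions rather than our actual rendering pipeline:

```python
import numpy as np

def rotation_xw(theta):
    """Rotation by theta in the x-w coordinate plane, one of the six
    coordinate planes a 4D rotation can act in."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c,   0.0, 0.0, -s],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [s,   0.0, 0.0, c],
    ])

def projection_frames(points, n_frames=240, d=1.0, w_offset=3.0):
    """Yield a smooth sequence of 3D perspective projections of a 4D
    point cloud -- the time series that continuous rotation produces."""
    for i in range(n_frames):
        rotated = points @ rotation_xw(2 * np.pi * i / n_frames).T
        rotated[:, 3] += w_offset                   # place the object in front of the camera
        yield d * rotated[:, :3] / rotated[:, 3:4]  # perspective divide by w

# Example: the 16 vertices of a tesseract, rotating smoothly.
tesseract = np.array([[x, y, z, w] for x in (-1, 1) for y in (-1, 1)
                      for z in (-1, 1) for w in (-1, 1)], dtype=float)
for frame in projection_frames(tesseract):
    pass  # hand each 3D frame to the renderer
```

Because consecutive frames differ only infinitesimally, the sequence carries the continuous motion parallax cues that slice-based or analogy-based techniques discard.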
A: Apart from satisfying our curiosity, learning to see four dimensional space could change our understanding of the brain. It could also significantly change the brain itself, as it opens up the possibility of intuitive four dimensional problem solving. And as the technology matures, it opens the avenue of a four dimensional metaverse, which is to say, the possibility of a whole new world.
It is unclear how much of the human 3D vision algorithm is baked into our DNA and how much is learned. We are counting on the learning infrastructure being prebaked, rather than the complete vision algorithm, so that the same learning machinery can be applied to 4D space.
Many of the claims we make about the human vision algorithm, such as the role of stereoscopic vision and motion parallax, or the notion of 2D vision being a precursor to 3D vision, are based on informal research and our own reasoning. It is likely that experiments to test our working understanding have already been conducted or could be designed; this is an area of further exploration for us as time and resources allow.
Mathematically, we treat the 4D point spheres as infinitesimal, which lets us calculate their perspective projections as ellipsoids instead of the more computationally intensive egg shapes that finite spheres would project to.
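As a hedged illustration of why the infinitesimal assumption yields an ellipsoid (the camera convention and function names below are our assumptions, not the project's actual code): for a vanishingly small sphere, the projection map can be replaced by its local linearization, the Jacobian, and the linear image of a sphere is exactly an ellipsoid.

```python
import numpy as np

def project(p, d=1.0):
    """Perspective projection of a 4D point onto a 3D image volume,
    assuming a camera at the origin looking along +w."""
    return d * p[:3] / p[3]

def jacobian(p, d=1.0):
    """3x4 Jacobian of the projection at p: its local linearization."""
    x, y, z, w = p
    return (d / w) * np.array([
        [1.0, 0.0, 0.0, -x / w],
        [0.0, 1.0, 0.0, -y / w],
        [0.0, 0.0, 1.0, -z / w],
    ])

def projected_ellipsoid(center, radius, d=1.0):
    """Approximate the projected image of a small 4D sphere as an ellipsoid.
    The linear image of {center + radius*u : |u| = 1} is an ellipsoid whose
    semi-axes are radius times the singular values of the Jacobian."""
    U, s, _ = np.linalg.svd(jacobian(center, d), full_matrices=False)
    return project(center, d), U, radius * s  # center, axis directions, semi-axis lengths
```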
Our rotation calculation is numerical, so some degree of error accumulates over successive steps.
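A standard mitigation for this kind of drift, sketched here under the assumption that the rotation is accumulated as a 4x4 matrix (not a claim about our actual implementation), is to periodically snap the matrix back onto the nearest proper rotation:

```python
import numpy as np

def snap_to_rotation(R):
    """Project a numerically drifted 4x4 matrix back onto SO(4), the
    nearest proper rotation, via SVD (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(R)
    if np.linalg.det(U @ Vt) < 0:   # avoid landing on a reflection
        U[:, -1] = -U[:, -1]
    return U @ Vt
```

Applied every few hundred frames, a correction like this keeps the accumulated matrix orthonormal at negligible cost.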