However, another way to look at that is that some people are willing to give up some space for a more convenient location. We're not running out of space quite yet.
But, the "I don't understand" is strong in this. it doesn't mean "it can't work" but I don't understand how it avoids the problems.
Maybe the computed foveal coverage area is made big enough to cover the movement? But if you move your eyes suddenly, there's got to be some lag while it computes the missing pixels. So you'd see the same thing as when Netflix ups the coding rate: a crude render becomes clearer, and banded gradients become smooth transitions.
Do you know you have a big hole in your vision in each eye where the optic nerve is? It's about half the size of your fist at arm's length, and 35 degrees to the side. Your fovea happens to be roughly the same size. It's the HD part of your retina, and it's where essentially all of your vision happens. It's the only section of the retina that sees color, for instance. The periphery sees motion and that's about it.
Saccades top out at around 700 degrees per second. At 120 frames per second that's only about 6 degrees in either direction. Compared to the FOV, that's tiny. Overfill it!
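To make that concrete, here's a rough back-of-the-envelope sketch (illustrative numbers, assuming roughly one frame of tracking lag, not any specific headset's spec) of how much extra margin the foveal region needs so a saccade can't outrun it between updates:

    # Rough sketch with illustrative numbers; not from any specific headset spec.
    saccade_speed_deg_per_s = 700.0   # peak saccade velocity
    frame_rate_hz = 120.0             # render / eye-tracking update rate
    tracker_lag_frames = 1            # assumed: one frame of tracking lag

    # Degrees the eye can move before the next foveal-region update lands.
    margin_deg = saccade_speed_deg_per_s / frame_rate_hz * tracker_lag_frames
    print(f"overfill margin: ~{margin_deg:.1f} degrees per frame")  # ~5.8 degrees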
If you were right that would be easily verifiable. Do you have an example of a post dated before 2018? Maybe you're getting tricked by the fact that 2018 was 8 years ago?
How?
I do like the store-and-forward idea, though one thought on that: while it makes sense for DMs, it makes less sense for group chats, which, being real time, give messages a fairly short shelf life. It makes good sense for forum-like content, though. I think so far Bitchat has treated this as a bit out of scope, at least at this stage of development, and it is a reason that Briar is indeed still quite relevant.
Bitchat only just recently even added ad hoc Wi-Fi support, so it's still very early days.
Neither is real time once you introduce delayed communication. I'm not sure I see the distinction.
Actually, I'd argue that unreliable transport breaks the real-time assumption even without introducing delayed communication. Is there immediate feedback if your message can't reach its destination?
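To illustrate the shelf-life point above, here's a minimal sketch of a store-and-forward envelope with a per-message TTL. The field names and values are made up for illustration; this isn't Bitchat's actual wire format.

    # Hypothetical store-and-forward envelope; names and TTLs are illustrative only.
    import time
    from dataclasses import dataclass

    @dataclass
    class StoredMessage:
        sender: str
        recipient: str      # a peer ID for a DM, a channel ID for a group chat
        payload: bytes
        created_at: float   # seconds since the epoch
        ttl_s: float        # how long a relay should keep carrying it

        def expired(self, now: float) -> bool:
            return now - self.created_at > self.ttl_s

    # A real-time group-chat message might only be worth carrying for minutes,
    # while a DM or forum-style post can justify hours or days of storage.
    group_msg = StoredMessage("alice", "#general", b"meet now?", time.time(), ttl_s=300)
    dm_msg = StoredMessage("alice", "bob", b"read when you can", time.time(), ttl_s=86400)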
With current technology, you just can't see the back of a thing by knowing the shape of its front side.
The depth map stored for image processing is image metadata: one depth value per pixel, captured from a single position in space. Note that the device can't actually measure that many depth values, so it measures what it can using LiDAR and focus information and estimates the rest.
A point cloud, on the other hand, is not image data. It isn't necessarily taken from a single position: in theory the device could be moved around to capture additional angles, and the result is a sparse cloud of depth measurements. Also, raw point cloud data doesn't necessarily come tagged with per-point metadata such as color.
I also note that these distinctions start to vanish when dealing with video or using more than one capture device.
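As a rough sketch of how the two representations relate, here's a back-projection from an image-aligned depth map to a point cloud using a generic pinhole camera model. The intrinsics (fx, fy, cx, cy) are made-up example values, not anything a specific device reports:

    # Hedged sketch: one depth per pixel (image metadata) -> unordered XYZ points.
    import numpy as np

    def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
        """depth: HxW array of metric depths, all measured from one camera position.
        Returns an (H*W, 3) array of XYZ points in camera coordinates."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    # The depth map is dense and gridded; the resulting cloud is just points,
    # and carries no color unless you attach it from the image separately.
    depth = np.full((480, 640), 2.0)  # pretend everything is 2 m away
    cloud = depth_map_to_point_cloud(depth, fx=600, fy=600, cx=320, cy=240)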
https://developer.apple.com/documentation/spatial/
Edit: As I'm digging, this seems to be focused on stereoscopic video as opposed to actual point clouds. It appears that applications like Cinematic mode use a monocular depth map, and the LiDAR outputs raw point cloud data.
Static hazards deserve physical signage and/or remediation.