So maybe LIDAR isn't necessary, but if Tesla were actually investing in cameras with a memory bus that could approximate the speed of human vision, I doubt it would be cheaper than LIDAR to get the same result.
https://en.wikipedia.org/wiki/O_visa#Number_of_visas_issued_...
Do you have any data to back up that claim?
E.g. the number of diversity lottery applicants (one of the easiest proxies for how many people express interest in moving to the US) went up from 12 million in 2011 to almost 20 million last year.
Of course the most frustrating part about that is that, as the US and other Western countries start sliding into authoritarianism, people deny it because it doesn’t feel authoritarian to them.
Edit: To clarify, I don’t think life is exactly the same - just that the consequences of authoritarianism are much more insidious than they’re portrayed.
The "happy path" is, the major differences start when you have any kind of a problem, then not having any functional institutions makes the experience _very_ different from the west.
If I couldn't easily cut off the majority of that bot volume I probably would've shut down the app entirely.
I keep Facebook because certain communities and events only happen there.
For some of us, it is plainly better to block only short-form video.
I keep in touch with my friends abroad by emailing them when I think of them, and I get long-form responses about what they are up to, not whatever public-image-filtered stuff they may or may not be posting somewhere.
LLMs hallucinate because they are language models. They are stochastic models of language. They model language, not truth.
If the “truthy” responses are common in their training set for a given prompt, you’re more likely to get something useful as output. It feels like we stumbled into that, said “OK, this is useful as an information retrieval tool,” and now use RL to reinforce that useful behaviour. But still, it’s a (biased) language model.
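As a toy illustration of what I mean (made-up counts, nothing to do with any real model): reduce a "language model" to its essence and it just samples the next token in proportion to how often it followed the context in training data. Truth never enters the loop.

    import random

    # Hypothetical next-token counts from a made-up corpus (illustrative only).
    next_token_counts = {
        "the capital of australia is": {"sydney": 60, "canberra": 35, "melbourne": 5},
    }

    def sample_next(context: str) -> str:
        # Sample proportionally to frequency in the "training data".
        counts = next_token_counts[context]
        tokens, weights = zip(*counts.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # "sydney" wins most of the time because it is common, not because it is true.
    print([sample_next("the capital of australia is") for _ in range(10)])

A real LLM does the same thing at vastly larger scale over a learned distribution, and RL shifts that distribution toward responses raters like, but the objective is still "a probable continuation", not "a true statement".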
I don’t think that’s how humans work. There’s more to it. We need a model of language, but it’s not sufficient to explain our mental mechanisms. We have other ways of thinking than generating language fragments.
Trying to eliminate cases where a stochastic model the size of an LLM gives “undesirable” or “untrue” responses seems rather odd.