No?
You can have two independent random walks. E.g. flip a coin: gain a dollar or lose a dollar. Do that two times in parallel. Your two account balances will change over time, but they won't be correlated.
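A minimal sketch of the idea (hypothetical simulation, not from the thread). One caveat: the raw balance paths of two independent walks can show large "spurious" sample correlation, because the paths wander; the honest check is to correlate the per-step changes (the coin flips), which really are uncorrelated:

```python
import random

random.seed(0)

n = 10_000
# two independent sequences of coin flips: +$1 or -$1 each
flips_a = [random.choice([1, -1]) for _ in range(n)]
flips_b = [random.choice([1, -1]) for _ in range(n)]

# running account balances built from the flips
balance_a, balance_b = [], []
ta = tb = 0
for fa, fb in zip(flips_a, flips_b):
    ta += fa
    tb += fb
    balance_a.append(ta)
    balance_b.append(tb)

def corr(x, y):
    # Pearson correlation coefficient
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

# correlation of the per-step changes: close to 0 for independent flips
print(corr(flips_a, flips_b))
```

Both balances drift around over time, but nothing in one walk tells you anything about the other.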
Please explain.
Given different T_zero configurations of matter and energy, T_current would be different. And there are many pathways that could lead to the same physical configuration (position + energies etc.) with different (Universe minus cake) configurations.
Also we are assuming there are no non-deterministic processes happening at all.
Why? We learn about the past by looking at the present all the time. We also learn about the future by looking at the present.
> Also we are assuming there are no non-deterministic processes happening at all.
Depends on the kind of non-determinism. If there's randomness, you 'just' deal with probability distributions instead. Since you have measurement error, you need to do that anyway.
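One way to picture 'dealing with probability distributions': with a fair ±1 coin-flip process you can't predict the exact balance after n steps, but you can compute its exact distribution (a shifted binomial). A hypothetical sketch:

```python
from math import comb

def walk_distribution(n):
    # balance after n fair +/-1 steps: k tails out of n flips gives
    # balance n - 2k, with probability C(n, k) / 2^n
    return {n - 2 * k: comb(n, k) / 2 ** n for k in range(n + 1)}

dist = walk_distribution(10)
print(dist[0])   # probability the balance is back at 0 after 10 flips
                 # → 0.24609375 (i.e. 252/1024)
```

So prediction still works under this kind of non-determinism; the prediction is just a distribution rather than a single value.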
There are other forms of non-determinism, of course.
https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
UTF-8 is so complicated because it needs to be backwards compatible with ASCII.
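That backwards compatibility is concrete: every valid ASCII byte sequence is already valid UTF-8, byte for byte, while non-ASCII code points become multi-byte sequences in which every byte has the high bit set, so they can never be mistaken for ASCII characters. A quick illustration:

```python
s = "hello"
# pure ASCII text encodes to identical bytes in both encodings
assert s.encode("ascii") == s.encode("utf-8")

# non-ASCII characters become multi-byte sequences, all bytes >= 0x80
print("é".encode("utf-8"))   # b'\xc3\xa9'
print("€".encode("utf-8"))   # b'\xe2\x82\xac'
```

That design is exactly what makes the encoding scheme's bit patterns fiddly.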
Notably Rust did the correct thing by defining multiple slightly incompatible string types for different purposes in the standard library and regularly gets flak for it.
Publishing pirated IP without any monetary gain to yourself also used to be treated more leniently.
Of course, all the rules were changed (both in law and in interpretation in practice) as file sharing became a huge deal about two decades ago.
Details depend on jurisdiction.
It doesn't matter how much jargon and mathematical notation you layer on top of your black-box next-token generator; it will still be unreliable and inconsistent, because fundamentally the output is an approximation of an answer and has no basis in reality.
This is not a limitation you can build around, it's a basic limitation of the underlying models.
Bonus points if you are relying on an LLM for orchestration or agentic state: it's not going to work, just move on to a problem you can actually solve.
And you'd be half-right: humans are extremely unreliable, and it takes a lot of safeguards and automated testing and PR reviews etc to get reliable software out of humans.
(Just to be clear, I agree that current models aren't exactly reliable. But I'm fairly sure with enough resources thrown at the problem, we could get reasonably reliable systems out of them.)