Seems to solve a real problem, one that is growing rapidly both in the old way and in the new way ... if it can overcome the _slop_ in LLM chats and the sheer enormity of code/data ahead. I'm trying to picture how coherence will survive.
With claims/hype/concern floating around that >90% of code will be LLM-generated within 3-6 months, and the insinuation [1] that humans will keep writing roughly as much code as they do now (at least at first) while LLM-generated code grows radically and dilutes the space (as is already happening) ... it seems like DeltaDB being done right is going to be do-or-die for whether coherence remains possible!
[1] https://www.businessinsider.com/anthropic-ceo-ai-90-percent-...
Then I mention she is 10; a few years later she is 12, but now I call her by her name.
I have struggled to get any of the RAG approaches to handle this effectively. It is also only 3 entries, but 2 of them are no longer useful; they are nothing but noise in the system.
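One way to frame what the retrieval layer would need is to key facts so that a newer value supersedes an older one instead of sitting next to it in the index. A minimal sketch of that idea, with a hypothetical `MemoryStore` and `remember`/`recall` helpers (not any real RAG library's API):

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Fact:
        subject: str      # e.g. "daughter"
        attribute: str    # e.g. "age"
        value: str
        updated: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class MemoryStore:
        """Keep only the latest value per (subject, attribute) key, so an
        old contradicting entry ("age = 10") is superseded rather than left
        to compete with the current one ("age = 12") at retrieval time."""

        def __init__(self) -> None:
            self._facts: dict[tuple[str, str], Fact] = {}

        def remember(self, fact: Fact) -> None:
            key = (fact.subject, fact.attribute)
            current = self._facts.get(key)
            if current is None or fact.updated >= current.updated:
                self._facts[key] = fact  # newer fact replaces the stale one

        def recall(self, subject: str) -> list[Fact]:
            return [f for f in self._facts.values() if f.subject == subject]

    store = MemoryStore()
    store.remember(Fact("daughter", "age", "10"))    # mentioned years ago
    store.remember(Fact("daughter", "age", "12"))    # current; supersedes the old entry
    store.remember(Fact("daughter", "name", "..."))  # placeholder, not a real name
    for fact in store.recall("daughter"):
        print(fact.attribute, "=", fact.value)

The hard part, of course, is extracting reliable (subject, attribute) keys from free-form chat in the first place; plain embedding retrieval has no notion of one entry superseding another, which is exactly why the two stale entries keep surfacing as noise.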
Anyone know why these would return “Google asks for a login, sleeping 20 minutes.”?
It's probably possible to do with current systems, though. I believe there are TTS systems that can use context/prompting to change emphasis and other speech qualities, though I'm not sure how reliably.
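For explicit control, the usual route is SSML-style markup, which many TTS engines accept for emphasis and prosody. A minimal sketch that just builds the markup string (the `emphasized_ssml` helper is made up for illustration; whether a given engine honors each tag is a separate question):

    # Build an SSML fragment that marks emphasis and prosody explicitly,
    # instead of hoping the TTS model infers it from context.
    def emphasized_ssml(before: str, stressed: str, after: str) -> str:
        return (
            "<speak>"
            f"{before} "
            f'<emphasis level="strong">{stressed}</emphasis> '
            f'<prosody rate="slow">{after}</prosody>'
            "</speak>"
        )

    ssml = emphasized_ssml("I never said she", "stole", "my money.")
    print(ssml)  # pass this to any SSML-aware TTS engine

How faithfully a given model renders the emphasis still varies, which matches the "not sure how reliably" caveat above.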