AI stays the top story but in a boring way as novelty wears off and models get cheaper and faster (maybe even more embedded). No AGI moment. LLMs start feeling like databases or cloud compute.
No SpaceX or OpenAI IPO moment. Capital markets quietly reward the boring winners instead. The S&P 500 grinds out another double-digit year, mostly because earnings keep up and the alternatives still look worse. Tech discourse stays apocalyptic, but balance sheets don't.
If you mute politics and social media noise, 2026 will probably look like one of those years we later remember as "stable."
Bonus: Bitcoin sees both 50k and 150k.
I love this; we focus way too much on the apparent chaos of daily life. Any piece of news seems like a big wave announcing something bigger, and we spend our time (especially here!) imagining the tsunami to come. Then later, we realize that most events are so unimportant that we forget about them.
Let's say you lose your wallet. What do you do next? You call your bank to block your credit card and make appointments with the relevant agencies to get a new ID, driving license, etc. The cash in your wallet is gone, but everything else isn't. The process is annoying, but at the end of the day you'll be fine.
Now, if all of this is based on a private key and you lose it, you're completely done: you're just not part of society anymore.
No one will ever embrace this, because humans are messy and make mistakes all the time. Crypto and blockchain are so unforgiving of mistakes that for this specific use case they're just not good at all.
For whatever reason they posted their paper on Zenodo, but it belongs on viXra; had they posted it there, you would never have heard of it.
Asking "them"... your perspective is already warped. It's not your fault: all the text we've ever seen before was associated with a human being.
Language models are mathematical, statistical beasts. The beast generally doesn't do well with open-ended questions (known as "zero-shot"). It shines when you give it something to work off of ("one-shot").
Some may complain about the precision of my use of "zero-shot" and "one-shot" here, but I use them merely to contrast open-ended questions with providing some context and work to be done.
Some examples...
- summarize the following
- given this code, break down each part
- give alternatives of this code and trade-offs
- given this error, how to fix or begin troubleshooting
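The contrast above can be sketched as plain prompt construction. The helper names here are hypothetical and no real LLM API is called; the point is only that a one-shot prompt bundles some material with the task, while a zero-shot prompt is a bare question:

```python
# Hypothetical helpers illustrating zero-shot vs. one-shot prompts.
# No model is invoked; these just build the prompt strings.

def zero_shot(question: str) -> str:
    # Open-ended: the model gets nothing to anchor on.
    return question

def one_shot(context: str, task: str) -> str:
    # Anchored: the model is given something to work off of.
    return f"{context}\n\n{task}"

prompt = one_shot(
    "def add(a, b): return a + b",
    "Given this code, break down each part.",
)
print(prompt)
```

In practice the "context" slot is whatever you paste in: code, an error message, or the text to summarize.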
I mainly use them for technical things I can then verify myself.
While extremely useful, I consider them extremely dangerous. They provide a false sense of "knowing things"/"learning"/"productivity". It's too easy to begin to rely on them as a crutch.
When learning a new programming language, I go back to writing by hand and compiling in my head. I need that mechanical muscle memory, just as when learning calculus, physics, chemistry, etc.
I'll correct my take then: because of this, the loneliness epidemic will surge like never before. This might pave the way to some reaction in public opinion, but real, concrete action will not happen in 2026; I would expect it around 2028 or even 2030.