Essentially, it claims that modern humans and our ancestors, starting with Homo habilis, were primarily carnivores for 2 million years. The hypothesis is that we moved back to an omnivorous diet starting around 85,000 years ago, after killing off the megafauna.
Wikipedia article: https://en.wikipedia.org/wiki/Near-Earth_supernova
Kurzgesagt video on the impact on Earth of supernovas at varying distances: https://www.youtube.com/watch?v=q4DF3j4saCE
As the Kurzgesagt video points out, a supernova within 100 light-years would make space travel very difficult for humans and machines due to the immense amount of radiation persisting for many years.
Still, I think the primary value is in expanding our understanding of science and the nature of the universe and our location within it.
I think everyone has a gut feel for something along those lines, but those numbers are even starker than I would've imagined. Granted, many (most?) people trying to vibe code full apps don't know much about building software, so they're bound to struggle to get it to do what they want. But this quote is about companies and code they've actually put into production. Don't get me wrong, I've vibe coded a bunch of utilities that I now use daily, but 95% is way higher than I would've expected.
A huge fraction of people at my work use LLMs, but only a small fraction use the LLM the company provides. Almost everyone is using a personal license.
This video from a few days ago analyzes the issue: https://www.youtube.com/watch?v=2tNp2vsxEzk
Regardless of climate change issues, the anti-renewable policy doesn't seem to make any sense from an economic, growth, or national security standpoint. It is even contrary to the administration's _stated_ anti-regulation, pro-capitalism stance.
The story there is very different than what's in the article.
Some details from the report:
- 50% of the budgets (the ones that failed) went to marketing and sales
- the authors still estimate that AI could offer automation equal to $2.3 trillion in labor value, affecting 39 million positions
- the top barriers cited are unwillingness to adopt new tools and lack of executive sponsorship
Lots of people here are jumping to the conclusion that AI does not work. I don't think that's what the report says.
The failure is not AI, but that a lot of existing employees are not adopting the tools or at least not adopting the tools provided by their company. The "Shadow AI economy" they discuss is a real issue: People are just using their personal subscriptions to LLMs rather than internal company offerings. My university made an enterprise version of ChatGPT available to all students, faculty, and staff so that it can be used with data that should not be used with cloud-based LLMs, but it lacks a lot of features and has many limitations compared to, for example, GPT-5. So, adoption and retention of users of that system is relatively low, which is almost surely due to its limitations compared to cloud-based options. Most use-cases don't necessarily involve data that would be illegal to use with a cloud-based system.
I am not going to trust it without actually going over the paper.
Even then, if it isn't peer-reviewed and properly vetted, I still wouldn't necessarily trust it. The MIT study on AI's impact on scientific discovery that made a big splash a year ago was fraudulent even though it was peer reviewed (so I'd really like to know about the veracity of the data): https://www.ndtv.com/science/mit-retracts-popular-study-clai...
It’s all fun and games until the bean counters start asking for evidence of return on investment. GenAI folks better buckle up. Bumps ahead. The smart folks are already quietly preparing for a shift to ride the next hype wave up while others ride this train to the trough’s bottom.
Cue a bunch of increasingly desperate puff PR trying to show this stuff returns value.