Readit News
chriskanan commented on The AI vibe shift is upon us   cnn.com/2025/08/22/busine... · Posted by u/lelele
Etheryte · 4 days ago
> Researchers at MIT published a report showing that 95% of the generative AI programs launched by companies failed to do the main thing they were intended for

I think everyone had a gut feel for something along those lines, but those numbers are even starker than I would've imagined. Granted, many (most?) people trying to vibe code full apps don't know much about building software, so they're bound to struggle to get it to do what they want. But this quote is about companies and code they've actually put into production. Don't get me wrong, I've vibe coded a bunch of utilities that I now use daily, but 95% is way higher than I would've expected.

chriskanan · 4 days ago
Read the paper. The media coverage is leaving out a lot of context. The paper points out problems like leadership failures on those efforts, lack of employee buy-in (potentially because employees are using their personal LLMs), etc.

A huge fraction of people at my workplace use LLMs, but only a small fraction use the LLM the company provides. Almost everyone is using a personal license.

chriskanan commented on The US Department of Agriculture Bans Support for Renewables   insideclimatenews.org/new... · Posted by u/mooreds
chriskanan · 5 days ago
This is so shortsighted. The US needs a huge increase in its electricity generation capacity, and nowadays renewables, especially solar, are the cheapest option.

This video from a few days ago analyzes the issue: https://www.youtube.com/watch?v=2tNp2vsxEzk

Regardless of climate change, the anti-renewable policy doesn't seem to make any sense from an economic, growth, or national security standpoint. It is even contrary to the administration's _stated_ anti-regulation, pro-capitalism stance.

chriskanan commented on 95% of Companies See 'Zero Return' on $30B Generative AI Spend   thedailyadda.com/95-of-co... · Posted by u/speckx
resiros · 7 days ago
Here is the report: https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...

The story there is very different than what's in the article.

Some details from it:

- 50% of the budgets (the ones that failed) went to marketing and sales

- the authors still project that AI could automate work worth $2.3 trillion in labor value, affecting 39 million positions

- the top barriers cited are unwillingness to adopt new tools and lack of executive sponsorship

Lots of people here are jumping to the conclusion that AI does not work. I don't think that's what the report says.

chriskanan · 6 days ago
That's my assessment of the report as well. Really, some news truly is "fake": outlets push a narrative they think will drive clicks and eyeballs, and the media is severely misrepresenting what is in this report.

The failure is not AI, but that a lot of existing employees are not adopting the tools, or at least not the tools provided by their company. The "Shadow AI economy" the report discusses is a real issue: people are just using their personal LLM subscriptions rather than internal company offerings. My university made an enterprise version of ChatGPT available to all students, faculty, and staff so it can be used with data that should not be sent to cloud-based LLMs, but it lacks a lot of features and has many limitations compared to, for example, GPT-5. So adoption and retention for that system are relatively low, which is almost surely due to those limitations. Most use cases don't necessarily involve data that would be illegal to use with a cloud-based system anyway.

chriskanan commented on 95% of Companies See 'Zero Return' on $30B Generative AI Spend   thedailyadda.com/95-of-co... · Posted by u/speckx
chriskanan · 7 days ago
Where is the actual paper that makes these claims? I'm seeing this story repeated all over today, but the link doesn't actually seem to go to the study.

I am not going to trust it without actually going over the paper.

Even then, if it isn't peer-reviewed and properly vetted, I still wouldn't necessarily trust it. The MIT study on AI's impact on scientific discovery that made a big splash a year ago was fraudulent even though it was peer reviewed (so I'd really like to know about the veracity of the data): https://www.ndtv.com/science/mit-retracts-popular-study-clai...

chriskanan commented on 95% of Companies See 'Zero Return' on $30B Generative AI Spend   thedailyadda.com/95-of-co... · Posted by u/speckx
JCM9 · 7 days ago
We are entering the “Trough of disillusionment.” These hype cycles are very predictable. GPT-5 being panned as a disappointment after endless hype may go down as GenAI’s “jump the shark” moment.

It’s all fun and games until the bean counters start asking for evidence of return on investment. GenAI folks better buckle up. Bumps ahead. The smart folks are already quietly preparing for a shift to ride the next hype wave up while others ride this train to the trough’s bottom.

Cue a bunch of increasingly desperate puff PR trying to show this stuff returns value.

chriskanan · 7 days ago
Sam Altman way oversold GPT-5's capabilities, in that it doesn't feel like a big leap from a user's perspective. However, the idea of a trainable dynamic router that lets them run inference with a lot less compute (in aggregate) seems like a major win to me; just not necessarily a win for the user (more a win for the electric grid and for making OpenAI's models cost competitive).
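
To make the routing idea concrete, here is a minimal sketch of prompt-difficulty routing. It is purely illustrative: the difficulty scorer, threshold, and model names are hypothetical stand-ins, not OpenAI's actual router.

    def estimate_difficulty(prompt: str) -> float:
        """Stand-in for a small trained classifier that scores how much
        reasoning a prompt likely needs (0 = trivial, 1 = hard)."""
        hard_markers = ("prove", "derive", "step by step", "debug", "optimize")
        score = 0.2 + 0.3 * sum(m in prompt.lower() for m in hard_markers)
        return min(score, 1.0)

    def route(prompt: str, threshold: float = 0.5) -> str:
        """Send easy prompts to a cheap model and hard ones to an expensive one."""
        if estimate_difficulty(prompt) >= threshold:
            return "large-reasoning-model"   # hypothetical model name
        return "small-fast-model"            # hypothetical model name

    if __name__ == "__main__":
        for p in ["What's the capital of France?",
                  "Prove that the sum of two even numbers is even, step by step."]:
            print(f"{route(p):>21}  <-  {p}")

The point is just that if most traffic is easy and gets answered by the small model, aggregate compute per query drops, even if users never notice a capability difference.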
chriskanan commented on ICE's Supercharged Facial Recognition App of 200M Images   404media.co/inside-ices-s... · Posted by u/joker99
chriskanan · a month ago
If they are going to do this, they really ought to corroborate the face recognition with fingerprints. Many people have unrelated doppelgangers, so even a near-perfect AI algorithm will produce false matches: https://twinstrangers.net/
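
For a sense of scale, here is some back-of-the-envelope arithmetic; the error rates below are made-up illustrative assumptions, not measured figures for any real system.

    # Back-of-the-envelope false-match arithmetic. All rates are
    # illustrative assumptions, not measurements of any real system.

    gallery_size = 200_000_000           # images searched against
    face_false_match_rate = 1e-6         # assumed per-comparison false-match rate

    # Expected number of innocent people who "match" a single probe face:
    face_only = gallery_size * face_false_match_rate
    print(f"Face only:          ~{face_only:.0f} false matches per search")

    # Requiring an independent fingerprint match multiplies the error rates,
    # assuming the two modalities' mistakes are roughly independent.
    fingerprint_false_match_rate = 1e-5  # assumed
    combined = face_only * fingerprint_false_match_rate
    print(f"Face + fingerprint: ~{combined:.3f} false matches per search")

Even a one-in-a-million face false-match rate yields hundreds of spurious hits against a 200M-image gallery; corroborating with an independent fingerprint check drives that expected count toward zero.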
chriskanan commented on 'Positive review only': Researchers hide AI prompts in papers   asia.nikkei.com/Business/... · Posted by u/ohjeez
chriskanan · 2 months ago
Is there a list of the papers that were flagged as doing this?

A lot of people are reviewing with LLMs, despite it being banned. I don't entirely blame people nowadays... the person inclined to review using an LLM without double-checking everything is probably someone who would have given a generic, terrible review anyway.

A lot of conferences now require that one or even all authors who submit to the conference also review for it, even though they may be very unqualified. I've been told that I must review for conferences where collaborators are submitting a paper I helped with, even though I really don't know much about the field. I also have to be pretty picky about which venues I review for nowadays, just because my time is way too limited.

Conference reviewing has always been rife with problems; the majority of reviewers wait until the last day, which means they aren't going to do a very good job evaluating 5-10 papers.

chriskanan commented on Nvidia won, we all lost   blog.sebin-nyshkim.net/po... · Posted by u/todsacerdoti
alanbernstein · 2 months ago
Humanoid robotics
chriskanan · 2 months ago
This will be huge in the next decade, and it will be powered by AI. There are so many competitors right now that it is hard to know who the winners will be. Nvidia is already angling for humanoid robotics with its investments.
chriskanan commented on Sega mistakenly reveals sales numbers of popular games   gematsu.com/2025/06/sega-... · Posted by u/kelt
nottorp · 2 months ago
Yes but their Steam page says:

"Clair Obscur: Expedition 33 is a ground-breaking turn-based RPG with unique real-time mechanics, making battles more immersive and addictive than ever."

Turns out the real time mechanics aren't unique. Not sure I want "addictive" battles or "addictive" gameplay either. Isn't that the realm of free to play?

chriskanan · 2 months ago
The game is only about 30 hours long and has no microtransactions. It is addictive until you beat it. Easily game of the year.

u/chriskanan

Karma: 5227 · Cake day: February 9, 2014
About
I have over 20 years of experience conducting research in artificial intelligence. I'm a tenured professor at the University of Rochester. My lab focuses on research towards AGI, especially continual learning and multi-modal foundation models. I have a strong background in cognitive science, which inspires many of our AI algorithms.

For three years, I led AI R&D at Paige, a startup aimed at revolutionizing the detection and treatment of cancer. This resulted in the first FDA-cleared AI system for helping pathologists to detect cancer. During my time at Paige, the company grew from a handful of employees to almost 200. I currently serve on Paige's Scientific Advisory Board.

Previously, I was a professor at the Rochester Institute of Technology (RIT). For four years I was visiting faculty at Cornell Tech, where I taught a popular course on deep learning. Before becoming a professor, I worked at NASA JPL. I received my PhD from UC San Diego.

Web: www.chriskanan.com
