Honestly, the more I look at it, the worse it gets.
There's a world of difference between the technical capabilities of a technology and people actually putting it to use.
See contradicting data here: https://mikelovesrobots.substack.com/p/wheres-the-shovelware...
Companies like Lovable are reporting millions of projects that are basically slop apps. They're just not released as real products by independent companies.
The data is misleading - it's like saying high-quality phone cameras had no impact on the video industry. Just look at how much of network TV is filmed with iPhone cameras: at best you might find some ads and some minor projects using them, but nothing big. That completely ignores that YouTube and TikTok are built off of people's phone cameras, and their revenue rivals major networks.
I am sorry, I just don't want to have this conversation about AI and its impact for the millionth time, because it just devolves into semantics, word games, etc. It's so tiring.
I don't understand this argument. The same applies to books: all books teach you what has come before, yet nobody says "You can't make anything truly radical with books." Radical things are built by people after reading those books. Why can't people build radical things after learning from, or being assisted by, LLMs?
Because that's not how this is being marketed.
I agree with you completely - the best use case I've found for LLMs (and I say this as somebody who generates a lot of code with them) is as a research tool: an instantaneous and powerful substitute for the long-gone heyday of communities like mailing lists or Stack Overflow, where the people who knew - the experts and maintainers - would seemingly answer within a few hours on how something works.
But that's not enough for all the money being fed into this monster. The AI leadership is hell-bent on building a modern-day Tower of Babel (in more ways than one), where there is no thinking or learning - one click and you have an app! Now you can fire your entire software team, and then ask ChatGPT what to do next when this breaks the economy.
It becomes obsolete in literally weeks, and it also doesn't work 80% of the time. Why write an MCP server for custom tasks when I don't know whether the LLM is going to reliably call it?
My rule for AI has been steadfast for months (years?) now. I write documentation for myself (templates, checklists, etc.) - myself, not the AI, because otherwise I spend more time guiding the AI than thinking about the problem. Then I give the AI one chance to one-shot the task in seconds; if it can't, I either review my documentation or just do it manually.
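That rule is simple enough to sketch as code. This is a hypothetical illustration, not any real tool: `one_shot` and `do_manually` are placeholder callables standing in for "ask the AI once" and "do it by hand".

```python
def run_task(task, docs, one_shot, do_manually):
    """Give the AI exactly one attempt, seeded with hand-written docs.

    one_shot(docs, task) returns a result, or None on failure.
    On failure we fall back to manual work and flag the docs for review.
    """
    result = one_shot(docs, task)
    if result is not None:
        return result, "ai"          # the one-shot worked, take it as-is
    return do_manually(task), "manual"  # otherwise: review docs / do it yourself
```

The point of the single attempt is that any back-and-forth with the model costs more than just thinking through the problem directly.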
Zig... is surprisingly widely used given how rough the state of the language is. It makes me think that if it ever reaches v1.0, it has a very good chance of being at least a "Kotlin", probably an "Elixir"/"Haskell", and a decent enough shot at a "TypeScript".
Yea, I remember the time when trillion dollar companies were betting the house on Juicero /s