Readit News
joegibbs commented on The AI Doomers Are Getting Doomier   theatlantic.com/technolog... · Posted by u/joegibbs
joegibbs · 3 days ago
I disagree on the AI 2027 thing: I don't think we've seen any evidence of the self-improvement, or the rate of increase in abilities, that it suggests. If anything the rate of improvement has slowed since GPT-3, and without some entirely new architecture I don't see any takeoff scenario. The only way I can imagine AI killing everyone is if somebody decided, for some inscrutable reason, to put an LLM in charge of nuclear missile launches.

The psychosis is worrying, but I think it's an artefact of a new technology that people don't yet have an accurate mental model of (similar to, but worse than, the supernatural powers once attributed to radio, television, etc). Hopefully AI companies will provide more safeguards against it, but even without them I think people will eventually understand the limitations and realise that it's not in love with them, doesn't have a genius new theory of physics, and makes things up.

joegibbs commented on Mark Zuckerberg freezes AI hiring amid bubble fears   telegraph.co.uk/business/... · Posted by u/pera
aaronblohowiak · 3 days ago
We are either limited by compute, available training data, or algorithms. You seem to believe we are limited by compute. I've seen other people argue that we are limited by training data. It is my totally inexpert belief that we are substantially limited by algorithms at this point.
joegibbs · 3 days ago
If the LLM is multimodal, would more video and images improve the quality of the textual output? There's a ton of that, and it's always easy to get more.
joegibbs commented on Europe's Free-Speech Problem   theatlantic.com/ideas/arc... · Posted by u/JumpCrisscross
johndoe0815 · 4 days ago
Stupid Republican propaganda, better ignore it.
joegibbs · 4 days ago
In the Atlantic?
joegibbs commented on New Stealth Model in Cline: "Sonic"   cline.bot/blog/new-stealt... · Posted by u/denysvitali
joegibbs · 4 days ago
It's in Cursor as well. I've been using it today and I can't say I'm that impressed with it. It's definitely not at Sonnet 4 or GPT-5-high level. It's fast for sure, but it seems to pay no heed at all to instructions: I tell it "no fallbacks" and it makes a file full of fallbacks, and it looked at an error and claimed to have fixed it about 10 times before I gave up and gave it to Claude, which fixed it on the first try.
joegibbs commented on AGENTS.md – Open format for guiding coding agents   agents.md/... · Posted by u/ghuntley
blinkymach12 · 5 days ago
We're in a transition phase today where agents need special guidance to understand a codebase that goes beyond what humans need. Before long, I don't think they will. I think we should focus on making our own project documentation comprehensive (e.g. the contents of this AGENTS.md are appropriate to live somewhere in our documentation), but we should always write for humans.

The LLM's whole shtick is that it can read and comprehend our writing, so let's architect for it at that level.

joegibbs · 5 days ago
I think they'll always need special guidance for things like business logic. They'll never know exactly what you're building, why, or what the end goal of the project is without you telling them. Architecture is also a matter of human preference: if you have it mapped out in your head where things should go and how they should be done, the changes will be easier for you to read, and reading them will be the real bottleneck.
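For a concrete sense of what that guidance looks like, here is a hypothetical AGENTS.md fragment. The project details are invented, but it shows the kind of business logic and architectural preference an agent could never infer from the code alone:

```markdown
# AGENTS.md (hypothetical example)

## What this project is
An invoicing service for EU freelancers. "Approved" invoices are
immutable for legal reasons; never generate code that edits one in
place, always create a correcting invoice instead.

## Architectural preferences
- All database access goes through `app/repositories/`; no raw SQL
  in request handlers.
- Prefer small, single-purpose modules; a human reads every diff.
```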
joegibbs commented on What could have been   coppolaemilio.com/entries... · Posted by u/coppolaemilio
joegibbs · 6 days ago
You could have a society where there's one single spreadsheet package made by a team of 20 people, a few operating systems, a new set of 50 video games every year (with graphics that are good enough but nothing groundbreaking so they'll run on old hardware) created according to quota by state-run enterprises, Soviet style.

This would be very efficient in avoiding duplication; the entire industry would probably only need a few thousand developers. It would also save material resources and energy. But I think that even if the software these companies produced was entirely reliable and bug-free, it would still be massively outcompeted by the flashy, trend-chasing free-market companies that produce a ton of duplicated output (Monday.com, Trello, Notion, Asana, Basecamp: all of these do basically the same thing).

It's the same with AI, or any other trend like tablets, the internet, smartphones - people wanted these and companies put their money into jumping aboard. If ChatGPT really was entirely useless and had <10,000 users then it would be business as usual - but execs can see how massive the demand is. Of course plenty are going to mess it up and probably go broke, but sometimes jumping on trends is the right move if you want a sustainable business in the future. Sears and Blockbuster could've perfected their traditional business models and customer experience without getting on the internet, and they would have still gone broke as customers moved there.

joegibbs commented on OpenAI Progress   progress.openai.com... · Posted by u/vinhnx
sefrost · 8 days ago
I like using LLMs, and I have found they are incredibly useful for writing and reviewing code at work.

However, when I want sources for things, I often find they link to pages that don't fully (or at all) back up the claims made. Sometimes other websites do, but the sources given to me by the LLM often don't. They might be about the same topic that I'm discussing, but they don't seem to always validate the claims.

If they could crack that problem it would be a major major win for me.

joegibbs · 7 days ago
It would be difficult to do with a raw model, but a two-step method in a chat interface would work: first the model suggests the URLs, then a tool call fetches them and returns the actual text of the pages, and the final response is based on that.
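A minimal sketch of that two-step flow, assuming you have some chat-completion client available; `ask_llm` here is a hypothetical stand-in for it, and only the control flow is the point:

```python
# Sketch: ground an answer in fetched page text instead of the
# model's memory of its sources.
import requests

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder; swap in your chat-completion client.
    raise NotImplementedError

def answer_with_sources(question: str) -> str:
    # Step 1: ask the model only for candidate source URLs.
    urls = ask_llm(
        f"List up to 3 URLs likely to contain evidence for: {question}\n"
        "Return one URL per line, nothing else."
    ).splitlines()

    # Step 2: fetch the actual pages so citations come from real text.
    pages = []
    for url in urls:
        try:
            resp = requests.get(url.strip(), timeout=10)
            pages.append(f"SOURCE: {url}\n{resp.text[:5000]}")
        except requests.RequestException:
            continue  # skip unreachable pages rather than invent them

    # Final response must be based only on the fetched text.
    return ask_llm(
        "Answer the question using ONLY these sources, citing each claim:\n\n"
        + "\n\n".join(pages)
        + f"\n\nQuestion: {question}"
    )
```

The key design choice is that the model never gets to cite a page it hasn't just read; anything it can't support from the fetched text simply isn't available to the final step.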
joegibbs commented on The new science of “emergent misalignment”   quantamagazine.org/the-ai... · Posted by u/nsoonhui
eszed · 9 days ago
If I'm following correctly, then it would move its own goalposts to whatever else in its training data is considered most taboo / evil.
joegibbs · 9 days ago
Yeah exactly, it's that the text the model is trained on considers poorly-written code to be on the same axis as other negative things, like supporting Hitler or killing people.

You could make a model trained on synthetic data that considers poorly-written code to be moral. If you then finetuned it to produce good code, it would become a Nazi as well.

u/joegibbs

Karma: 1526 · Cake day: March 10, 2020
About
Software developer

https://jgibbs.dev joe@jgibbs.dev
