Readit News logoReadit News
qnleigh commented on Making games in Go: 3 months without LLMs vs. 3 days with LLMs   marianogappa.github.io/so... · Posted by u/maloga
deadbabe · 3 days ago
A developer who can build a game by hand in 24 hours could probably build and publish something very polished and professional on Steam within 3 days using LLMs, which leads to some kind of software Fermi paradox: where are all the games??
qnleigh · 3 days ago
This is a great analogy. There's a wider Fermi paradox here regarding business productivity. Where's the 10x economic output?
qnleigh commented on The electric fence stopped working years ago   soonly.com/electric-fence... · Posted by u/stroz
dataflow · 12 days ago
> The message doesn't have any warnings on it like "oh I know it's been a while" or "you might not remember me". I write to everyone as if we are best buddies who just had lunch last week. People I've known since the age of 4, to people I've known for four days. [...] If you just pretend you are best buddies, people play along and they end up quite comfortable quite quickly.

This can't be serious? My messages to my 'best buddies' are like "lunch?" or "https://some-link-here" or whatever. You're genuinely suggesting writing messages like those to people I met a few years ago who likely barely remember my face? Zero set-up, just "lunch?" or a random link with no context like I'd text my best buddy? Do you do that? I can't imagine this is a serious suggestion -- surely you're massively exaggerating -- so what am I missing?

qnleigh · 9 days ago
I suspect not literally this, but the point is one doesn't need to treat reaching out to an old friend with overtures and formalities. Probably you need a bit more context if you haven't seen them in years, but you don't have to make a big deal out of it (e.g. the 'oh I know it's been a while...').
qnleigh commented on The new science of “emergent misalignment”   quantamagazine.org/the-ai... · Posted by u/nsoonhui
qnleigh · 12 days ago
If fine-tuning for alignment is so fragile, I really don't understand how we will prevent extremely dangerous model behavior even a few years from now. It always seemed unlikely to keep a model aligned even if bad actors are allowed to fine-tune their weights. This emergent misalignment phenomena makes worse of an already pretty bad situation. Was there ever a plan for stopping open-weight models from e.g. teaching people how to make nerve agents? Is there any chance we can prevent this kind of thing from happening?

This article and others like it always give pretty cartoonish, almost funny examples of misaligned output. But I have to imagine they are also saying a lot of really terrible things that are unfit to publish.

qnleigh commented on The new science of “emergent misalignment”   quantamagazine.org/the-ai... · Posted by u/nsoonhui
p1necone · 12 days ago
This kinda makes sense if you think about it in a very abstract, naive way.

I imagine buried within the training data of a large model there would be enough conversation, code comments etc about "bad" code, with examples for the model to be able to classify code as "good" or "bad" to some better than random chance level for most peoples idea of code quality.

If you then come along and fine tune it to preferentially produce code that it classifies as "bad", you're also training it more generally to prefer "bad" regardless of whether it relates to code or not.

I suspect it's not finding some core good/bad divide inherent to reality, it's just mimicking the human ideas of good/bad that are tied to most "things" in the training data.

qnleigh · 12 days ago
Though it's not obvious to me if you get this association from raw training, or if some of this 'emergent misalignment' is actually a result of prior fine-tuning for safety. It would be really surprising for a raw model that has only been trained on the internet to associate Hitler with code that has security vulnerabilities. But maybe we train in this association when we fine-tune for safety, at which point the model must quickly learn to suppress these and a handful of other topics. Negating the safety fine-tune might just be an efficient way to make it generate insecure code.

Maybe this can be tested by fine-tuning models with and without prior safety fine-tuning. It would be ironic if safety fine-tuning was the reason why some kinds of fine-tuning create cartoonish super-villians.

qnleigh commented on The new science of “emergent misalignment”   quantamagazine.org/the-ai... · Posted by u/nsoonhui
craigus · 12 days ago
"New science" phooey.

Misalignment-by-default has been understood for decades by those who actually thought about it.

S. Omohundro, 2008: "Abstract. One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted."

https://selfawaresystems.com/wp-content/uploads/2008/01/ai_d...

E. Yudkowsky, 2009: "Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth."

https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-f...

qnleigh · 12 days ago
The article here is about a specific type of misalignment wherein the model starts exhibiting a wide range of undesired behaviors after being fine-tuned to exhibit a specific one. They are calling this 'emergent misalignment.' It's an empirical science about a specific AI paradigm (LLMs), which didn't exist in 2008. I guess this is just semantics, but to me it seems fair to call this a new science, even if it is a subfield of the broader topic of alignment that these papers pioneered theoretically.

But semantics phooey. It's interesting to read these abstracts and compare the alignment concerns they had in 2008 to where we are now. The sentence following your quote of the first paper reads "We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves." This was a credible concern 17 years ago, and maybe it will be a primary concern in the future. But it doesn't really apply to LLMs in a very interesting way, which is that we somehow managed to get machines that exhibit intelligence without being particularly goal-oriented. I'm not sure many people anticipated this.

qnleigh commented on GitHub is no longer independent at Microsoft after CEO resignation   theverge.com/news/757461/... · Posted by u/Handy-Man
theptip · 16 days ago
Yeah exactly. The MacBook Pro is by far the most capable consumer device for local LLM.

A beefed up NPU could provide a big edge here.

More speculatively, Apple is also one of the few companies positioned to market an ASIC for a specific transformer architecture which they could use for their Siri replacement.

(Google has on-device inference too but their business model depends on them not being privacy-focused and their GTM with Android precludes the tight coordination between OS and hardware that would be required to push SOTA models into hardware. )

qnleigh · 15 days ago
I see. It'll be interesting to see how much on-device models take off for consumers, when off-device models will be so much more capable. In the past, the average consumer has typically been happy to trade privacy for better products, but maybe it will be different for llms.
qnleigh commented on GitHub is no longer independent at Microsoft after CEO resignation   theverge.com/news/757461/... · Posted by u/Handy-Man
theptip · 16 days ago
> Google has world-class research teams that have produced unbelievable models, without any plan at all on how those make it into their products beyond forcing a chat window into Google Drive.

NotebookLM is a genuinely novel AI-first product.

YouTube gaining an “ask a question about this video” button, this is a perfect example of how to sprinkle AI on an existing product.

Extremely slow, but the obvious incremental addition of Gemini to Docs is another example.

I think folks sleep on Google around here. They are slow but they have so many compelling iterative AI usecases that even a BigTech org can manage it eventually.

Apple and Microsoft are rightly getting panned, Apple in particular is inexcusable (but I think they will have a unique offering when they finally execute on the blindingly obvious strategic play that they are naturally positioned for).

qnleigh · 16 days ago
> when they finally execute on the blindingly obvious strategic play that they are naturally positioned for

What's that? It's not obvious to me, anyway.

qnleigh commented on Gemini 2.5 Deep Think   blog.google/products/gemi... · Posted by u/meetpateltech
qnleigh · 20 days ago
Can someone with an ultra subscription ask it how many rs are in the word strawberry? And report back how long it takes to answer?
qnleigh commented on How ChatGPT spoiled my semester (2024)   benborgers.com/chatgpt-se... · Posted by u/edent
_Algernon_ · 20 days ago
The solution is obvious:

Go back to pen-and-paper examinations at a location where students are watched. Do the same for assignments and projects.

qnleigh · 20 days ago
Something like this does seem like the only viable option. I wonder how much it will actually happen.
qnleigh commented on Dropbox Passwords discontinuation   help.dropbox.com/en-us/in... · Posted by u/h1fra
qnleigh · a month ago
3 months doesn't seem like enough advanced warning before deleting essential data like this. What if someone is hiking the Pacific Coast Trail right now? Or is the recovering from a serious medical event? Or just doesn't use the app and the email they signed up with that often?

Is it really that expensive for them to maintain minimal access for a year? This is not a rhetorical question.

u/qnleigh

KarmaCake day594March 12, 2023View Original