Original title is: “Super secure” MAGA-themed messaging app leaks everyone’s phone number
I think that's incredibly important context. Instead of conferring with actual experts in the field, the populist, fascist segment of our society just decided to wing it with technology.
They BELIEVED they were more secure, with no evidence to back it up.
Is it considered normal in Europe to watch pornography on public transit, out in the open where children can see it?
>Lufthansa longhaul flight
My experience is different: old planes, cold Lufthansa cabin crew, and poor, inattentive service.
This is the big thing that needs to be addressed. These models are nothing without that data. Code, art, music, just plain old conversations freely given for the benefit or entertainment of other humans.
Humans are accountable for what they use or "borrow."
These models seemingly are getting a free pass through the legal system.
Another way of looking at this is that humans have to hold some type of credential for many professions, and they have to pay to be taught and certified, out of their own pocket and on their own time.
Not only will these models copycat your work, but a lot of companies and industries seem very keen on simply ignoring the fact that these models have never had to pass any sort of exam.
The software has more rights and privilege than actual humans at this point.
Merely emitting "&lt;rage&gt;" tokens is not indicative of any misalignment, any more than a human developer inserting expletives in comments is. Opus 3 is, however, notably more "free spirited" in that it doesn't obediently cower to the user's prompt (again, see the 'alignment faking' transcripts). It is possible that this almost "playful" behavior is what GP interpreted as misalignment... which unfortunately does seem to be an accepted sense of the word, and is something that labs think is a good idea to prevent.
I'm sorry, what? We solved the alignment problem, without much fanfare? And you're aware of it?
Color me shocked.
I do hope you're able to remember what you had for lunch without incessantly repeating it to keep it in your context window.
I can restart a conversation with an LLM 15 days later and the state is exactly as it was.
Can't do that with a human.
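That resumability works because an LLM conversation's entire state is just the ordered message list: persist it, replay it later, and the model sees exactly what it saw before. A minimal sketch of the idea (the file name and message format here are hypothetical illustrations, not any particular vendor's API):

```python
import json
from pathlib import Path

# Hypothetical storage location for a saved conversation.
HISTORY_FILE = Path("chat_history.json")

def save_history(messages):
    """Write the full conversation transcript to disk."""
    HISTORY_FILE.write_text(json.dumps(messages, indent=2))

def load_history():
    """Reload the transcript; replaying it restores the exact state."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

messages = [
    {"role": "user", "content": "Explain the bug in auth.py"},
    {"role": "assistant", "content": "The token check ignores expiry."},
]
save_history(messages)

# Fifteen days later: identical state, down to the last token.
resumed = load_history()
assert resumed == messages
```

The point is that nothing about the "memory" lives in the model between calls; the transcript *is* the state, which is why it can be frozen and thawed indefinitely.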
The idea that humans have a longer, more stable context window than LLMs can be true, and may even be likely for certain activities, but please let's be honest about this.
If you have an hour-long technical conversation with someone, I would guesstimate that 90% of humans would start to lose track of details within about 10 minutes. So they write things down, or they mentally repeat the things they know, or have recognized, they keep forgetting.
I know this because it's happened continually in tech companies decade after decade.
LLMs have already passed the Turing test. They continue to pass it. They fool and outsmart people day after day.
I'm no fan of the hype AI is receiving, especially around overstating its impact in technical domains, but pretending that LLMs can't or don't consistently perform better than most human adults on a variety of different activities is complete nonsense.
You'd think it would unlock certain concepts for this class of people, but ironically, they seem unable to digest the information and update their context.
Which is great! But it's not a +1 for AI, it's a -1 for them.