Readit News
ranyume commented on The Going Dark initiative or ProtectEU is a Chat Control 3.0 attempt   mastodon.online/@mullvadn... · Posted by u/janandonly
TiredOfLife · 3 days ago
> It likely encourages pedophilia

It's like saying that pictures of gay people encourages homosexuality

ranyume · 3 days ago
I like this parallel. No joke intended.
ranyume commented on The Going Dark initiative or ProtectEU is a Chat Control 3.0 attempt   mastodon.online/@mullvadn... · Posted by u/janandonly
elestor · 3 days ago
I'd say it makes a lot of sense. It likely encourages pedophilia, meaning people who consume such things will often adopt a wrong idea of what's okay. It's similar to the way regular porn affects the brain. I understand where you're coming from, though, and I get your point, but I feel that if someone consumes a lot of media of a certain type, they begin to 'embody' that media.

Just don't goon.

ranyume · 3 days ago
> will often adopt a wrong idea of what's okay

I wonder who gets to decide what's okay.

ranyume commented on The Going Dark initiative or ProtectEU is a Chat Control 3.0 attempt   mastodon.online/@mullvadn... · Posted by u/janandonly
forty · 3 days ago
If those countries have laws against making and consuming pedophile pictures and drawings (or "artwork"), I feel it's perfectly fine for people making and consuming those to be raided, even if they disagree with the law (if people could opt out of every law they don't agree with, I'm not sure what the point of making laws would be).

What is not ok is to watch the activities of everyone who is not a pedophile in order to catch those who are; otherwise, when does it stop? Should they have cameras in every room of your home just in case?

ranyume · 3 days ago
I read this as "It's perfectly fine to persecute people for their art". And boy, you're on the wrong side of history.
ranyume commented on A proposed amendment to ban under 16s in the UK from common online services   decoded.legal/blog/2025/1... · Posted by u/ibobev
everdrive · 6 days ago
I think the problem you lay out is interesting. Back when the Arab Spring was brand new, the narrative was something like "Twitter has finally given power to the people, and once they had power they overthrew their evil dictatorships."

A decade or so later, my personal opinion would be that the narrative reads something like this: "access to social media increases populism, extremism, and social unrest. It's a risk to any and all forms of government. The Arab dictatorships failed first because they were the most brittle."

To the extent that you agree with my claim, it would mean that even a beneficent government would have something to fear from social media. As with the Arab Spring, whatever comes after the revolution is often worse than the very-imperfect government which came before.

ranyume · 6 days ago
> To the extent that you agree with my claim, it would mean that even a beneficent government would have something to fear from social media

I'd say that governments are beneficial to the extent that they adapt to the people they're governing. It's clear that social media poses a grave danger to current governance, but that doesn't mean all forms of governance are equally threatened by it.

My belief is that current governance is simply obsolete and dying because of the pace of cultural and technical innovation. Governments will need to change in order to stay beneficial to people, and that change means adapting to people instead of making people adapt to the existing way of governing.

ranyume commented on A proposed amendment to ban under 16s in the UK from common online services   decoded.legal/blog/2025/1... · Posted by u/ibobev
omnicognate · 6 days ago
I'd rather my government control the narrative my children are exposed to than Andrew Tate.

Edit: To expand, this is not just a flippant remark. People ignore Andrew Tate because he's so obviously, cartoonishly awful, but they are not the audience. It's aimed at children, and from personal experience its effect on a large number of them worldwide is profound, to the extent that I worry about the long term, generational effect.

Children will be exposed to narratives one way or another, and wanting to (re)assert some control over that isn't necessarily just an authoritarian power play.

ranyume · 6 days ago
The targets of control are not children. From an intelligence point of view, children don't need to be controlled. A government's attention is not infinite, and between the worry of losing power and the worry about children's wellbeing, one of the two wins out, and it's not the children. If children's wellbeing were the priority, you would see very different measures being taken.
ranyume commented on A proposed amendment to ban under 16s in the UK from common online services   decoded.legal/blog/2025/1... · Posted by u/ibobev
ranyume · 6 days ago
This is my humble opinion, but such a coordinated action from governments around the world at this particular time has a certain smell. It smells like they're worried about losing control of the governmental narrative. It could be about foreign powers, but tech nowadays also allows regular people to contest the government's power, so they become a target as well. AI, the internet, anonymity/cryptography, and a probable war with China and/or Russia all exacerbate this worry.

In short, governments want to retain control and prepare for the future, and to retain control they need to control the flow of information and hold a monopoly on it. To achieve this they need an intelligence strategy that puts common people at the center (spying on them) and puts restrictions in place. But they can't say this out loud because in the current era it's problematic, so the children become a good excuse.

This is particularly clear in governments that don't care about political correctness or aren't competent enough to disguise their intentions. One example is the Argentine government, which in recent years passed laws to surveil online activity and directed its intelligence agency to spy on "anyone that puts sovereign narrative and cohesion at risk".

ranyume commented on Kimi K2 1T model runs on 2 512GB M3 Ultras   twitter.com/awnihannun/st... · Posted by u/jeudesprits
stevenhuang · 10 days ago
You have an outmoded understanding of how LLMs work (flawed in ways that are "not even wrong"), a poor ontological understanding of what reasoning even is, and too certain that your answers to open questions are the right ones.
ranyume · 10 days ago
My understanding is based on first-hand experimentation trying to make LLMs work on the impossible task of tasteful simulation of an adventure game.
ranyume commented on Kimi K2 1T model runs on 2 512GB M3 Ultras   twitter.com/awnihannun/st... · Posted by u/jeudesprits
moffkalast · 10 days ago
That's kind of nonsense, since if I ask you what's five times six, you don't do the math in your head, you spit out the value from the multiplication table you memorized in primary school. Doing the math on paper is tool use, which models can easily do too if you give them the option, writing ad hoc Python scripts to get exact results for the math you ask of them. There is definitely a lot of generalization going on beyond just pattern matching, otherwise practically nothing of what everyone does with LLMs daily would ever work. Although it's true that the patterns drive an extremely strong bias.

Arguably, if you're grading LLM output, which by your definition cannot be novel, then it doesn't need to be graded with something that can. The gist of this grading approach is just giving them two examples and asking which is better, so it's completely arbitrary, but the grades will be somewhat consistent, and running it with different LLM judges and averaging the results should help at least a little. Human judges are completely inconsistent.
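For illustration only, the pairwise-judging-and-averaging approach described above might look roughly like this in Python. This is a sketch under assumptions: call_llm(model, prompt) is a hypothetical helper that sends a prompt to a judge model and returns its text reply, and the prompt wording is made up; it is not how lmsys, EQ-bench, or any particular benchmark actually implements its judging.

    # Rough sketch of pairwise LLM-as-judge grading, averaged over several judges.
    # call_llm(model, prompt) is a hypothetical helper, not a real library API.
    import random
    from statistics import mean

    def judge_pair(judge_model, question, answer_a, answer_b):
        """Ask one judge model which answer is better; return 1.0 if A wins, else 0.0."""
        swapped = random.random() < 0.5  # randomize presentation order to reduce position bias
        first, second = (answer_b, answer_a) if swapped else (answer_a, answer_b)
        prompt = (
            f"Question: {question}\n\n"
            f"Answer 1:\n{first}\n\nAnswer 2:\n{second}\n\n"
            "Which answer is better? Reply with exactly '1' or '2'."
        )
        verdict = call_llm(judge_model, prompt).strip()  # hypothetical helper
        a_won = (verdict == "2") if swapped else (verdict == "1")
        return 1.0 if a_won else 0.0

    def ensemble_preference(judge_models, question, answer_a, answer_b):
        """Average the verdicts of several judge models, as suggested above."""
        return mean(judge_pair(j, question, answer_a, answer_b) for j in judge_models)

Averaging across judges only smooths out per-judge noise and ordering quirks; whether the resulting number measures what the prompt asks for is exactly the point disputed in the reply below.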

ranyume · 10 days ago
> if I ask you what's five times six, you don't do the math in your head, you spit out the value of the multiplication table you memorized in primary school

Memorization is one ability people have, but it's not the only one. In the case of LLMs, it's the only ability they have.

Moreover, let's make this clear: LLMs do not memorize the same way people do, they don't memorize the same concepts people do, and they don't memorize the same content people do. This is why LLMs "have hallucinations", "don't follow instructions", "are censored", and "make common-sense mistakes" (these are the words people use to characterize LLMs).

> nothing of what everyone does with LLMs daily would ever work

It "works" in the sense that the LLM's output serves a purpose designated by the people. LLMs "work" for certain tasks and don't "work" for others. "Working" doesn't require reasoning from an LLM, any tool can "work" well for certain tasks when used by the people.

> averaging the results should help at least a little

Averaging the LLM grading just exacerbates the illusion of LLM reasoning. It only confuses people. Would you ask your hammer to grade how well scissors cut paper? You could do that, and the hammer would say they get the job done but don't cut well, because the hammer needs to smash the paper instead of cutting it; your hammer is just talking in a different language. It's the same here. The LLM's output doesn't necessarily measure what the instructions in the prompt say.

> Human judges are completely inconsistent.

Humans can be inconsistent, but how well the LLM adapts to humans is itself a metric of success.

ranyume commented on Kimi K2 1T model runs on 2 512GB M3 Ultras   twitter.com/awnihannun/st... · Posted by u/jeudesprits
moffkalast · 11 days ago
Well, if lmsys showed anything, it's that human judges are measurably worse. Then you have your run-of-the-mill multiple-choice tests that grade models on unrealistic single-token outputs. What does that leave us with?
ranyume · 11 days ago
> What does that leave us with?

At the start, with no benchmark. Because LLMs can't reason at this time, because we don't have a reliable way of grading LLM reasoning, and because people stubbornly keep thinking LLMs are actually reasoning, we're back at the start. When you ask an LLM "2 + 2 = ", it doesn't add the numbers together; it just looks up one of the stories it memorized and returns what happens next. Probably in some such stories 2 + 2 = fish.

Similarly, when you're asking an LLM to grade another LLM, it's just looking up what happens next in its stories, not even following instructions. "Following" instructions requires thinking, hence it's not even following instructions. But you can say you're commanding the LLM, or programming the LLM, so you have full responsibility for what the LLM produces, and the LLM has no authorship. Put another way, the LLM cannot make something you yourself can't... at least at this point, when it can't reason.

ranyume commented on Kimi K2 1T model runs on 2 512GB M3 Ultras   twitter.com/awnihannun/st... · Posted by u/jeudesprits
Alifatisk · 11 days ago
> As a writer of very-short-form stuff like emails, it's probably the best model available right now.

This is exactly my feeling with Kimi K2; it's unique in this regard. The only one that comes close is Gemini 3 Pro; otherwise, no other model has been this good at helping out with communication.

It has such a good grasp of "emotional intelligence" (?): reading signals in messages, understanding intentions, and taking human factors, social norms, and trends into consideration when helping formulate a message.

I don't exactly know what Moonshot did during training, but they succeeded in giving this model a unique trait. This area deserves more highlight, in my opinion.

I saw someone linking to EQ-bench, which is about emotional intelligence in LLMs; looking at it, Kimi is #1, so this kind of confirms my feeling.

Link: https://eqbench.com

ranyume · 11 days ago
Careful with that benchmark. It's LLMs grading other LLMs.

u/ranyume

Karma: 200 · Cake day: February 15, 2024
About
I'm not sure what you're here for. I'm just another internet mouth.