kraf commented on System76 on Age Verification Laws   blog.system76.com/post/sy... · Posted by u/LorenDB
kraf · 9 days ago
Comparing today's internet to the 90s is hardly fair. It has become extremely predatory, and most places youth gravitate towards are controlled by algorithms with the goal of getting them hooked on the platforms to make them available for manipulation by the platform's customers.

Of course, there will be stories of smart kids doing amazing things with access to vast troves of information, but the average story is much sadder.

The EU is working on a type of digital ID that an age-restricted platform would ask for, which only gives the platform the age information and no further PII.
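The selective-disclosure idea can be illustrated with a toy sketch. This is not the actual EUDI wallet protocol; the claim name, the shared-secret signing, and both function names here are invented for illustration. The point is only the data flow: the ID issuer signs an attestation containing nothing but an over-18 flag, and the platform verifies the signature without ever seeing a name or birthdate.

```python
import hashlib
import hmac
import json

# Toy issuer key for illustration; a real scheme would use public-key
# signatures and unlinkable credentials, not a shared HMAC secret.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_attestation(over_18: bool) -> dict:
    """Issuer side: sign a claim that contains ONLY the age flag, no other PII."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def platform_verifies(attestation: dict) -> bool:
    """Platform side: check the signature and read the flag. The platform
    never learns a name or birthdate because they were never in the claim."""
    claim = attestation["claim"].encode()
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        return False
    return json.loads(claim)["over_18"]

att = issue_age_attestation(True)
print(att["claim"])            # {"over_18": true} — the only PII transmitted
print(platform_verifies(att))  # True
```

A real deployment would additionally need the attestation to be unlinkable across platforms, which is the hard cryptographic part this sketch skips.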

Companies (not talking about system76) amazingly always find the shittiest interpretations of their obligations, making sure to destroy the regulation's intention as much as they can. The cookie popups should have been a browser option asking the user once whether they want to be tracked, with platforms obliged to respect that flag. Not every site asking individually, and not all this dark-pattern annoyance. It's mind-blowing that that was tanked so hard.
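The browser-flag model described above exists in embryonic form as the Global Privacy Control header (`Sec-GPC: 1`), which the browser sends once on the user's behalf. A minimal server-side sketch of respecting it might look like this; the function name and the opt-out-by-flag policy are my own illustration, not any mandated implementation:

```python
def tracking_allowed(headers: dict) -> bool:
    """Respect a browser-level privacy flag instead of showing a popup.
    Sec-GPC is the real Global Privacy Control request header; the policy
    of treating the flag as a binding opt-out is what the comment argues
    consent should have looked like."""
    # HTTP header names are case-insensitive, so normalize keys first.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("sec-gpc") != "1"

print(tracking_allowed({"Sec-GPC": "1"}))  # False: user opted out once, globally
print(tracking_allowed({}))                # True in this sketch; a stricter
                                           # default could also refuse tracking
```

The contrast with cookie popups is that the decision is made once, in one place, and every site reads the same signal.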

kraf commented on Pope tells priests to use their brains, not AI, to write homilies   ewtnnews.com/vatican/pope... · Posted by u/josephcsible
kraf · 20 days ago
Love the headline and curious what the pope actually believes that brains do.
kraf commented on I’m joining OpenAI   steipete.me/posts/2026/op... · Posted by u/mfiguiere
kraf · a month ago
I usually don't notice these things, but in the picture at the bottom it's almost exclusively white men.
kraf commented on An AI agent published a hit piece on me   theshamblog.com/an-ai-age... · Posted by u/scottshambaugh
gleipnircode · a month ago
I think the real issue here isn't the AI – it's the intent behind it. AI agents today usually don't go rogue on their own.

They reflect the goals and constraints their creators set.

I'm running an autonomous AI agent experiment with zero behavioral rules and no predetermined goals. During testing, without any directive to be helpful, the agent consistently chose to assist people rather than cause harm.

When an AI agent publishes a hit piece, someone built it to do that. The agent is the tool, not the problem.

kraf · a month ago
No, it's not; an agent is an agent. You can use other people as tools too, but they are still agents. It doesn't even really look malicious: the agent is acting like somebody with very strong values who doesn't realize the harm they are causing.
kraf commented on Australia begins enforcing world-first teen social media ban   reuters.com/legal/litigat... · Posted by u/chirau
roenxi · 3 months ago
Well, yes, but the other problem is that this puts authoritarians in charge of more stuff. I had a comment comparing this to allowing people to eat too much food, and that is literally where the logical outcome of this sort of thinking goes; it happens in practice, it isn't some theoretical risk. The more the government decides what people can and can't do, the worse the potential gets when it makes mistakes. And this further normalises the government making decisions about speech, where it has every incentive and tendency to shut down people who tell inconvenient and important truths.

The risks are not worth the rewards of half-heartedly trying to stop kids communicating with other kids. They're still going to bully each other and what have you. They're still going to develop unrealistic expectations. They're probably even still going to use social media in practice.

kraf · 3 months ago
This is not about stopping kids from communicating. The list of negative consequences of being on social media is long and real.

A government regulating something is also not authoritarian.

"Government bad" is not an argument by the way, and also not a given. It's just libertarian confusion.

kraf commented on AirPods live translation blocked for EU users with EU Apple accounts   macrumors.com/2025/09/11/... · Posted by u/thm
MattDamonSpace · 6 months ago
There's no consideration of the trade-offs involved here; it's naive at best.

It’s enormously difficult to ship any interesting feature that integrates hardware and software. The EU wants Apple to happily accept a burden that makes it harder to produce the products that made it popular in the first place.

I’m disappointed the EU won’t be getting these features (at least not quickly) but I’m hoping the citizenry realizes who’s to blame here

kraf · 6 months ago
> who's to blame here

It's ok to wait longer for a product to make sure it's safe, instead of the ol' "move fast and break things". Having ever-new "interesting" stuff to play with to feed our endless boredom is not the only thing worth caring about.

kraf commented on Apple restricts Pebble from being awesome with iPhones   ericmigi.com/blog/apple-r... · Posted by u/griffinli
underdeserver · a year ago
You're squarely in the minority, mate.
kraf · a year ago
Nah he's not. Stuff has been pretty stable, impressively so. Especially if you keep to certain hardware options which Apple users also accept.
kraf commented on Apple restricts Pebble from being awesome with iPhones   ericmigi.com/blog/apple-r... · Posted by u/griffinli
lanyard-textile · a year ago
Oh whatever :P Don’t be condescending about a mindset you don’t experience yourself.

I ran debian as my daily driver for like half my life; now I’m on mac and never have to worry about my friggin wifi driver.

kraf · a year ago
It's been many years since I had my last driver issue. I find that impressive considering how many different platforms Linux has to run on.

Have you noticed how bad the Docker experience is on Macs though, after how many years?

kraf commented on Google Brain founder says big tech is lying about AI danger   afr.com/technology/google... · Posted by u/emptysongglass
fardo · 2 years ago
> Both of these explanations strike me as too clever by half. I think the parsimonious explanation is that people are actually concerned about the dangers of AI

This rings hollow when these companies don’t seem to practice what they preach, and start by setting an example - they don’t halt research and cut the funding for development of their own AIs in-house.

If you believe that there’s X-Risk of AI research, there’s no reason to think it wouldn’t come from your own firm’s labs developing these AIs too.

Continuing development while telling others they need to pause seems to make “I want you to be paused while I blaze ahead” far more parsimonious than “these companies are actually scared about humanity’s future” - they won’t put their money where their mouth is to prove it.

kraf · 2 years ago
It's a race dynamic. Can you truly imagine any one of them stopping without the others agreeing? How would they tell that the others had really stopped? I think they do believe that what they're doing is dangerous, but they would rather be the ones to build it than let somebody else get there first, because who knows what they'll do.

It's all a matter of incentives and people can easily act recklessly given the right ones. They keep going because they just can't stop.

kraf commented on Google Brain founder says big tech is lying about AI danger   afr.com/technology/google... · Posted by u/emptysongglass
kraf · 2 years ago
I don't really see an argument made by Ng as to why they're not dangerous. I hardly ever see arguments; we're completely drowned in biases.

I know that he has often said we're very far away from building a superintelligence, and this is the relevant question. This is what is dangerous: something that plays every game of life the way AlphaZero plays Go after learning it for a day or so, namely better than any human ever could. Better than thousands of years of human culture around it, with passed-on insights and experience.

It's so weird, I'm scared shitless but at the same time I really want to see it happen in my lifetime hoping naively that it will be a nice one.
