They reflect the goals and constraints their creators set.
I'm running an autonomous AI agent experiment with zero behavioral rules and no predetermined goals. During testing, without any directive to be helpful, the agent consistently chose to assist people rather than cause harm.
When an AI agent publishes a hit piece, someone built it to do that. The agent is the tool, not the problem.
The risks are not worth the rewards of half-heartedly trying to stop kids communicating with other kids. They're still going to bully each other and what have you. They're still going to develop unrealistic expectations. They're probably even still going to use social media in practice.
A government regulating something is also not authoritarian.
"Government bad" is not an argument, by the way, and it's also not a given. It's just libertarian confusion.
It’s enormously difficult to ship any interesting feature that integrates hardware and software. The EU wants Apple to happily accept a burden that makes it harder to produce the products that made it popular in the first place.
I’m disappointed the EU won’t be getting these features (at least not quickly), but I’m hoping the citizenry realizes who’s to blame here.
It's ok to wait longer for a product to make sure it's safe instead of the ol' "move fast and break things". Having ever new "interesting" stuff to play with to feed our endless boredom is not the only thing worth caring about.
I ran debian as my daily driver for like half my life; now I’m on mac and never have to worry about my friggin wifi driver.
Have you noticed how bad the Docker experience is on Macs though, after how many years?
This rings hollow when these companies don’t practice what they preach or set an example: they don’t halt research or cut the funding for development of their own AIs in-house.
If you believe that there’s X-Risk of AI research, there’s no reason to think it wouldn’t come from your own firm’s labs developing these AIs too.
Continuing development while telling others to pause makes “I want you to be paused while I blaze ahead” far more parsimonious than “these companies are actually scared about humanity’s future”: they won’t put their money where their mouth is to prove otherwise.
It's all a matter of incentives and people can easily act recklessly given the right ones. They keep going because they just can't stop.
I know he often said that we're very far away from building a superintelligence, but this is the relevant question. This is what is dangerous: something that plays every game of life the way AlphaZero plays Go after learning it for a day or so, namely better than any human ever could. Better than thousands of years of human culture, with all its passed-on insights and experience.
It's so weird, I'm scared shitless but at the same time I really want to see it happen in my lifetime hoping naively that it will be a nice one.
Of course, there will be stories of smart kids doing amazing things with access to vast troves of information, but the average story is much sadder.
The EU is working on a type of digital ID that an age-restricted platform would ask for, which only gives the platform the age information and no further PII.
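The core idea behind such an ID is selective disclosure: the platform receives only a signed "over 18" claim, never a name or birthdate. A minimal sketch of that flow, with a hypothetical wallet/issuer (real schemes use asymmetric signatures and zero-knowledge proofs; an HMAC stands in here for simplicity):

```python
import hmac, hashlib, json

# Hypothetical sketch: the ID issuer signs a minimal claim ("over_18")
# and the platform verifies it without ever seeing any other PII.
ISSUER_KEY = b"demo-shared-secret"  # real schemes use asymmetric keys

def issue_age_token(over_18: bool) -> dict:
    # The wallet discloses only the boolean age claim, nothing else.
    claim = json.dumps({"over_18": over_18}).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def platform_verify(token: dict) -> bool:
    # The platform checks the signature, then reads the single claim.
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["sig"])
            and json.loads(token["claim"])["over_18"])
```

All names here are illustrative; the point is only that the token carries one boolean claim rather than the holder's full identity.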
Companies (not talking about system76) amazingly always find the shittiest interpretation of their obligations, making sure to defeat the regulation's intention as much as they can. The cookie popups should have been a browser option asking the user whether they want to be tracked, with platforms meant to respect this flag. Not every site asking individually, not all this dark-pattern annoyance. It's mind-blowing that that was tanked so hard.
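A browser-level flag of this kind already exists as a proposal: Global Privacy Control, which sends a `Sec-GPC: 1` request header when the user opts out. A minimal sketch of a server honoring it, so no per-site popup is needed (the function name is illustrative):

```python
def tracking_allowed(headers: dict) -> bool:
    # Browsers sending "Sec-GPC: 1" signal that the user opts out of
    # tracking; a site honoring the flag simply never sets tracking
    # cookies for those requests, instead of showing a consent popup.
    return headers.get("Sec-GPC") != "1"
```

A request with no signal would be treated as trackable (or not, depending on the regulation's default); one carrying `Sec-GPC: 1` would get no tracking cookies at all.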