I ran Debian as my daily driver for about half my life; now I’m on a Mac and never have to worry about my friggin wifi driver.
Have you noticed how bad the Docker experience is on Macs though, after how many years?
This rings hollow when these companies don’t practice what they preach and start by setting an example: they don’t halt research or cut funding for development of their own in-house AIs.
If you believe there’s X-risk from AI research, there’s no reason to think it wouldn’t come from your own firm’s labs as well.
Continuing development while telling others they need to pause makes “I want you paused while I blaze ahead” far more parsimonious than “these companies are genuinely scared for humanity’s future”; they won’t put their money where their mouth is to prove it.
It's all a matter of incentives and people can easily act recklessly given the right ones. They keep going because they just can't stop.
I know he often said that we’re very far from building a superintelligence, and that is the relevant question. What is dangerous is something that plays every game of life the way AlphaZero plays Go after learning it for a day or so, namely better than any human ever could; better than thousands of years of human culture, with all its passed-on insights and experience.
It's so weird, I'm scared shitless but at the same time I really want to see it happen in my lifetime hoping naively that it will be a nice one.
Could you imagine if MS had convinced the govt back in the day to require a special license to build an operating system (thus blocking Linux and everything open)?
It’s essentially what’s happening now, except it’s OpenAI instead of MS, and AI instead of Linux.
AI is the new Linux, they know it, and are trying desperately to stop it from happening
While I generally share the view that _research_ should be unencumbered while deployment should be regulated, I do take issue with your view that safety only matters once tools are ready for "widespread use". A tool made available in a limited beta can still be harmful, misleading, or too easily support irresponsible or malicious purposes, and in some cases the harms could be _enabled_ by the fact that the release is limited.
For example, suppose next month you developed a model that could produce extremely high-quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately used it to make, e.g., highly realistic revenge porn. Because almost no one is aware of the stunning new quality of outputs your model produces, most people don't believe the victim when they assert that the footage is fake.
I would suggest that the first non-private (i.e. non-employee) release of a tool should make it subject to regulation. If I open a restaurant, on my first night I'm expected to be in compliance with basic health and safety regulations, no matter how few customers I have. If I design and sell a widget that does X, even for the first one I sell, my understanding is there's a concept of an implied requirement that my widgets must actually be "fit for purpose" for X; I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?
And let me repeat that I am not looking for old-school OS thread support. Almost all programming languages have that. It's nowhere near good enough.
> I challenge you to provide a modern framework that doesn't provide types
Ruby on Rails
If I change a signature, tests fail. If I pass junk data, tests fail. Types are like invisible, always-running tests, and people forget this.
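That point can be sketched in a few lines of TypeScript (the `greeting` helper and `User` type are hypothetical, purely for illustration): the compiler rejects a mismatched call at build time, before any test suite runs.

```typescript
// Hypothetical type and helper, for illustration only.
interface User {
  email: string;
  name: string;
}

function greeting(user: User): string {
  return `Hello, ${user.name} <${user.email}>`;
}

// Compiles fine:
const msg = greeting({ name: "Ada", email: "ada@example.com" });

// Each of these fails at compile time, with no tests involved:
// greeting("junk");            // junk data: wrong type entirely
// greeting({ name: "Ada" });   // breaks if the signature gains a field
```

If `User` later grows a required field, every call site that no longer matches turns red immediately, which is the "invisible live tests" effect being described.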
As in the other recent discussion, yeah, you can live without tests and you can live in JS-land. Whether it's worth it depends on you. TS and traditional testing let me ship updates without even opening node or the browser.
Types are not the same as tests at all; tests are much better at giving you a glimpse of what the code actually does.
For clarity, I'll define a static typing opponent as somebody who believes that the return on investment for static typing is negative.
There are tons of articles saying that screen time is bad, social media is bad, etc., and you can always find some expert to support those claims. But in terms of actual scientific research, actual experiments, the fear of social media and Internet addiction in general has no more basis today than addiction to TV, video games, or even music had in the past. And yes, "experts" in the past went after music as well, claiming that it corrupted young kids' minds.
* Maryanne Wolf
* Cal Newport
* Jaron Lanier
* Studies conducted by Facebook and brought to public attention by Frances Haugen
Not all of them write about kids specifically; some address humans in general. There are also plenty of psychotherapists specializing in internet addiction; you can look through some of their web pages to see what they are dealing with. I'm from a small European country, and I'd recommend looking into local professionals.
It's true that there is a lot of hysteria and speculation, and the grand experiment is still running. I admit I'm also biased by what I see in the grown-ups around me and by my own experiences. The apps and games our kids use are designed to make them addicted. And wasn't it reported in Irresistible by Adam Alter that managers at companies like Facebook heavily restrict their own kids' screen time?
I'm not advocating for zero screen time. There are official recommendations from institutes in developed countries, and I lean on those. I believe in limits on time and content, and in media education. Kids can also learn to deal with something dangerous gradually; they don't have to crash a thousand times.