> OpenAI has argued that they literally can’t build what they are building without an exemption from copyright that would crush artists, musicians, writers, and other creators.
This is my favorite thing about what Silicon Valley has become. "But we're disruptors! We can't do what we want to do unless we break all of your laws!"
Climate Change: Energy != climate change. Even if it did entirely, using it as an argument without including the benefits is, to put it politely (I don't want to), very silly. AGI (if possible) will solve climate.
Economic Resources: Again, talking about the costs without the benefits is extremely disingenuous, especially since this author believes in ASI.
Human Intellectual Capital: Barring some exceptions, IP laws are extremely outdated and won't last for long regardless of whether Sam's venture succeeds or not. Also, blacksmiths making horseshoes...
Negative Externalities: This is where having some liberty to go outside the HN guidelines would be extremely appropriate. Progress comes with risks, but the alternative is the end of civilisation. Probably just the West and not humanity, since China, the Middle East, India, etc. have not been ...
Every major problem facing humanity can be solved given enough time, intelligence, and creativity. If we had a technology that can accelerate these things, it would be illogical and immoral not to bet big on it.
When I say everything, I mean everything: cancer, genetic diseases, famine, drought, climate change, poverty, prosperity, etc. This is not some utopian religious claim, either. Everything I've listed is solvable; we have the equations, and we've already made so much progress that we know it is just a matter of x, whether x is time, intelligence, or creativity, or all of the above.
To sum it up, the author and his kin are clearly driven by a quasi-collectivist Luddite ideology/moral framework. It is such a shame that HN's guidelines won't allow me to describe the author appropriately.
Not at all. This isn't a zero-sum game, and there's no poetic justice in letting fools and their money be easily parted. Even if the 7T never yielded a single "AI" chip, we'd all be much worse off for that failure.
For example, the global cost of insecure software is 2T per year. With regard to lost opportunity cost, 7T would fund a ground-up secure open microprocessor and operating system (provably free from back doors) to completely replace Android, iOS, Windows, etc., with a New Deal-style offering to the entire global technology market. It would pay for itself in 4 years.
Not so "glamorous", I know, but let's start with the problems we have, not imaginary ones we haven't even created yet.
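For what it's worth, the payback claim above is just arithmetic on the comment's own assumptions (that the 2T/year loss figure is accurate and that the new stack would recover all of it):

```python
# Back-of-the-envelope payback period, taking the comment's numbers
# at face value: a one-time 7T outlay vs. an assumed 2T/year cost of
# insecure software that the secure stack would fully eliminate.

investment = 7.0   # trillions, one-time
annual_loss = 2.0  # trillions per year, assumed fully recovered

payback_years = investment / annual_loss
print(payback_years)  # 3.5, i.e. "pays for itself in 4 years" rounded up
```

The 4-year figure only holds if the entire 2T/year loss is actually eliminated, which is the strongest assumption in the comment.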
Not sure I buy this argument. Typically when society aims for moon-shots, the intermediate discoveries improve the quality of life, sometimes dramatically. It's an unknown unknown. Only exploration can turn it into a known unknown. Maybe the AI chip fails, but we discover a more potent source of energy that's cleaner than petroleum.
I personally don't think Mr. Altman is capable of such moon-shots, but am supportive of moon-shots in general.
I guess his top argument was "climate change". If that's the go-to, we need to see what effect on climate change this 7T would have if invested elsewhere.
Some of the possibilities have a return on investment. Others don't. The 7T assumes a positive ROI, and while fundamental research is crucial to reversing the damage later, I'm not spilling any secrets by saying that we will need 100 dead ends and 1,000 incremental improvements before deployment of the next great tech.
Does OpenAI even have an edge anymore? I keep hearing about models being competitive with GPT-4 and nothing about their fabled "Q*" model, so I'm starting to think they've run their course.
https://twitter.com/realsergevar/status/1753903045962781129
https://officechai.com/startups/sam-altman-is-unsure-if-ilya...
We know approximately how much the transition costs per ton of carbon not released. https://www2.deloitte.com/content/dam/insights/us/articles/6... is a figure pulled from their consultancy, but this is one of many.
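Since the linked figure is truncated, here is a sketch of the comparison being proposed: divide the 7T by an assumed abatement cost per ton to see how much CO2 the same money could avoid. The per-ton dollar values below are illustrative placeholders, not the Deloitte estimates.

```python
# Illustrative only: abatement costs vary widely by sector and source;
# these per-ton costs are assumed placeholders, not the Deloitte figures.
budget_trillions = 7.0

for cost_per_ton in (25, 50, 100):  # USD per ton of CO2, assumed
    gigatons = budget_trillions * 1e12 / cost_per_ton / 1e9
    print(cost_per_ton, gigatons)   # e.g. at 50 USD/ton -> 140 Gt avoided
```

Whatever the true per-ton number is, this is the calculation needed to weigh the 7T against climate spending.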
So is that investment?