The short- to medium-term concerns mostly come down to human problems. AI and robotics have a multiplicative effect like other technologies, but the problems still originate in the primate nature of humans.
The long term issue that many people don't seem willing to mention out loud is that we will eventually make humanity obsolete and robots will literally take control.
The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture. The first challenge is for people to see that as a worthy goal.
Sometimes I think most of our problems come down to not being on the same page. And I wonder if somehow in the future we gradually become a tiny bit like The Borg.
So maybe we are headed towards a "meta-system transition" where we have some kind of direct links between groups of AI and humans that combine to form a more intelligent and effective organism in some way.
I guess I just came up with a Black Mirror episode concept.
I've said it before, and I'll say it again: there is absolutely zero evidence indicating that we are any closer to AGI than James Watt was to realizing nuclear fusion.
There are extremely well researched worms with only a few hundred neurons which we cannot yet simulate with anything resembling accuracy. How can that statement be true, if LLMs are close to delivering superhuman intelligence?
Why do we have to simulate real neurons to achieve intelligence? We don't simulate a bird's bones and muscles to achieve flight.
LLMs are a huge step forward. Sure, they might not be the thing to ultimately deliver superhuman intelligence. But it's unfair to say that we're not any closer at all.
Very long-time AI skeptic here (since 1980), but I believe you're wrong. It turns out we don't need to figure out exactly how a biological brain works. It's just nature's solution for implementing matrix operations (a greatly simplified take, obviously). Having hit on approaches that produce similar results to biological systems, we now don't need to focus so much on emulating them.
I do suspect however that there's something to the biological experience of "being the life support system" for the brain, that significantly affects the training process. It might be challenging to simulate that.
The human brain needs 80 billion neurons to hold five to seven numbers in memory and add and multiply them. How could a MOS integrated circuit from 1965, with only 250 transistors, possibly become a faster and more capable arithmetic calculator when it is so woefully underpowered by comparison?
> "there is absolutely zero evidence indicating that we are any closer to AGI than what James Watt was to realizing nuclear fusion"
James Watt lived before Rutherford split the atom, he didn't know they could be split or fused, he was not trying for nuclear fusion. We do know that information exists and can be processed. Still, James Watt was closer to large scale controlled release of energy than humans before the control of fire.
We know that human level intelligence is possible, in a way that Watt didn't know fusion was possible. We have looked for other mechanisms hiding in the brain - Penrose and Hameroff's ideas of quantum effects in microtubules, for one - and rejected them. We've pretty closely bounded the amount of energy a brain uses, what it can sense, how it can input and output, what it's made of, and through what mechanisms of electric pulses and neurotransmitters it seems likely to operate. We've identified brain regions which cause predictable effects when damaged (on language, memory, executive decision making). We've dissected it, photographed it, x-rayed it, imaged it in layers, imaged it while active.
We've represented facts as data, written code which uses genetic algorithms to solve problems, written code which writes code to solve problems, written code which crunches data in large quantities to pull out higher dimensional patterns, code which appears to identify concepts from that, code which does specialised tasks such as face recognition, language recognition, language generation, person recognition.
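The list above mentions code which uses genetic algorithms to solve problems. As a concrete illustration of that technique, here is a minimal sketch; the bit-string target, population size, and mutation rate are arbitrary toy choices of mine, not anything from the comment:

```python
import random

# Toy genetic algorithm: evolve a bit-string toward an all-ones target.
TARGET = [1] * 20

def fitness(bits):
    # Count positions matching the target.
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - b if random.random() < rate else b for b in bits]

def crossover(a, b):
    # Single-point crossover of two parents.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

random.seed(0)
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]  # truncation selection: keep the fittest third
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

pop.sort(key=fitness, reverse=True)
print(fitness(pop[0]))  # fitness of the best evolved individual
```

Nothing magic happens here; selection plus random variation is enough to climb toward the target, which is the whole point of the technique.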
It's pretty indefensible to say "absolutely zero evidence indicating we are any closer than people of 10k BC, people of 0 AD, people of 1800, or people of 1900". You're basically making a God-of-the-gaps argument: as we map the brain physically in more and more detail and encroach from the data and information-processing side, the gap between them where intelligence could be hiding appears to be shrinking. But with no evidence or support, you suggest something else is hiding there which makes the gap a Jaunt so large that encroachment from either side is immeasurably tiny. And you hide it behind the "G" for general.
Twenty years ago you could talk to a nonsense chatbot. Ten-ish years ago you could talk to a somewhat coherent GPT. Five years ago you could talk to a surprisingly coherent ChatGPT. Today people collaborate with Claude, ChatGPT, Gemini, ask them to explain concepts, provide examples and get them to follow on, offload work to them, learn from them, have them proof read documents, have them rewrite in different styles.
I think Hofstadter said he wasn't interested in computers because they don't have concepts. Here's an article in the ACM last year saying similar: "Neural networks can look at a picture and tell whether it contains a dog, a cat, or a car, but they do not have a rich understanding of any of those categories. Take the concept of a bridge. Humans can extend the notion of a bridge to abstract levels. We can talk about a bridge between people or bridging the gender gap. We can instantly understand what these expressions mean because we have a rich mental model of what a bridge can be." - https://cacm.acm.org/news/artificial-intelligence-still-cant...
Asking ChatGPT:

Me: "Hi, a bridge is a big metal thing over a road or river; what does it mean to 'bridge the gender gap'? Can you explain briefly, please?"

ChatGPT: "To 'bridge the gender gap' means to reduce the differences or inequalities between men and women, particularly in areas like pay, opportunities, education, or representation. It's about creating a more equal and fair situation where both genders have the same access and opportunities. Just like a physical bridge connects two places, 'bridging the gender gap' connects the unequal sides, making them more balanced."

Me: "Interesting; can you suggest other non-physical things which could be 'bridged' please?"

ChatGPT: "Certainly! Here are some examples of non-physical things that can be 'bridged':

Cultural gap – Reducing misunderstandings or differences between people from different cultures.
Communication gap – Improving understanding between people who are having difficulty communicating.
Generation gap – Narrowing the differences in attitudes and perspectives between older and younger generations.
Wealth gap – Reducing economic inequality between rich and poor.
Knowledge gap – Closing the gap in access to information or education between different groups.
Digital divide – Bridging the gap between those who have access to technology and the internet and those who don't.
Trust gap – Rebuilding or enhancing trust between individuals or groups, like companies and their customers.

In each case, 'bridging' is about connecting or bringing closer two sides that are separated by some form of difference or inequality."
Seems to have some grasp of the general concept of "bridging", more than Eliza or Eugene Goostman, doesn't it? "I have tried some letter-string problems in GPT-3, and in some cases it could solve them. It learned, for example, the concept of successorship. Not perfect, not robust, but I found it still surprising that it can do this. Therefore, I don't agree that these systems are only 'stochastic parrots', as some scientists have called them. I have seen evidence of GPT building simple internal models of situations," says the expert in the article.
I'm not saying this is an AGI or will lead directly to AGI, but stubbornly refusing to accept this obvious progress as any progress towards machine intelligence at all, calling it "absolutely zero" evidence of progress seems wilfully blinkered.
Do you genuinely put us absolutely no closer, not a single step closer, to AGI than the Mechanical Turk or the people of 50k BC?
I don't think this assertion stands on its own. There is more local corruption in the US than at the federal level for example. Way fewer eyes on your activities. Practically no one is going to notice you getting kickbacks for approving zoning changes for a business or other sorts of "low stakes" corruption.
> The long term issue that many people don't seem willing to mention out loud is that we will eventually make humanity obsolete and robots will literally take control.
This is literally the AI doomer position which is very widespread and advocated clearly and explicitly.
We don't need a very strong world government to meet our challenges, I think, including climate change and AI. We can do it with agreements and accords.
(I also don't think a world government would be as dystopian as people imply or fear, for the record, because it could follow democratic principles just as our current governments do; and its effects might not be greater than the globalizing effects we already have today from the internet and especially social media. Given the resistance to the concept, though, I think the tools we already have and simple coordination between countries, even if less effective, can probably work.)
The key thing about them is that accords must be enforceable and enforced. There must be consequences for violating them (with escalating gravity, sanctions and measures).
I actually think disarmament of governments worldwide should be one of those goals (and a very urgent one). We know that war is essentially the worst possible form of diplomacy, of two nations resolving a disagreement (when it isn't just a power grab or senseless destruction). We shouldn't have that option readily available at all. As for the danger that one nation rapidly industrializes and acquires weapons against defenseless opponents: we would need very stringent and effective monitoring of every country's industrial capacity for production of significant quantities of military weapons. I really believe we can extinguish war forever, like we extinguish diseases. For anyone thinking about existential risks to humanity, this should be near the top of the list, beside climate change.
Of course, there will still be conflicts arising from a myriad of reasons, but those conflicts can be managed and resolved in other ways. And practicing mutual agreements and conflict resolution will be as important as ever -- military disarmament alone doesn't magically resolve all issues. For lasting stability, worldwide promotion of fundamental human values and rights, as well as peace, is extremely important (a wise population will not wage war on fellow humans). I think there could be some international work in this area as well; the UN is usually great at it too.
> The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture. The first challenge is for people to see that as a worthy goal.
That’s naive. Even if a global government were possible, cypherpunks, "AI terrorists" or other actors (many HN users) would easily find ways to bypass those controls. It's unrealistic to assume that centralization could prevent exploitation of AI.
Every technology that can be weaponized has been weaponized. Unless we recognize and fix the problems associated with the human condition, we are heading towards a dangerous future.
The human condition is the human condition, in that for as long as we are human, it will never be any other way. This is why our demise is inevitable, and likely to be by our own hand. Greed will always exist, and human ingenuity will never be stopped. The result is an asymptotic approach to certain doom. This is why I think the "Great Filter" explanation for the Fermi Paradox is the most likely.
Why will robots eventually "literally take control"?
I'm anti-AI at the best of times, but this is alarmist at best, and dishonest at worst—especially to suggest a single government as a solution. Name a single existing government that you would want running the world. Fuck that.
>The long term issue that many people don't seem willing to mention out loud is that we will eventually make humanity obsolete and robots will literally take control.
What are you talking about? The main marketing strategy for so terribly many AI companies is to run around declaring the end is soon because their product is so powerful.
>The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture.
At this point I think you're joking. Tightly centralizing power always results in oligarchy, kleptocracy, and collapse. And why do you think this central world government wouldn't militarize via your unstoppable robots?
Humanity is not a work task that can be obsoleted. It's hard to imagine AI fully overlapping human versatility being possible, let alone around the corner.
The Globotax for UBIbucks is just Inescapable Socialism by another name, today it solves imagined ills from murky AI predictions, yesterday it solved automation. There is more innovative work to be done and that needs an accurate human feedback loop.
Eh I don't think humans will really be a viable concept in a few hundred years if this keeps up. We'll simply be too fragile and dumb for anything.
I see this as a good thing though: the only thing that separates us from everything else in the universe is our intelligence. In some sense we are that intelligence, and this is the next evolutionary step it needs to finally break out from the confines of a tiny 20W brain that's been pushed as far as it practically can be.
When you no longer need to constantly consume energy just to stay alive, are functionally immortal, can hibernate for a thousand years, eat only sunlight, and don't get cancer from radiation or bone loss from zero gravity... the galaxy becomes your oyster.
Evolution, Morpheus, evolution. Like the dinosaur ;)
We are extremely energy-efficient. My lunch was one fruit, I had no breakfast, and while a bit hungry I can still type the sequence of words you're reading right now. Compare this with the wattage of OpenAI: from what I can find, about 2.9 watt-hours for one query, i.e. roughly 2.5 kcal, so on the order of 800 queries add up to the whole daily-maintenance food need of an average man...
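For what it's worth, the arithmetic can be checked directly; the 2.9 Wh figure is the comment's claim, and the 2000 kcal/day is my rough assumption for an adult's maintenance intake:

```python
# Sanity-check the energy comparison. Input figures are assumptions
# from the discussion, not measured data.
WH_PER_KCAL = 1.163          # 1 kcal = 1.163 watt-hours (exact conversion)

query_wh = 2.9               # claimed energy per LLM query
query_kcal = query_wh / WH_PER_KCAL

human_daily_kcal = 2000      # rough adult daily food energy
queries_per_human_day = human_daily_kcal / query_kcal

print(round(query_kcal, 2))          # 2.49 kcal per query
print(round(queries_per_human_day))  # 802 queries per human-day
```

So a single query is a couple of kcal, and roughly 800 queries match one person's daily food energy, which still leaves the brain looking remarkably efficient for what it does.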
I think there exists a point, where AI stops making sense in a cost equation. Currently we are well accustomed that machines are cheap and humans are expensive, and anything that can be done by machine instead of by a human means a cost reduction. At some point that probably stops, because the amount of compute you need to run larger and larger models means that humans can become cost competitive. It's a bit like in manufacturing, where cost of robotization is sometimes more than just the cost of employing humans.
And in the end, these long-term predictions based on where we are now and how fast we're moving are always a bit iffy. Nobody really knows what challenges current AI must overcome to become AGI (if that even means anything).
Suppose you are right. From a genetic perspective, the most logical course of hypothetical action is to immediately start smashing computers, learning how to blow up terminators, and to oppose and undermine anyone that thinks like you.
Now, I actually do not believe you are right and I am more inclined to believe that artificial intelligence stemming from humanity would see the value in human ideals. Certain things like love, morality, and humility have been deeply held ideals for millennia. I would suppose that an AI would, to some extent, value these things too. Think more “The Day the Earth Stood Still” than Terminator.
This is all conjecture, but one thing I would suggest to you: if you see the natural evolution of your own species to be the removal of itself, you probably should be nowhere near AI or alignment. What you suggest is akin to world-wide genocide. Ironically, in my estimation, you are less aware of humanity than AI will be.
On another note, I've just finished listening to the Harry Potter (1-7) audiobooks read by Stephen Fry, and he's marvellous as an actor there. Every main character he read had a different type of voice, a different way of speaking, melody, and pronunciation, and you could mostly tell who was speaking even before the narrator said so, all courtesy of the one throat of Stephen Fry.
Now listening to The Hitchhiker's Guide to the Galaxy. Nice to see he also has a blog, and even sometimes reads it in his own voice.
It's such a massive shame he didn't narrate all the other books in the Hitchhiker's series!
The first book is amazing, but he never narrated the remaining books.
The rest of the books in the series are narrated by a different guy, not nearly as good, and he does a weird New York accent for Zaphod which is just totally wrong.
The following books are read by Martin Freeman, the actor who played Bilbo in The Hobbit, though I believe he was chosen for actually having played Arthur Dent in the movie. I think the actor reading a book is free to interpret the characters and invent his own voices, especially since he can't simply read as another actor did. But I'm getting there (to the second book), so I'll try to remember to come back and tell you what I think of a two-headed alien villain from another planet having a New York accent...
IMO Stephen Fry's accent for Tonks was also kind of strange to me... I wonder why the accent was so thick; she was just Scottish...
And it all makes me wonder what homesteading in the 21st century could be like, and what the resource limits are --- Isaac Asimov once posited that if one converted the entirety of the earth's crust into biomass, the limiting element is phosphorus --- what is the limiting material for our modern lifestyle?
There's at least one recent book which looks at this:
Who is going to determine how resources are divided/allocated? Using what mechanism?
My grandfather lived in a time when commercial hunting was outlawed (and multiple species were made extinct before that decision was arrived at) --- will my children live in a time when commercial fishing is halted?
The homestead act in the U.S. had families setting up farms on 160 acres or so --- how do modern technologies affect the minimum acreage which a family would need for self-sufficiency to any measurable degree?
What sort of industrial base is needed for technologies such as bottled hydrogen being made by solar power? How long do the bottles and the bottling/capture system last? How long does a geothermal system last and what sort of on-going maintenance is needed and how does replacing it get budgeted for?
Modern industrial farming practices are burning/using as many as 10 calories of petrochemical energy for 1 calorie of food energy --- what happens to food prices when we get past peak oil? Solar is supposed to work as a replacement --- at a cost equivalent to ~$400 a barrel of oil, last I checked --- so what does food cost at that price point?
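A back-of-envelope sketch of what that 10:1 ratio would imply at the quoted oil price; the barrel energy content and daily intake here are my rough assumptions for illustration, not sourced figures:

```python
# Rough implication of the 10:1 fossil-to-food energy claim.
# All inputs are approximate assumptions, not sourced data.
KCAL_PER_BARREL = 5.8e6 * 0.252   # ~5.8 million BTU/barrel, 0.252 kcal/BTU
FOOD_KCAL_PER_DAY = 2000          # one person's daily food energy
FOSSIL_RATIO = 10                 # claimed fossil kcal per food kcal
OIL_PRICE = 400                   # dollars per barrel, the quoted price point

fossil_kcal = FOOD_KCAL_PER_DAY * FOSSIL_RATIO
barrels_per_day = fossil_kcal / KCAL_PER_BARREL
energy_cost_per_day = barrels_per_day * OIL_PRICE

print(round(energy_cost_per_day, 2))  # 5.47 dollars/day of oil energy per person
```

So even at $400/barrel the raw oil-energy input per person-day is only a few dollars; the bigger price effects would presumably come through the rest of the supply chain, which this sketch ignores.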
I tend to think the car comparison is cause for optimism. Prior to mass car ownership you would assume that such things in the general population would be enormously more dangerous than they have proven to be.
We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.
Humanity also has a huge problem with the mortality of the species as a whole. At some point we will be extinct. Will we have evolved into something else before this? Or will we choose to replace ourselves with machines? It is a valid choice to make in the face of changing circumstances which will occur no matter what we do.
> We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.
I'm not so sure on the utility of cars. If you replace "car" with "internal combustion engines", I would agree 100%. But most of the social utility of ICEs is in cargo transport and utility vehicles (ambulances, firefighter trucks...). Mass use of small cars mostly solves the problems it creates itself. For example, car-centric cities suffer from sprawl, so everything is far away and you can't even buy bread without owning a car, because walking to the bakery would take too much time.
Mass car ownership has been a disaster; the entirety of society was reorganized around them causing urban sprawl and long commutes; nevermind the myriad conflicts around the world caused in the pursuit of oil.
The road deaths are an incredibly minor part of the harm caused by cars.
> Prior to mass car ownership you would assume that such things in the general population would be enormously more dangerous than they have proven to be.
An assumption that was correct and involved large-scale propaganda efforts to blame pedestrians for the harm inflicted upon them, including blaming children for their own deaths. U.S. car accident fatalities are also still to this day at developing country levels and every actually developed country largely has public transportation infrastructure. And this is before we get into the blight that is American urban design, evolving to serve the needs of this mass car ownership, with all its down stream environmental harms.
> We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.
And along the way we have arranged our physical surroundings, legal system, and cultural mores in such a way that cars are significantly more dangerous than they need to be.
200 years of internal combustion engines and we're slowly waking up to the fact that we might have fucked up our atmosphere beyond repair. HN take: cars are amazing with only slight drawbacks.
While I mostly agree, we had state driver licenses almost immediately at the advent of the consumer automobile.
I am not sure we could do that today. I think the push would largely be to ban the automobile completely as too dangerous if we had to do it in 2024.
This is all just a mirror being held up to such a wealthy society and the changing risk preferences that come with that over time.
You're missing the point of the car section. The point isn't that cars were a dangerous technology.
The point is that just as the car replaced the horse, programs on computers are set to replace the human mind. Right now, we see only the beginnings of the trend; inventions which look interesting but limited. But eventually, and perhaps sooner than many believe, the replacement of human cognition will radically transform the world in unprecedented ways.
> The point is that just as the car replaced the horse, programs on computers are set to replace the human mind
My point is this is not inherently a bad thing. At some point either the human mind is overtaken by another species or AI. We must learn to accept this inevitability.
It's astonishing to me that people do not see the correlation between AI and as you call it "bureaucratic collective action".
I have considered corporations, nation-states, etc. as forms of artificial intelligence that long predate computers. Just because the intelligence is collective doesn't make it less artificial. And just because the "code" that makes up these collective intelligences is written as laws and governance documents doesn't change the fact that it is an agent that can make "decisions".
I suppose it is hard for people to feel a connection with a group or org as an intelligence, as opposed to a personified "individual" like ChatGPT. Kind of like how it is hard to see an anthill as an organism when we are so much more closely related to the ant. But just because it's hard for us to relate to an anthill doesn't make it any less of an organism that behaves like an individual.
In order to advance AI, it is imperative to study and understand how our current "super intelligences" of nation-states and mega-corporations behave and how they interact with others and govern themselves.
We're swimming in a dominant ideology which barely recognizes the existence of "society" (the Margaret Thatcher "there's no such thing as society" vibe) and frames most of reality in terms of individualism, individuals, and the self.
It should come as no surprise how little understanding there is of the social framing of technology and tools.
From chipping flint to writing code, it's a huge mass of inherited and circulated technique-knowledge.
I have been fascinated by this thought experiment for many years as well.
>I suppose it is hard for people to feel a connection with a group or org as intelligence
In theory (one theory, at least), we should have as much chance to understand what such an intelligence thinks as a single brain cell would have of understanding what goes on in a human mind.
What worries me is that in many industries (health & education particularly) there doesn't seem to be any actual limit to admin demand.
Like: admin demands increase until the system barely functions and compliance is just barely possible.
BUT administrators would always like more data, controls and process.
So one possible outcome of AI is that computers magically pick up our admin burdens, freeing us to do our jobs in peace.
The _other_ possible outcome is that AI increases our capacity for admin and therefore generates way more admin.
A particular risk is where AI generates asymmetric leverage for administrators.
An example of this would be admin requiring lengthy forms to do such-and-such. If a human administrator no longer needs to spend their time validating or checking the forms, there's less pushback on making the forms longer or producing even more of them.
> AI increases the capacity for admin and therefore generates way more admin
This is the most likely scenario.
The original sales pitch for computers was to free doctors and nurses and admins by making things easier. Instead, the administrative work grew exponentially.
I agree that effective human organization is, by far, our largest problem today. All of our contemporary problems are, at their core, organizational problems.
However, we actually lack an understanding of how large systems are organized.
In the human body, each cell performs its task without a centralized authority but as an individual agent. The cells are selfish and just try to survive. But emergent in this behavior is a human that thinks itself the master of all the cells.
Michael Levin has done surprising things that fall from studying cells of animal bodies and how they can act independently.
We really need a science of agent based systems. I don't know of something that captures it entirely.
What I do know is that top-down hierarchical organizations are fragile and will corrupt with time. Agent based systems are the only way nature will allow to progress much.
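As a toy illustration of the agent-based point above: here is a minimal sketch (the "copy a random neighbour's state" rule is the classic voter model, my choice of example, not anything from the comment) where agents follow one purely local rule with no central authority, yet the population reliably reaches a global consensus:

```python
import random

# Voter-model toy: 100 agents, each holding state 0 or 1.
# The only rule is local imitation; there is no coordinator.
random.seed(1)
agents = [random.choice([0, 1]) for _ in range(100)]

steps = 0
while len(set(agents)) > 1:
    i = random.randrange(len(agents))
    j = random.randrange(len(agents))
    agents[i] = agents[j]  # one agent copies another; purely local interaction
    steps += 1

print(len(set(agents)))  # 1: the whole population converged on its own
```

The emergent outcome (consensus) is a property of the system, not of any agent, which is exactly the kind of behaviour a science of agent-based systems would need to characterize.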
Every attempt to organize people bureaucratically ends in failure or war. We don't seem to learn this lesson as we keep trying.
No bureaucracy will ever care for individual goals, needs, desires as much as you will for yourself and your family.
We need some rules of the road and limits; history proves that too. But it's a fool's errand to think that we can organize 8 billion people to work together, other than within some fairly loose rules that prohibit undesirable extremes of behavior.
Can you expand on the "greatest technology" half of your comment?
I can see plenty of clear reasons why bureaucratic collective action is in need of investment, but I'm predisposed to suspect that's because currently it's pretty bad.
>It doesn’t take much for an Ai to find out that if it is to complete the tasks that are given it, then its first duty (obviously) is to survive.
Or just have another robot fetch the coffee if the first goes offline? Death and anthropomorphism are clearly the wrong concepts here. His error is to imagine households so poor they won't be able to send another robot to see where the last robot broke down. Fry is really out of touch with the way things, and parts of things, are tinkered with. Even ChatGPT's o1 chain-of-thought is not one thing but a system sending another agent to see where the last one went wrong and adjust.
So evolution magically explains self-preservation in AI and is sure to emerge in machines like animals and will alchemize more "self-awareness". Fantastic paradoxical sci-fi storytelling but not convincing at all in the real world.
The facts are that AI never had any self-awareness, it doesn't know where it begins or ends, neither the system nor its creator can tune into actual evolutionary forces (artificial selection might be a better term for pretending to, though it frightens less), and it cannot "lie" or "breach itself" with any sense of agency. The only things that will be breached for the foreseeable future are the pundits' fashionably wrong-headed expectations.
The biggest danger I see is a widespread AI with a set of badly defined goals, not a particularly smart and evil one.
> "there is absolutely zero evidence indicating that we are any closer to AGI than what James Watt was to realizing nuclear fusion"
James Watt lived before Rutherford split the atom, he didn't know they could be split or fused, he was not trying for nuclear fusion. We do know that information exists and can be processed. Still, James Watt was closer to large scale controlled release of energy than humans before the control of fire.
We know that human level intelligence is possible, in a way that Watt didn't know fusion was possible. We have looked for other mechanisms hiding in the brain - Penrose and Hameroff's ideas of quantum tubules for one - and rejected them. We've pretty closely bounded the amount of energy a brain uses, what it can sense, how it can input and output, what it's made of, and through what mechanisms of electric pulses and neurotransmitters it seems likely to operate. We've identified brain regions which cause predictable effects when damaged (on language, memory, executive decision making). We've dissected it, photographed it, x-rayed it, imaged it in layers, imaged it while active.
We've represented facts as data, written code which uses genetic algorithms to solve problems, written code which writes code to solve problems, written code which crunches data in large quantities to pull out higher dimensional patterns, code which appears to identify concepts from that, code which does specialised tasks such as face recognition, language recognition, language generation, person recognition.
It's pretty indefensible to say "absolutely zero evidence indicating we are any closer than people of 10k BC, people of 0 AD, people of 1800, or people of 1900". You're basically making a God-of-the-gaps argument: as we map the brain physically in more and more detail, and encroach from the data and information-processing side, the gap between them where intelligence could be hiding appears to be shrinking. But with no evidence or support, you suggest something else is hiding there which makes the gap a Jaunt so large that encroachment from either side is immeasurably tiny. And you hide it behind the "G" for general.
Twenty years ago you could talk to a nonsense chatbot. Ten-ish years ago you could talk to a somewhat coherent GPT. Five years ago you could talk to a surprisingly coherent ChatGPT. Today people collaborate with Claude, ChatGPT, and Gemini: ask them to explain concepts, provide examples and follow on from them, offload work to them, learn from them, have them proofread documents, have them rewrite in different styles.
I think Hofstadter said he wasn't interested in computers because they don't have concepts. Here's an article in the ACM last year saying similar: "Neural networks can look at a picture and tell whether it contains a dog, a cat, or a car, but they do not have a rich understanding of any of those categories. Take the concept of a bridge. Humans can extend the notion of a bridge to abstract levels. We can talk about a bridge between people or bridging the gender gap. We can instantly understand what these expressions mean because we have a rich mental model of what a bridge can be." - https://cacm.acm.org/news/artificial-intelligence-still-cant...
Asking ChatGPT:
Seems to have some grasp of the general concept of "bridging", more than Eliza or Eugene Goostman, doesn't it? "I have tried some letter-string problems in GPT-3, and in some cases it could solve them. It learned, for example, the concept of successorship. Not perfect, not robust, but I found it still surprising that it can do this. Therefore, I don't agree that these systems are only 'stochastic parrots', as some scientists have called them. I have seen evidence of GPT building simple internal models of situations," says the expert in the article. I'm not saying this is an AGI or will lead directly to AGI, but stubbornly refusing to accept this obvious progress as any progress towards machine intelligence at all, calling it "absolutely zero" evidence of progress, seems wilfully blinkered.
Do you genuinely put us absolutely no closer, not a single step closer, to AGI than the Mechanical Turk or the people of 50k BC?
This is literally the AI doomer position which is very widespread and advocated clearly and explicitly.
(For the record, I also don't think a world government would be as dystopic as people imply or fear, because it could follow democratic principles just as our current governments do; and its effects might not be greater than what we already see from the globalizing influence of the internet and especially social media. Given the resistance to the concept, though, I think the tools we already have, with countries simply coordinating, can probably work, even if less effectively.)
The key thing about them is that accords must be enforceable and enforced. There must be consequences for violating them (sanctions and measures of escalating gravity).
I actually think disarmament of governments worldwide should be one of those goals (and a very urgent one). We know that war is essentially the worst possible form of diplomacy between two nations resolving a disagreement (when it isn't just a power grab or senseless destruction). We shouldn't have that option readily available at all. As for the danger that one nation rapidly industrializes and acquires weapons against defenseless opponents, we would need very stringent and effective vigilance over every country's industrial capacity to produce significant quantities of military weapons. I really believe we can extinguish war forever, the way we eradicate diseases. For anyone thinking about existential risks to humanity, this should be near the top of the list beside climate change.
Of course, there will still be conflicts arising from a myriad of reasons, but those conflicts can be managed and resolved in other ways. And practicing mutual agreements and conflict resolution will be as important as ever -- military disarmament alone doesn't magically resolve all issues. For lasting stability, worldwide promotion of fundamental human values and rights, as well as peace, is extremely important (a wise population will not wage war on fellow humans). I think there could be some international work in this area as well. The UN is usually great at it too.
That’s naive. Even if a global government were possible, cypherpunks, "AI terrorists" or other actors (many HN users) would easily find ways to bypass those controls. It's unrealistic to assume that centralization could prevent exploitation of AI.
I'm anti-AI at the best of times, but this is alarmist at best, and dishonest at worst—especially to suggest a single government as a solution. Name a single existing government that you would want running the world. Fuck that.
What are you talking about? The main marketing strategy for so terribly many AI companies is to run around declaring the end is soon because their product is so powerful.
>The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture.
At this point I think you're joking. Tightly centralizing power always results in oligarchy, kleptocracy, and collapse. And why do you think this central world government wouldn't militarize via your unstoppable robots?
The Globotax for UBIbucks is just Inescapable Socialism by another name, today it solves imagined ills from murky AI predictions, yesterday it solved automation. There is more innovative work to be done and that needs an accurate human feedback loop.
I see this as a good thing though. The only thing that separates us from everything else in the universe is our intelligence; in some sense we are that intelligence, and this is the next evolutionary step it needs to finally break out from the confines of a tiny 20W brain that's been pushed about as far as it practically can be.
When you no longer need to consume energy constantly just to stay alive, are functionally immortal, can hibernate for a thousand years, eat only sunlight, and don't get cancer from radiation or bone loss from zero gravity... the galaxy becomes your oyster.
Evolution, Morpheus, evolution. Like the dinosaur ;)
And in the end, these long-term predictions based on where we are now and how fast we are moving are always a bit iffy. Nobody really knows what challenges current AI must overcome to become AGI (if that even means anything).
Now, I actually do not believe you are right and I am more inclined to believe that artificial intelligence stemming from humanity would see the value in human ideals. Certain things like love, morality, and humility have been deeply held ideals for millennia. I would suppose that an AI would, to some extent, value these things too. Think more “The Day the Earth Stood Still” than Terminator.
This is all conjecture, but one thing I would suggest to you: if you see the natural evolution of your own species to be the removal of itself, you probably should be nowhere near AI or alignment. What you suggest is akin to world-wide genocide. Ironically, in my estimation, you are less aware of humanity than AI will be.
Now listening to The Hitchhiker's Guide to the Galaxy. Nice to see he also has a blog, and even sometimes reads it in his own voice.
https://youtu.be/r84pbPbBAFY
The first book is amazing, but he never narrated the remaining books. The rest of the books in the series are narrated by a different guy, not nearly as good, and he does a weird New York accent for Zaphod which is just totally wrong.
IMO Stephen Fry's accent for his impression of Tonks was also kinda strange to me... I wonder why the accent was so thick; she was just Scottish...
https://marshallbrain.com/manna
And it all makes me wonder what homesteading in the 21st century could be like, and what the resource limits are --- Isaac Asimov once posited that if one converted the entirety of the earth's crust into biomass, the limiting element is phosphorus --- what is the limiting material for our modern lifestyle?
There's at least one recent book which looks at this:
https://www.goodreads.com/book/show/125937631-material-world
Who is going to determine how resources are divided/allocated? Using what mechanism?
My grandfather lived in a time when commercial hunting was outlawed (and multiple species were made extinct before that decision was arrived at) --- will my children live in a time when commercial fishing is halted?
The Homestead Act in the U.S. had families setting up farms on 160 acres or so --- how do modern technologies affect the minimum acreage a family would need for self-sufficiency to any measurable degree?
What sort of industrial base is needed for technologies such as bottled hydrogen being made by solar power? How long do the bottles and the bottling/capture system last? How long does a geothermal system last and what sort of on-going maintenance is needed and how does replacing it get budgeted for?
Modern industrial farming practices burn as many as 10 calories of petro-chemical energy for every 1 calorie of food energy --- what happens to food prices when we get past peak oil? Solar is supposed to work as a replacement --- but when the equivalent cost of a barrel of oil is ~$400 (last I checked), what does food cost at that price point?
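The 10:1 claim above can be turned into a back-of-envelope number. This is only a rough sketch under stated assumptions (a barrel of oil holds roughly 1.46 million kcal, the quoted $400/barrel replacement cost, the 10:1 input ratio, and a 2000 kcal/day diet); the real answer depends on which of those figures you accept:

```python
# Rough estimate of the raw energy cost embedded in one person's daily food,
# under the assumptions stated above. All constants are assumptions, not data.
KCAL_PER_BARREL = 1.46e6   # ~5.8 million BTU per barrel of oil, in kcal
PRICE_PER_BARREL = 400.0   # the ~$400 solar-replacement figure quoted above
INPUT_RATIO = 10           # kcal of fuel burned per kcal of food produced
DIET_KCAL_PER_DAY = 2000   # typical daily diet

dollars_per_fuel_kcal = PRICE_PER_BARREL / KCAL_PER_BARREL
energy_cost_per_day = dollars_per_fuel_kcal * INPUT_RATIO * DIET_KCAL_PER_DAY
print(f"raw energy cost per person-day: ${energy_cost_per_day:.2f}")  # ≈ $5.48
```

So at those prices the fuel input alone would run around $5.50 per person per day, before any other cost of producing food.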
We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.
Humanity also has a huge problem with the mortality of the species as a whole. At some point we will be extinct. Will we have evolved into something else before this? Or will we choose to replace ourselves with machines? It is a valid choice to make in the face of changing circumstances which will occur no matter what we do.
I'm not so sure on the utility of cars. If you replace "car" with "internal combustion engines", I would agree 100%. But most social utility of ICEs is in cargo transport and utility vehicles (ambulances, firefighter trucks...). Mass use of small cars solve mostly the problems they create themselves. For example, car-centric cities suffer from sprawl, so everything is far away and you can't even buy bread without owning a car, because walking to the bakery would take too much time.
The road deaths are an incredibly minor part of the harm caused by cars.
An assumption that was correct, and one backed by large-scale propaganda efforts to blame pedestrians for the harm inflicted upon them, including blaming children for their own deaths. U.S. car accident fatalities are also still to this day at developing-country levels, while every actually developed country largely has public transportation infrastructure. And this is before we get into the blight that is American urban design, which evolved to serve the needs of mass car ownership, with all its downstream environmental harms.
And along the way we have arranged our physical surroundings, legal system, and cultural mores in such a way that cars are significantly more dangerous than they need to be.
Humanity will never learn
I am not sure we could do that today. I think the push would largely be to ban the automobile completely as too dangerous if we had to do it in 2024.
This is all just a mirror being held up to such a wealthy society and the changing risk preferences that come with that over time.
Bio-engineering seems much more likely.
The point is that just as the car replaced the horse, programs on computers are set to replace the human mind. Right now, we see only the beginnings of the trend; inventions which look interesting but limited. But eventually, and perhaps sooner than many believe, the replacement of human cognition will radically transform the world in unprecedented ways.
> The point is that just as the car replaced the horse, programs on computers are set to replace the human mind
My point is this is not inherently a bad thing. At some point either the human mind is overtaken by another species or AI. We must learn to accept this inevitability.
I have considered corporations, nation states, etc. as forms of artificial intelligence that long predate computers. Just because the intelligence is collective doesn't make it less artificial, and just because the "code" that makes up these collective intelligences is written as laws and governance documents doesn't change the fact that it is an agent that can make "decisions".
I suppose it is hard for people to feel a connection with a group or org as an intelligence, as opposed to a personified "individual" like ChatGPT. Kind of like how it is hard to see an anthill as an organism when we are so much more closely related to the ant. But just because it's hard for us to relate to an anthill, that doesn't make it any less of an organism that behaves like an individual.
In order to advance AI, it is imperative to study and understand how our current "super intelligences" of nation-states and mega-corporations behave and how they interact with others and govern themselves.
It should come as no surprise how little understanding there is of the social framing of technology and tools.
From chipping flint to writing code, it's a huge mass of inherited and circulated technique-knowledge.
>I suppose it is hard for people to feel a connection with a group or org as intelligence
In theory (one theory, at least), we should have as much chance to understand what such an intelligence thinks as a single brain cell would have of understanding what goes on in a human mind.
It's like admin demands increase until the system barely functions and compliance is just barely possible.
BUT administrators would always like more data, controls and process.
So one possible outcome of AI is that computers magically pick up our admin burdens, freeing us to do our jobs in peace.
The _other_ possible outcome is that AI increases our capacity for admin and therefore generates way more admin.
A particular risk is where AI generates asymmetric leverage for administrators.
An example of this would be admin requiring lengthy forms to do such-and-such. If a human administrator no longer needs to spend their time validating or checking the forms, there's less pushback on making the forms longer or producing even more of them.
This is the most likely scenario.
The original sales pitch for computers was to free doctors, nurses, and admins by making things easier. Instead, the administrative work grew exponentially.
However, we actually lack an understanding of how large systems are organized.
In the human body, each cell performs its task without a centralized authority but as an individual agent. The cells are selfish and just try to survive. But emergent in this behavior is a human that thinks itself the master of all the cells.
Michael Levin has done surprising things that fall from studying cells of animal bodies and how they can act independently.
We really need a science of agent based systems. I don't know of something that captures it entirely.
What I do know is that top-down hierarchical organizations are fragile and corrupt with time. Agent-based systems seem to be the only route nature allows for much progress.
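The idea above --- selfish agents following only local rules, with global behavior emerging without any central authority --- can be sketched in a few lines. This is a toy illustration, not a model of real cells: each agent on a ring adopts the majority state among its nearby neighbors, and coherent global patterns form with no agent ever seeing the whole system:

```python
def step(states):
    """One synchronous update: each agent adopts the majority state
    among itself and its four nearest ring neighbors. No agent sees
    more than its local neighborhood."""
    n = len(states)
    new = []
    for i in range(n):
        neighborhood = [states[(i + d) % n] for d in (-2, -1, 0, 1, 2)]
        new.append(1 if sum(neighborhood) >= 3 else 0)
    return new

# Starting from noise, repeated local updates smooth out isolated
# dissenters and settle into stable emergent blocks of agreement.
import random
random.seed(0)
states = [random.randint(0, 1) for _ in range(50)]
for _ in range(100):
    nxt = step(states)
    if nxt == states:  # reached a stable emergent pattern
        break
    states = nxt
```

The point of the sketch is that the stable pattern is a property of the system, not of any agent: the local rule says nothing about blocks or consensus, yet they appear anyway.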
Wasn't that Stephen Wolfram's _A New Kind of Science_?
We need to heavily (re)invest in processes, tools, ethics, technology that boost human social power and our ability to work together.
There are 8 billion people on the planet. So much potential.
No bureaucracy will ever care for individual goals, needs, desires as much as you will for yourself and your family.
We need some rules of the road and limits. History also proves that. But it's a fool's errand to think that we can organize 8 billion people to work together, other than within some fairly loose rules that prohibit undesirable extremes of behavior.
I can see plenty of clear reasons why bureaucratic collective action is in need of investment, but I'm predisposed to suspect that's because currently it's pretty bad.
""" Full video will be available here shortly: https://www.linkedin.com/showcase/kingsdigitalfutures/ """
Or just have another robot fetch the coffee if the first goes offline? Death and anthropomorphism are clearly the wrong concepts here. His error is to imagine households so poor they won't be able to send another robot to see where the last robot broke down. Fry is really out of touch with the way things, and parts of things, are tinkered with. Even ChatGPT o1's CoT is not one thing but a system sending another agent to see where the last one went wrong and adjust.
So evolution magically explains self-preservation in AI, which is sure to emerge in machines as it did in animals and will alchemize ever more "self-awareness". Fantastic paradoxical sci-fi storytelling, but not convincing at all in the real world.
The facts are: AI never had any self-awareness, it doesn't know where it begins or ends, neither the system nor its creator can tune into actual evolutionary forces (artificial selection might be a better term for what they pretend to do, though it frightens less), and it cannot "lie" or "breach itself" with any sense of agency. The only things that will be breached for the foreseeable future are the pundits' fashionably wrong-headed expectations.