Their focus is long-running digital intelligences that interact with their environment over long periods of time. I enjoyed reading The Alberta Plan when it was released earlier this year: https://arxiv.org/abs/2208.11173
I wondered the other day what the result would be if you strapped a microphone on a baby and every utterance that both the baby and mic heard was fed through speech-to-text and went toward training the baby's personal LLM.
By the time the child had grown into an adult, I wonder what kind of results their LLM would produce and the degree to which it might compare to what the child (now grown) would answer.
An "LLM" that could take in visuals via a baby-mounted camera is of course a whole other discussion. (Though I'm sure there are people training ANN's with video feeds rather than the static images that feed systems like DALL•E.)
>I wondered the other day what the result would be if you strapped a microphone on a baby and every utterance that both the baby and mic heard was fed through speech-to-text and went toward training the baby's personal LLM.
I read about a paper[1] a while back describing a rather unpleasant animal study in cats. Two kittens were used: one was free to look around, while the other was immobilised in some way and made to see what the first kitten saw as it looked around. The immobilised kitten's visual processing did not develop normally, whereas the mobile kitten's did, suggesting that it's not just the sensory input that matters, but that input being fed back to some internal agency within the brain.
I suspect the same thing would be true for attempts to develop AGI by giving it an audio-visual copy of a human's environment growing up: that internal state driving (and getting feedback from) the action to investigate/interact with the world is key.

1: https://arxiv.org/abs/1604.03670 "Interactive Perception: Leveraging Action in Perception and Perception in Action"
> By the time the child had grown into an adult, I wonder what kind of results their LLM would produce and the degree to which it might compare to what the child (now grown) would answer.
It very likely would not compare. Humans are shaped by their subjective experiences, not by incoming data, and you cannot know what a subject is experiencing solely from the incoming data (unless you have a theory that accurately models the mind of the subject, which is exactly what we are currently missing).
Your LLM won't experience the subject's heartbreaks, joy, grief, shame, hope, etc. It will have heard about them and be able to talk about them, but it will not give accurate answers about what they felt like. It also won't be able to accurately predict/model how the subject has been changed by those experiences, so it could make very wrong assumptions about what the subject could or would do in the future.
I thought about this too. People say that LLMs are only predicting the most likely token to follow the previous tokens, and that this makes them incomparable to human intelligence.
But humans are basically long-running LLMs that are retrained in real time. We are the product of our environment.
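Something like this toy version is what I have in mind - a bigram counter standing in for "most likely next token", with observe() as the "retrained in real time" part (obviously nothing like a real transformer):

    # Toy next-token predictor: count which token follows which, pick the
    # most common one. observe() is the online "retraining" step.
    from collections import defaultdict, Counter

    counts = defaultdict(Counter)  # counts[prev][nxt] = how often nxt followed prev

    def observe(text: str) -> None:
        """Online 'training': update bigram counts from newly heard text."""
        tokens = text.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1

    def predict_next(prev: str) -> str:
        """Return the most common token seen after `prev` so far."""
        if not counts[prev]:
            return "<unknown>"
        return counts[prev].most_common(1)[0][0]

    observe("the cat sat on the mat")
    observe("the cat chased the dog")
    print(predict_next("the"))  # -> "cat" (seen twice after "the")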
This is the long-term goal of any Augmented Reality interface, IMO.
This is why I got into AR initially: a computing system needs the input persistence of a literal parent, or, in the case of a self-learning agent, something like a baby to perfectly observe, in order to create the data environment necessary for learning at the rate humans learn.
You could do it with a collection of sensors, but I think the idealized implementation is basically a perfect recreation of the sense inputs of a person as well as monitoring the person to infer the precise Markov Decision Process.
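A crude sketch of the "infer the MDP" part from logged observations - the states and actions here are just made-up labels, and a real setup would need reward estimates too:

    # Estimate transition probabilities P(s' | s, a) by counting logged
    # (state, action, next_state) triples from the person's sensor stream.
    # (A full MDP would also need a reward model, not shown here.)
    from collections import defaultdict, Counter

    transitions = defaultdict(Counter)  # transitions[(s, a)][s_next] = count

    def record(s: str, a: str, s_next: str) -> None:
        transitions[(s, a)][s_next] += 1

    def estimated_p(s: str, a: str, s_next: str) -> float:
        total = sum(transitions[(s, a)].values())
        return transitions[(s, a)][s_next] / total if total else 0.0

    # e.g. logged from the person's day: reaching in the kitchen usually grabs the cup
    record("kitchen", "reach", "holding_cup")
    record("kitchen", "reach", "holding_cup")
    record("kitchen", "reach", "knocked_over_cup")
    print(estimated_p("kitchen", "reach", "holding_cup"))  # ~0.67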
They're not saying it's urgent; they want to bring urgency - convince people to move faster.
It seemed appropriate to ask AI about the meaning of the phrase:
The phrase "bring urgency to something" means to inject a sense of importance and immediate attention to a specific issue, project, or situation. The aim is to motivate people to prioritize the task at hand and to act more quickly than they might otherwise.
Without taking a particular position on how the probabilities shake out, I suggest critically reading “What We Owe the Future” by William MacAskill and/or other long-term writings that use statistical thinking over a wide range of future scenarios.
I don't get to meet the people who spend time on HN. But I'm curious: who doesn't see the value in _critically_ reading long-term thinking over a wide range of future scenarios?
Too busy coding? Reading about something for _your_ career? In service of _your_ family? Trying to get ahead? Yeah, I get it. But tell me: isn't thinking long-term worth something akin to a few hours a month?
You don't need to be doom-and-gloom about it. Sure, get stuff done. Be in the moment. All good.
Wouldn't it be nice to have some confidence that we're setting up future generations to have the possibility to at least have what we do, if not better?
P.S. FWIW, the morality of valuing future generations is _not_ properly addressed by most moral philosophies.
Creator of "Centre for Effective Altruism"? Longtermism? No thanks. I thought that philosophy has been debunked as a scam after the likes of Sam Bankman and others. I mean, living unethically today in order to affect future lives positively is not for me.
> Carmack and Sutton are deeply focused on developing a genuine AI prototype by 2030, including establishing, advancing and documenting AGI signs of life.
Seems a bit inappropriate to use the phrase "signs of life" here. In this context, it sounds like they're bringing sentience into the conversation, which I don't think Carmack is interested in contemplating or discussing.
He’s very interested in having it in the conversation. He didn’t fund this startup to lose money. Even loosely breathing "AGI" gets you funding, even if your best case is loosely competing with OpenAI.
There was an Infocom game called AMFV (A Mind Forever Voyaging) where the protagonist wakes up and realizes that he was just a simulation of a human being that has "grown" up into adulthood.
All the formative things that shaped him in life - kindergarten, a first high school romance, a breakup, parents squabbling - he had experienced only in simulation.
I doubt it - modern computers are already pretty fast at brute-forcing all sorts of things, and the first AGI will be perfectly functional even if it takes a full day to respond. AGI research also requires you to fully understand what you're creating before you create it, so full-system testing is something you'd only want to do near the end of the project anyway - at which point you basically want an ASIC.
Published in 1990, by the way.
Why is this urgent?
But I agree with you ... it is not really urgent, since we know the answers to most of the problems but do not like the solutions.
What an absolutely wonderful insight put extremely succinctly, thank you!
>> Why is this urgent?
That's an understatement. The rush to AGI reminds me of the rush to human cloning.
I really think the calculus is about integration / immortality, and the staggeringly few humans who might have that opportunity.
I hope I am wrong.
God mode with 7 billion lives!
> All the formative things that shaped him in life - kindergarten, a first high school romance, a breakup, parents squabbling - he had experienced only in simulation.
Always thought that was a way to make an AGI.
All the sci-fi seems to think this. I have seen Intel chips from the 80s that implemented neuron-like behavior on a chip.
We are still waiting on a practical memristor.