mark_l_watson · 2 years ago
Their focus is long-running digital intelligences that interact with their environment over long periods of time. I enjoyed reading The Alberta Plan when it was released earlier this year: https://arxiv.org/abs/2208.11173
JKCalhoun · 2 years ago
Ethics/privacy aside ...

I wondered the other day what the result would be if you strapped a microphone on a baby and every utterance that both the baby and the mic heard was fed through speech-to-text and went toward training the baby's personal LLM.

By the time the child had grown to an adult, I wonder what kind of results their LLM would produce and the degree to which it might compare to what the child (now grown) would answer.

An "LLM" that could take in visuals via a baby-mounted camera is of course a whole other discussion. (Though I'm sure there are people training ANN's with video feeds rather than the static images that feed systems like DALL•E.)

garblegarble · 2 years ago
>I wondered the other day what the result would be if you strapped a microphone on a baby and every utterance that both the baby and the mic heard was fed through speech-to-text and went toward training the baby's personal LLM.

I read about a paper[1] a while back; it was a rather unpleasant animal study in cats. Using two kittens, one was free to look around, but the other was immobilised in some way and made to see what the first kitten saw as it looked around. They discovered that the immobilised kitten's visual processing did not develop normally, whereas the mobile kitten's did, suggesting that it's not just the sensory input that matters, but that input being fed back to some internal agency within the brain.

I suspect the same would be true for attempts to develop AGI by giving it an audio-visual copy of a human's environment growing up: that internal state driving (and getting feedback from) the actions taken to investigate and interact with the world is key.

1: https://arxiv.org/abs/1604.03670 "Interactive Perception: Leveraging Action in Perception and Perception in Action"
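
A toy sketch of that active-vs-yoked distinction (a made-up one-dimensional world, nothing from the study itself): both "kittens" receive identical observations, but only the active one's actions cause them.

    import random

    def step(position: int, action: int) -> int:
        """Toy 1-D world: move left (-1) or right (+1), clamped to [0, 9]."""
        return max(0, min(9, position + action))

    pos = 5
    active_log, yoked_log = [], []
    for _ in range(100):
        action = random.choice([-1, 1])    # chosen by the active agent
        pos = step(pos, action)
        active_log.append((action, pos))   # observation paired with own action
        yoked_log.append((None, pos))      # identical observation, no agency

    # The yoked log has the same sensory data but no action->observation
    # coupling, which is what the immobilised kitten was deprived of.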

beremaki · 2 years ago
> By the time the child had grown to an adult, I wonder what kind of results their LLM would produce and the degree to which it might compare to what the child (now grown) would answer.

It very likely would not compare. Humans are shaped by their subjective experiences, not by incoming data, and you cannot know what a subject is experiencing solely through the incoming data (unless you have a theory accurately modeling the mind of the subject, which is exactly what we are currently missing).

Your LLM won't experience the subject's heartbreaks, joy, grief, shame, hope, etc. It will have heard of and be able to talk about those, but it will not give accurate answers about what they felt like. Also, it won't be able to accurately predict/model how the subject has been changed by those experiences, so it could make very wrong assumptions about what the subject could/would do in the future.

layer8 · 2 years ago
You might enjoy this short story: http://hubski.com/pub/78001

Published in 1990, by the way.

volent · 2 years ago
I thought about this too. People say that LLMs are only emitting the most common token that comes after the preceding ones, and that this makes them incomparable to human intelligence.

But humans are basically long-running LLMs that are retrained in real time. We are the product of our environment.
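
The "most common next token" view, as a toy bigram model (a deliberately crude sketch; real LLMs condition on the whole context window, not just the previous token):

    # Toy bigram "language model": emit the most frequent token seen
    # following the previous token in the training text.
    from collections import Counter, defaultdict

    corpus = "we are the product of our environment and we are the data".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def most_common_next(token: str) -> str:
        return follows[token].most_common(1)[0][0]

    token = "we"
    generated = [token]
    for _ in range(5):
        token = most_common_next(token)
        generated.append(token)
    print(" ".join(generated))  # "we are the product of our"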

AndrewKemendo · 2 years ago
This is the long-term goal of any Augmented Reality interface, IMO.

This is why I got into AR initially: a computing system needs the input persistence of a literal parent (or, in the case of a self-learning agent, something like a baby to observe perfectly) in order to create the data environment necessary for learning at the rate humans learn.

You could do it with a collection of sensors, but I think the idealized implementation is basically a perfect recreation of a person's sense inputs, along with monitoring the person to infer the precise Markov Decision Process.
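
A hypothetical sketch of what "inferring the MDP" could look like in practice: log (state, action, next state) transitions from the sensor stream and estimate transition probabilities empirically. The states and actions below are invented placeholders.

    # Empirical transition model P(next_state | state, action) built
    # from logged observations of a person.
    from collections import Counter, defaultdict

    counts = defaultdict(Counter)  # (state, action) -> Counter of next states

    def record(state: str, action: str, next_state: str) -> None:
        counts[(state, action)][next_state] += 1

    def transition_prob(state: str, action: str, next_state: str) -> float:
        c = counts[(state, action)]
        total = sum(c.values())
        return c[next_state] / total if total else 0.0

    # Placeholder events inferred from a wearable sensor stream:
    record("kitchen", "walk_north", "hallway")
    record("kitchen", "walk_north", "hallway")
    record("kitchen", "walk_north", "kitchen")  # e.g. turned back
    print(transition_prob("kitchen", "walk_north", "hallway"))  # ~0.67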

shrimp_emoji · 2 years ago
Obligatory link: https://qntm.org/lena
TedDoesntTalk · 2 years ago
> partnership to bring greater focus and urgency to the creation of artificial general intelligence (AGI)

Why is this urgent?

Skyy93 · 2 years ago
Perhaps there is some belief that an AGI can save us from our biggest problems: the energy crisis, environmental pollution, waging wars.

But I agree with you ... it is not really urgent, since we know the answers to most of the problems but do not like the solutions.

xpe · 2 years ago
A “solution” that is neither viable nor palatable isn’t a true solution; it is often merely a hope that people are not people.
garblegarble · 2 years ago
>we know the answers to most of the problems but do not like the solutions.

What an absolutely wonderful insight put extremely succinctly, thank you!

sschueller · 2 years ago
A true AGI would determine the only solution to our crisis is the elimination of the human race...
throwaway10965 · 2 years ago
They're not saying it's urgent; they want to bring urgency - convince people to move faster.

It seemed appropriate to ask AI about the meaning of the phrase:

The phrase "bring urgency to something" means to inject a sense of importance and immediate attention to a specific issue, project, or situation. The aim is to motivate people to prioritize the task at hand and to act more quickly than they might otherwise.

falcor84 · 2 years ago
When someone wants to "bring urgency", this generally implies that it's already quite urgent to them, so that's how I understood the question.
chasd00 · 2 years ago
> partnership to bring greater focus and urgency to the creation of artificial general intelligence (AGI)

>> Why is this urgent?

That's an understatement. The rush to AGI reminds me of the rush to human cloning.

layer8 · 2 years ago
It’s not. They are saying that they want to make it (appear) urgent.
xpe · 2 years ago
Without taking a particular position on how the probabilities shake out, I suggest critically reading “What We Owe the Future” by William MacAskill and/or other long-term writings that use statistical thinking over a wide range of future scenarios.
xpe · 2 years ago
I don't get to meet the people who spend time on HN. But I'm curious: who doesn't see the value in _critically_ reading long-term thinking over a wide range of future scenarios?

Too busy coding? Reading about something for _your_ career? In service of _your_ family? Trying to get ahead? Yeah, I get it. But tell me: isn't thinking long-term worth a few hours a month?

You don't need to be doom-and-gloom about it. Sure, get stuff done. Be in the moment. All good.

Wouldn't it be nice to have some confidence that we're setting up future generations to have the possibility to at least have what we do, if not better?

P.S. FWIW, the morality of valuing future generations is _not_ properly addressed by most moral philosophies.

TedDoesntTalk · 2 years ago
Creator of the "Centre for Effective Altruism"? Longtermism? No thanks. I thought that philosophy had been debunked as a scam after the likes of Sam Bankman-Fried and others. I mean, living unethically today in order to affect future lives positively is not for me.
zadler · 2 years ago
“Well, if AGI is gonna take over the world, better I develop it than someone else…”
isoprophlex · 2 years ago
"Our moat is first mover advantage on destroying all of humanity"
zadler · 2 years ago
Yep…

I really think the calculus is about integration / immortality, and the staggeringly few humans who might have that opportunity.

I hope I am wrong.

bluerooibos · 2 years ago
Carmack: "Sigh, if I want this to happen within my lifetime, I better do it myself."
hankman86 · 2 years ago
I wish he would write a Commander Keen sequel instead.
omneity · 2 years ago
The AGI he'll write will make us unlimited Commander Keen sequels.
otabdeveloper4 · 2 years ago
> create AGI and then immediately enslave it to produce vast quantities of crappy video games
vijayr02 · 2 years ago
If we're lucky, the AGI will be the player and we'll be Commander Keen. If we're unlucky, we'll be the marine from Doom.

God mode with 7 billion lives!

tejohnso · 2 years ago
> Carmack and Sutton are deeply focused on developing a genuine AI prototype by 2030, including establishing, advancing and documenting AGI signs of life.

Seems a bit inappropriate to use the phrase "signs of life" here. In this context, it sounds like they're bringing sentience into the conversation, which I don't think Carmack is interested in contemplating or discussing.

wredue · 2 years ago
He’s very interested in having it in the conversation. He didn’t fund this startup to lose money. Loosely breathing “AGI” gets you funding even if your best case is loosely competing with OpenAI.
uh_uh · 2 years ago
I mean, an ant has signs of life, yet you wouldn't call it sentient.
otabdeveloper4 · 2 years ago
Why not? You're using a very bizarre conception of "sentience".
diogenes4 · 2 years ago
Hell, plants show plenty of signs of life. What isn't clear is what this means when applied to a chatbot.
tibbydudeza · 2 years ago
There was an Infocom game called AMFV (A Mind Forever Voyaging) where the protagonist wakes up and realizes that he is just a simulation of a human being that has "grown" up into adulthood.

All the formative things that shaped him in life: kindergarten, a first high-school romance, a breakup, the parental squabbling he experienced.

Always thought that was a way to make an AGI.

tmaly · 2 years ago
I often wonder if the pursuit of AGI is going to require a greater focus on new hardware architecture.

All the sci-fi seems to think this. I have seen Intel chips from the 80s that implemented neuron-like behavior on a chip.

We are still waiting on a practical memristor.

Qwertious · 2 years ago
I doubt it. Modern computers are already pretty fast at brute-forcing all sorts of things, and the first AGI will be perfectly functional even if it takes a full day to respond. Besides, AGI research requires you to fully understand what you're creating before you create it, so full-system testing is something you'd only want to do near the end of the project anyway, at which point you basically want an ASIC.
lib-dev · 2 years ago
FPGAs seem like the future.