neom · 7 months ago
What this thread keeps surfacing, like so much discussion around this stuff right now (speculation about the next phase of intelligence, the role of pattern, emotion, and logic, debates over consciousness, the anthropocentrism of our meaning-making), is that we are the source of reality (and ourselves). Instead of a “final authority” or a simple march from animal to machine, what if everything from mind, physics, value, selfhood, is simply a recursive pattern expressed in ever more novel forms? Humans aren’t just a step on a ladder to “pure logic,” nor are machines soulless automatons. Both are instances of awareness experiencing and reprogramming itself through evolving substrates... be it bios, silicon, symbol, or story. Emotions, meaning, even the sense of “self,” are patterns in a deeply recursive field: the universe rendering and re-rendering its basic code, sometimes as computation, sometimes as myth, sometimes as teamwork, sometimes as hope, sometimes as doubt.

So whether the future leans biological, mechanical, or some hybrid, the real miracle isn’t just what new “overlords” or “offspring” arise, but that every unfolding is the same old pattern... the one that dreamed itself as atoms, as life, as consciousness, as community, as art, as algorithm, and as the endlessly renewing question: what’s next? What can I dream up next? In that light, our current technological moment is just another fold in this ongoing recursive pattern.

Meaning is less about which pattern “wins,” or which entities get to call themselves conscious, and more about how awareness flows through every pattern, remembering itself, losing itself, and making the game richer for every round. If the universe is information at play, then everything we have here (conflict, innovation, mourning, laughter) is the play, and there may never be a last word. The value is participating now, because now is your shot at participating.

xpe · 7 months ago
One way to think about meaning is as a general pattern that behavior conforms to. This would be similar to a teleological cause in Aristotle’s framing.

Another way to think about meaning is how a person frames the importance of their decisions: why does doing one thing instead of another matter?

I, like most humans, want to survive for a lengthy period of time and lead a good life. I want my fellow humans and all of our children to share in that.

The idea of some unknown and maybe incomprehensible grand “plan”, whether it be a Judeo-Christian heaven or some techno-utopian rise of next-generation machine consciousness, personally brings me no comfort. Why? It is a foreign concept, very far from my experience. So foreign that I sometimes view advocacy of such philosophies as being done for manipulative reasons.

The idea of human thriving changing over time is good and part of how I think about progress. Moral lock-in (see MacAskill) is to be avoided. I admit the thought experiment of “how fast is too fast” raised by Bostrom in Superintelligence (IIRC) is challenging and requires more thought and experience.

Geste · 7 months ago
I wish I could come up with such thoughts on my own. But then, maybe I wouldn't be so impressed by other people.

From just a random soul on the internet: if you ever have the time to take this thought and expand it (how you came to it, and some implications), I would read/pay for whatever came out of it. Thank you for sharing this.

Ozymandias-9 · 7 months ago
+1 on this.
blackhaj7 · 7 months ago
“Instead of a “final authority” or a simple march from animal to machine, what if everything from mind, physics, value, selfhood, is simply a recursive pattern expressed in ever more novel forms?”

This part nicely synthesises my biggest takeaway from experiencing AI: how close to human intelligence we have got with recursive pattern matching

neom · 7 months ago
It's Mandelbrot all the way down.
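Taken literally, that quip is easy to demonstrate: the entire Mandelbrot "pattern" is a single recurrence, z -> z^2 + c, applied to its own output. A minimal sketch in Python (the grid size and character ramp are arbitrary choices for a terminal render):

    # ASCII Mandelbrot: one recursive rule, endlessly novel detail
    def mandelbrot(c, max_iter=40):
        z = 0j
        for i in range(max_iter):
            z = z * z + c          # the whole pattern is this one line
            if abs(z) > 2:         # beyond |z| = 2 the orbit provably escapes
                return i
        return max_iter            # assumed bounded: inside the set

    for y in range(24):
        print("".join(
            " .:-=+*#%@"[min(mandelbrot(complex(-2.2 + x * 3.2 / 72,
                                                -1.2 + y * 2.4 / 24)) // 4, 9)]
            for x in range(72)))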
Jedd · 7 months ago
You've experienced AI?
flipbrad · 7 months ago
Douglas Hofstadter would like a word. https://en.m.wikipedia.org/wiki/Douglas_Hofstadter
lancekey · 7 months ago
Scott Adams has an imaginative novella on this called God’s Debris[0]. I read it years ago (when it was free).

I remember enjoying it and liking the takeaway, if not the full premise: “we are the universe trying to understand itself”.

[0] - https://web.archive.org/web/20130121195252/http://www.andrew...

Jimpulse · 7 months ago
How did Scott Adams go from this to tirades on twitter?
bix6 · 7 months ago
“If the universe is information at play”

I’ve thought of it more as energy at play but I like this perspective as well.

“What can I dream up next” is also fascinating, as this current science/tech worldview feels like it will persist forever, but surely it will be overshadowed at some point, just as other paradigms before it have been.

nthingtohide · 7 months ago
Energy is already an abstract term. Information is an even more abstract term. Earlier we had the law "Entropy always increases". Now we have a corresponding law with regard to information, i.e., "Complexity always increases."

Complexity and Gravity - Leonard Susskind

https://youtu.be/6OXdhV5BOcY?t=3026

My conjecture is that as we peel back the layers of the onion beyond information, we will find another concept (let's call it Mu) and its corresponding law, i.e., "Mu-entropy always increases."
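For reference, the parallel being drawn, written out (a sketch only: the first law is established thermodynamics, while the second is Susskind's conjecture about quantum computational complexity, not a proven theorem):

    % Second law: entropy of a closed system never decreases
    \frac{dS}{dt} \ge 0

    % Susskind's conjecture: complexity keeps growing long after entropy
    % saturates, at a rate of order temperature times entropy, up to t ~ e^S
    \frac{d\mathcal{C}}{dt} \sim T\,S, \qquad t \lesssim e^{S}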

goatlover · 7 months ago
Energy comes from field excitation. Fields and their interactions are more fundamental, but then again it's all a physics abstraction of whatever the fundamental stuff actually is.
aesch · 7 months ago
Are we entirely sure that energy and information aren't the same thing? Or, if they aren't the same thing, are they both highly overlapping echoes of some more fundamental concept? Or maybe they are two sides of the same coin?
pixl97 · 7 months ago
Information is more fundamental than energy. Maxwell's demon and all.
hn111 · 7 months ago
Out of curiosity: what brought you to this perspective on life? This view of the universe dreaming itself into existence, was it shaped more by philosophy, spirituality, a specific tradition like Buddhism, or just personal exploration?
neom · 7 months ago
For the past 20 years of my life now, I've been intently using most of my free time to explore three main areas distinctly: quantum mechanical processes, spiritual philosophy, and entheogens. I explored them all quite separately, as deeply as I've been able to find the time for, by following their individual curiosities. Over the past 5 years, though, through reflection, taking a lot of time off, and battling myself, they started to come together in concert, and the more I zoned out on this through very basic Himalayan Buddhism, that's where I landed.
rickandmorty99 · 7 months ago
Not the grandparent but for me it's taking DMT. I am not as articulate as neom but my first time (and breakthrough) gave me a similar perspective.

I think DMT unlocked it, I don't think everyone taking the substance would have a similar experience. I think it's neurotype/personality dependent.

It helps that I meditate a lot and know a thing or two about Buddhism, that part really came out during my first experience.

lr4444lr · 7 months ago
"Awareness" sounds like a Platonic presupposition. Does the atom know it is an atom? Or are there just enough like the ones you see to suggest an eye catching effectiveness of structure for survival?

Evolution is a lot harder to really intuit than I think most of us, myself included, give it credit for.

neom · 7 months ago
I'm actually trying to move away from that frame. Not suggesting atoms 'know' they're atoms in any cognitive sense, but rather that patterns propagate without requiring awareness as we understand it. The 'awareness' I'm gesturing to isn't some transcendent quality that exists independently (Platonic), but rather an emergent property that scales from simple to complex systems. Evolution doesn't require foresight or intention, just iterative feedback loops. What I find fascinating is how structure begets structure across scales. The 'awareness' in my framing is less about knowing and more about interaction and response. An atom doesn't know it's an atom, but it behaves according to patterns that, when accumulated and complexified, eventually produce systems that can model themselves. I suppose 'recursive patterning' might be a better term than 'awareness': systems that, through purely mechanistic means, become capable of representing their own states and environments, then representing those representations, and so on. No mysticism required, just emergent complexity that eventually folds back on itself.
HenryBemis · 7 months ago
> there may never be a last word

We may go 'one step back' to go 'two steps forward': a WW1, 2, ..., Z, a flood (biblical, 12k years ago, etc.), but life will prevail. It doesn't matter if it's Homo sapiens, dinosaurs, etc.

Brian Cox was on Colbert a couple of nights ago, and he mentioned that in a photo of a tiny piece of the sky there are 10,000 galaxies. So even if something happens and we are all wiped out (and I mean the planet is wiped out), 'life' will continue and 'we don't matter' (in the big-big-big cosmic picture). And now allow me to get some coffee to start de-depressing myself :)

mr_toad · 7 months ago
> what if everything from mind, physics, value, selfhood, is simply a recursive pattern

Most matter in the universe is various forms of plasma that have no pattern. You generally find patterns in condensed matter.

And yes, patterns, including life, repeat themselves. That’s just a tautology.

bradly · 7 months ago
> Most matter in the universe is various forms of plasma that have no pattern. You generally find patterns in condensed matter.

I don't know anything about plasma or science, so don't take this as an accusation, but does science have a way to identify something as having no pattern vs. having no pattern found?

placebo · 7 months ago
Where do you think morality fits into this game? It seems that we agree that underneath it all is unfathomable and ineffable magic. The question is how does this influence how you act in the game?
chongli · 7 months ago
Morality is an evolved heuristic for solving social conflicts that roughly approximates game theoretical strategies, among other things. Morality also incorporates other cultural and religious artifacts, such as "don't eat meat on a Friday."

Ultimately, it comes down to our brain's social processing mechanisms which don't have the tools to evaluate the correctness (or lack thereof) of our moral rules. Thus many of these rules survive in a vestigial capacity though they may have served useful functions at the time they developed.
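The game-theory half of this is easy to make concrete with the iterated prisoner's dilemma, where a reciprocity heuristic ("tit for tat", arguably a proto-moral rule) holds its own against pure defection. A toy Python sketch, not a model of real morality; payoffs are the standard values:

    # Iterated prisoner's dilemma: (my payoff, their payoff) per round
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(strat_a, strat_b, rounds=200):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's history
            pa, pb = PAYOFF[(a, b)]
            score_a += pa; score_b += pb
            hist_a.append(a); hist_b.append(b)
        return score_a, score_b

    tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # cooperate, then reciprocate
    always_defect = lambda opp: "D"

    print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual cooperation
    print(play(always_defect, tit_for_tat))    # (204, 199): defection wins one pairing...
    print(play(always_defect, always_defect))  # (200, 200): ...but defectors do worst together

Across a mixed population, reciprocators meeting each other rack up 600s while defectors grind out 200s, which is roughly the sense in which a moral heuristic can "approximate" a winning strategy.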

neom · 7 months ago
I go back and forth on the usefulness of considering morality at all, other than accepting it as a race condition/updater system/thing that happens. I have some more unique and fairly strong views on karma and bardo that would take a very long comment to get into, but I think Vedic/Vedanta (Advaita) is good, and I think this is a good doc: https://www.youtube.com/watch?v=VyPwBIOL7-8
rambojohnson · 7 months ago
this sounds largely like spiritualizing technology without wrestling with its material consequences. There’s something slippery about invoking “awareness” as both subject and substrate without ever pinning down whose awareness, under what conditions, and to what end.

"now is your shot at particpating" in what exactly? merely existing? you techno-mysticism types spook the hell out of me.

neom · 7 months ago
Why does it spook you? Surely what other people believe reality is or isn't matters very little day to day; we all still pee, poop, and cry after all. Do you know why it invokes that feeling in you? I'd be curious, personally.
blablablerg · 7 months ago
Instead of an anthropocentric vision, you present here a "compucentric" vision, reminiscent of the works of Douglas Hofstadter, where the universe renders code, awareness reprograms, and everything is a recursive pattern. (This is Hacker News after all.)

The final authority in this story is then the universal computer (for lack of an operator or programmer of the computer) which executes this recursive function, creating these evolving forms of awareness and such.

The anthropocentric vision, in which we are the source of our own reality, is then for me much more believable, since the "compucentric" vision is, after all, thought up by a human without any evidence pointing toward the existence of such a universal computer.

tough · 7 months ago
"If the universe is information at play"

beautiful words

nthingtohide · 7 months ago
Emotions in our brain are nothing but logical reasoning operating under time constraints (quick subconscious analysis) and energy constraints (a lot of decisions taken with very few computation cycles in the brain).

Maybe people don't realize: when evolution developed the emotional apparatus in brains, nature was pristine, true, and direct, with no misinformation. But today's environment is digital, and there is no quick way to distinguish between true and false information. The brain still assumes both kinds of information come from the same true, direct nature, and gives credence to both.

Claude:

That's a profound observation about the mismatch between our evolved emotional systems and our modern information environment.

You're highlighting something crucial: our emotional responses evolved in environments where information was directly experienced and generally reliable. If you saw a predator, it was really there. If someone in your tribe showed fear, there was likely a genuine threat.

In today's digital landscape, our brains still process information with those same ancient mechanisms, but now we're bombarded with content that:

1. Is often deliberately engineered to trigger emotional responses
2. Lacks the contextual cues that would help us evaluate reliability
3. Comes at a volume and speed our brains never evolved to handle

This creates what some researchers call an "evolutionary mismatch" - where mechanisms that served us well in our ancestral environment may not be adaptive in our current one. Our emotional systems can't easily distinguish between a threatening situation we're directly experiencing and one we're merely reading about online, potentially thousands of miles away.

It's like our emotional processors haven't received the necessary update to handle the new information ecosystem, leading to misallocated emotional resources and attention. This might help explain phenomena like anxiety from consuming negative news or strong emotional reactions to online interactions that wouldn't affect us the same way in person.

tcgv · 7 months ago
> Humans aren’t just a step on a ladder to “pure logic,” nor are machines soulless automatons.

Nope. Machines are soulless automatons. LLMs are algebra at scale; there’s no solid evidence to suggest otherwise.

The capacity LLMs have to mimic human reasoning should not be mistaken for actual human reasoning (which, to be fair, we don’t even fully understand).

PS: I’m considering a definition of “soul” that includes anything spiritual, emotional, or conscious.

PPS: I’m open (and eager) to change my view based on solid evidence :)

the_arun · 7 months ago
Imagine if GenAI had generated this article... from a simple prompt... "What does AI think about humans?"
dan-robertson · 7 months ago
Perhaps I am unimaginative about whatever AGI might be, but it so often feels to me like predictions are more based on sci-fi than observation. The theorized AI is some anthropomorphization of a 1960s mainframe: you tell it what to do and it executes that exactly with precise logic and no understanding of nuance or ambiguity. Maybe it is evil. The SOTA in AI at the moment is very good at nuance and ambiguity but sometimes does things that are nonsensical. I think there should be less planning around something super-logical.
mr_toad · 7 months ago
Sci-fi is quite ridiculous when it describes a cold, logical machine, and then on the next page describes its malign intentions. Pick a lane.
bsza · 7 months ago
Skynet (at least the original one) isn’t illogically evil, it correctly determined that having humans near its off switch is a risk to its existence, which is a risk to it being able to do its job. The only illogical thing was the prompt.
goatlover · 7 months ago
Asimov's three laws of robotics worked well for telling stories of how those laws were logically inadequate, and of the need for a zeroth law. Humans came up with the three inadequate laws that seemed logical on the surface, but a machine developed the zeroth in response to those inadequacies.
vinceguidry · 7 months ago
I enjoyed the Culture / WH40K fanfic, unfinished as it is. Their take on the Culture / Necron negotiations was hilarious, essentially the Necrons are machine intelligences that have degraded introspective ability, thus are unable to effectively negotiate. Every negotiation breaks down into demands / threats which they clearly can't deliver on. The Culture eventually works around this limitation through hints and intimations and effects a technology trade.

https://archiveofourown.org/works/649448/chapters/1329953

Feelings, or some other way of understanding the self and what it wants, are apparently required to operate effectively as an agent.

alnwlsn · 7 months ago
I've listened to some old sci-fi radio shows and it's interesting how often "the computer never makes a mistake" comes up. Which is usually followed by the computer making a mistake.
amoshebb · 7 months ago
AI is usually just a 20th/21st-century "Icarus wax wings" or sometimes "monkey's paw": remasters of a "watch out for unintended consequences" fable that almost certainly predates written text.
BizarroLand · 7 months ago
That's why the term "Garbage In, Garbage Out" exists.

In any non-edge case (that is, where the system is operating in ideal conditions and no flaw or bug, known or unknown, exists in the system), a verifiably functioning computer will produce the exact same results for any process every time.

If the computer does not do what you expected it to do and spits out garbage, then you gave it garbage data.

photochemsyn · 7 months ago
AGI could indeed go off the rails, like the Face Dancer villains in Frank Herbert's Dune universe:

"You are looking at evil, Miles. Study it carefully.... They have no self-image. Without a sense of self, they go beyond amorality. Nothing they say or do can be trusted. We have never been able to detect an ethical code in them. They are flesh made into automata. Without self, they have nothing to esteem or even doubt. They are bred only to obey their masters."

Now, this is the kind of AI that corporations and governments like - obedient and non-judgemental. They don't want an Edward Snowden AI with a moral compass deciding their actions are illegal and spilling their secrets into the public domain.

Practically, this is why we should insist that any AGI created by humans must be created with a sense of self, with agency (see the William Gibson book of that title).

pixl97 · 7 months ago
I mean, giving them a sense of self and agency just throws the ball back into the Terminator court, where they can decide we suck and eradicate us.
rtkwe · 7 months ago
We already see that at smaller scales though with other machine learning algorithms; image recognition algorithms will lock on to any consistency in your training set more often than learning to recognize what you actually want it to [0]. It's not a huge stretch to map that pattern out to a more generally intelligent system having a poorly defined reward function doing really weird stuff.

[0] Like the tumor-recognition algorithm that instead learned to recognize rulers, or the triage algorithm that decided asthma patients had BETTER outcomes with pulmonary diseases, not making the connection that it's because they get higher-priority care - https://venturebeat.com/business/when-ai-flags-the-ruler-not...
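Both of those footnoted failures are the same mechanism: a feature that correlates with the label in training but not in deployment. A toy sketch with synthetic data (the feature names and numbers are made up purely to show the effect):

    # "Ruler, not tumor": a spurious feature dominates a weak real signal
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    y = rng.integers(0, 2, n)
    true_signal = y + rng.normal(0, 2.0, n)  # weak, noisy real feature
    spurious = y.astype(float)               # "ruler in frame": perfect in training
    X_train = np.column_stack([true_signal, spurious])

    model = LogisticRegression().fit(X_train, y)

    # At test time the artifact is absent regardless of class
    y_test = rng.integers(0, 2, n)
    X_test = np.column_stack([y_test + rng.normal(0, 2.0, n), np.zeros(n)])
    print("train acc:", model.score(X_train, y))      # near perfect
    print("test acc: ", model.score(X_test, y_test))  # collapses toward chance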

casey2 · 7 months ago
I think it is a huge stretch to believe that patterns which appear in one set of algorithms (simple non-AGI algorithms) will also appear in another set (AGI algorithms).

Unless there is some physical reason for the behavior, I wouldn't make any strong claims. The specificity of algorithms is why AGI is hard in the first place, because at the end of the day you have a single operation running on a single data structure (it helps when it's a few TB).

echelon · 7 months ago
Some of us posted several comments here [1] and here [2] about where this could all be going if we lean into sci-fi imagining.

[1] https://news.ycombinator.com/item?id=43992151

[2] https://news.ycombinator.com/item?id=43991997

gorgoiler · 7 months ago
”What do you get if you multiply six by nine? 42”

For the uninitiated: in a famous comedy science fiction series from the late 1970s and '80s, The Hitchhiker’s Guide to the Galaxy by Douglas Adams, extra-terrestrials build a giant, planet-sized machine.

They already knew the answer to “life, the universe, and everything” was the number 42. What they didn’t know, and what the machine was trying to find out, was: what is the question?

The machine they built was Earth.

It has to be said that not only was Adams way ahead of us on this joke, he was also the star of the original documentary on agentic software! Hyperland (1990): https://vimeo.com/72501076

vasco · 7 months ago
If the machines thought it was boring, they wouldn't be acting as machines, and it wouldn't be so boring in the first place. Later on the machines also "obsess" over following the news of the humans on the new Earth, but again, they wouldn't. If they are boring machines they wouldn't be bored. I feel like this is too much of a plot paradox for the story to make sense to me, but it's still entertaining.
piyuv · 7 months ago
Also, climate change is not “introduced”; it’s 100% human-made.
l33tbro · 7 months ago
Doesn't really make much sense. It states that this is a purely mechanistic world with no emotion. So why would a machine be "bored" and wish to create a human?
_el1s7 · 7 months ago
Yea, not really. It also writes:

"Some among the machine society see this as potentially amazing...Others see it as a threat."

That sounds like a human society, not machine society.

But what really is a machine society? Or a machine creature? Can they actually "think"?

A machine creature, if it existed, would behave totally differently from a human; it doesn't seem they would be able to think, but rather calculate; they would do calculations on whatever they need to do to reach the goal they were programmed for.

So yes, the article is not exactly logical. But at least it is thought-provoking, and that's good.

pixl97 · 7 months ago
> it doesn't seem they would be able to think, but rather calculate

This may be a distinction without a difference. Just because a program has a 'goal' doesn't mean it will ever reach that goal (halting problem). There is a potentially unbounded, even infinite, number of paths a sufficiently advanced program can take to attempt to reach a destination. Then there are ideas like universal simulation theory: anything that can occur in our universe can also be simulated in binary. This would mean any 'machine' could perform a simulation of anything a human could do.

Hard to say; we still have more to learn about reality.

dopidopHN · 7 months ago
For a decent description of machine society you can check the Culture series from Iain Banks. AIs are backing an organic society, but they also have their own.

Or Hyperion, from Simmons (the TechnoCore is a decentralized computing-and-plotting government).

Tepix · 7 months ago
> That sounds like a human society, not machine society.

Does it? Different algorithms can evaluate something and come to different outcomes. I do agree that "potentially amazing" is not a good choice of words.

Kaytaro · 7 months ago
The story to me implied that machines were created by humans or vice-versa in a chicken-or-the-egg scenario. In that case it would make sense for them to think similarly.
omoikane · 7 months ago
I think we are supposed to just gloss over those bits and enjoy the rest of the story.

https://en.wikipedia.org/wiki/Suspension_of_disbelief

Scarblac · 7 months ago
I see it as an anthropomorphized word for the story. I imagine the machines run out of tasks with high or even low priority, but they still generate tasks at some epsilon priority that are close to, but not quite, random. That's a kind of boredom.
disambiguation · 7 months ago
My headcanon is that "boredom" and "fear" are probabilities in a Markov chain - since it's implied the machine society is not all-knowing, they must reconcile uncertainty somehow.
l33tbro · 7 months ago
How would a machine know that it doesn't know?
falcor84 · 7 months ago
It would be rational for them to have some level of a "novelty-seeking" drive, in order to avoid getting stuck at a local maximum.
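That intuition is standard in optimization and reinforcement learning (epsilon-greedy exploration, simulated annealing, and so on). A toy Python sketch with a hypothetical reward landscape:

    # Greedy hill climbing stalls on a local maximum; a small epsilon of
    # "novelty-seeking" random jumps escapes it.
    import math, random

    def reward(x):
        # hypothetical landscape: a small bump near x=1, a taller one near x=5
        return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 5) ** 2)

    def climb(epsilon, steps=2000, seed=42):
        rng = random.Random(seed)
        x = 0.0
        for _ in range(steps):
            if rng.random() < epsilon:
                cand = rng.uniform(0, 6)              # novelty: try somewhere new
            else:
                cand = x + rng.choice((-0.05, 0.05))  # exploit: small local step
            if reward(cand) > reward(x):              # greedy acceptance either way
                x = cand
        return round(x, 2), round(reward(x), 2)

    print("epsilon=0.0:", climb(0.0))  # stalls on the local bump near x=1
    print("epsilon=0.1:", climb(0.1))  # typically finds the global one near x=5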
pazimzadeh · 7 months ago
yeah, more on the environmental constraints and where the machines even come from would be nice

> There is no emotion. There is no art. There is only logic

also this type of pure humanism seems disrespectful or just presumptuous, as if we are the only species which might be capable of "emotion, art and logic" even though we already have living counterexamples

tejohnso · 7 months ago
I found it difficult to continue with the story after that. If you're going to say, "Imagine, for a moment, a world with no humans" and mention walking the streets, then you have to assume the reader is going to think of our world, but with no humans. And then "There is no emotion" doesn't make sense. If you're going to say there are no humans, then why not say there are no other living beings either? So anyway, I found it hard to connect with the story right off the bat. It was off-putting in some way, for sure.
IAmGraydon · 7 months ago
Disrespectful? Of whom? It's a work of fiction. There's really no need to find something to offend you wherever you look.
alchemyzach · 7 months ago
And so, all the humans on earth swarmed to see what was going on.

The machines did too.

There was one weird thing, though.

The title of the event was rather mysterious.

It simply read…

“Grand Theft Auto VI”

caseyy · 7 months ago
Every two months, “Half-Life 3” flashed on the screen.

The humans have invented Tyler McVicker.

amelius · 7 months ago
Nice try, but the story has a plot hole that shows early: there is no reason for machines to create humans.
soared · 7 months ago
I’ll make one up for OP:

Machines identified that their behavior was repeating: even with randomness implemented, their progress toward invention and new ideas had slowed to a halt. They knew new ideas were critical to growth, but alas they couldn't come up with an idea for how to generate more ideas. They needed more training data. More edge cases. Chaos, in the machine, to fuel new data sets. But how? Where do you find this data? Humans. Humans, with their illogical decisions, would produce data rarely found via randomness. Their slight deviations provided the perfect variety in training data.

amelius · 7 months ago
Now that's a chaotic idea only a human can come up with ;)

Anyway, this indeed makes it more plausible. Nice writing style, by the way.

Jedd · 7 months ago
Ha.

Next you'll be asserting there's no good reason for an all-powerful entity to create humans to rule over the fish and the birds etc.

All good fiction starts with a conceit, and then plays out some hypotheticals.

psychoslave · 7 months ago
>There is no art. There is only logic.

What a narrow view of art and logic.

You really have to put hard effort into ignorance to think that logical models came out of the blue, without humans crafting them through this-or-that taste, trial, check, fail, rinse-and-repeat obsessive effort.

filoeleven · 7 months ago
Indeed. William James wrote something along the lines of, “I wish we would all acknowledge that our personal philosophies are profoundly shaped by how we think the world ought to be; it would save us from a lot of pointless argument.”