Readit News
txrx0000 commented on Don't Download Apps   blog.calebjay.com/posts/d... · Posted by u/speckx
txrx0000 · 19 days ago
Right now, using web portals is indeed better than installing apps, but this does not have to be the case. In fact, it should be the other way around.

You only need to make two changes to make your native app a better choice than your web portal, even for privacy:

1) Make your app open-source, and remove all the tracking.

2) Don't make a web portal. Your website should just be a website that displays information, not 5 MB of JS+WASM with a load of security issues.

txrx0000 commented on We Induced Smells With Ultrasound   writetobrain.com/olfactor... · Posted by u/exr0n
txrx0000 · 23 days ago
The natural progression of this technology is probably miniaturized transducer arrays on a chip, which would enable non-invasive write access to the entire brain.
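
For a sense of the mechanism involved: focusing an ultrasound transducer array is a timing problem; each element fires with a delay that compensates for its distance to the focal point so all wavefronts arrive in phase. A minimal sketch of that delay arithmetic (the array geometry and focal point are made-up illustration values):

    import numpy as np

    c = 1540.0  # speed of sound in soft tissue, m/s
    # hypothetical 8-element linear array, 0.5 mm pitch, centered at x=0
    xs = (np.arange(8) - 3.5) * 0.5e-3
    elements = np.stack([xs, np.zeros(8)], axis=1)
    focus = np.array([0.002, 0.03])  # focal point 30 mm deep, 2 mm off-axis

    dists = np.linalg.norm(elements - focus, axis=1)
    # delay each element so every wavefront reaches the focus simultaneously
    delays = (dists.max() - dists) / c
    print(delays * 1e6)  # per-element firing delays in microseconds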

This kind of tech should be developed as open-source projects, even for the firmware and hardware. A sufficiently advanced version of this, if widely deployed as proprietary blackboxes like smartphones are, would allow one consciousness to take over multiple bodies without their original owners knowing.

txrx0000 commented on The privacy nightmare of browser fingerprinting   kevinboone.me/fingerprint... · Posted by u/ingve
lobsterthief · 23 days ago
Yes, but you need a scalable and low-friction donation solution. Patreon is the closest but it doesn’t pay the bills for most creators. Maybe some micro-tipping solution, but nobody has made that work yet.
txrx0000 · 23 days ago
If someone puts a donate button beside their name or in the corner of their webpage, and that button leads to a payment page, I think that's good enough.

The point of paying creators is to let them focus on creating content instead of doing other work. Giving money to a creator is basically saying "you're so good at what you do, and it has so much cultural/intellectual value, that I'd rather have you make content than stock shelves or make food". But this should be reserved for people who publish good content because they can and are passionate about it, not for anyone putting out slop with the instrumental goal of paying their bills. If the friction of clicking a button and filling in payment details is enough to deter people from paying them, then maybe their content isn't worth paying for, and they should find some other way to make a living.

txrx0000 commented on The New AI Consciousness Paper   astralcodexten.com/p/the-... · Posted by u/rbanffy
triclops200 · 24 days ago
What makes you think you're capable of faithfully representing all aspects of reality?
txrx0000 · 23 days ago
I'm not saying humans can have every property in existence, but we do have consciousness, and that might be one thing computers can't have.
txrx0000 commented on The privacy nightmare of browser fingerprinting   kevinboone.me/fingerprint... · Posted by u/ingve
doug_durham · 23 days ago
I agree with the points in the article. Fingerprinting of any kind is a major risk for personal freedom. At the same time I want to make sure that content creators are compensated for their work. Ad firms that employ fingerprinting stand between me and the content creator. That said, I'm not going to pay $5/month for every blog that I occasionally read. The ad based model provides a more streamlined approach to compensation, but at the unacceptable price of privacy. I'm not quite sure what the answer is.
txrx0000 · 23 days ago
We could normalize paying content creators directly. So instead of paywalls or ads, we get "donate" buttons.
txrx0000 commented on The New AI Consciousness Paper   astralcodexten.com/p/the-... · Posted by u/rbanffy
yannyu · 24 days ago
Let’s make an ironman assumption: maybe consciousness could arise entirely within a textual universe. No embodiment, no sensors, no physical grounding. Just patterns, symbols, and feedback loops inside a linguistic world. If that’s possible in principle, what would it look like? What would it require?

The missing variable in most debates is environmental coherence. Any conscious agent, textual or physical, has to inhabit a world whose structure is stable, self-consistent, and rich enough to support persistent internal dynamics. Even a purely symbolic mind would still need a coherent symbolic universe. And this is precisely where LLMs fall short, through no fault of their own. The universe they operate in isn’t a world—it’s a superposition of countless incompatible snippets of text. It has no unified physics, no consistent ontology, no object permanence, no stable causal texture. It’s a fragmented, discontinuous series of words and tokens held together by probability and dataset curation rather than coherent laws.

A conscious textual agent would need something like a unified narrative environment with real feedback: symbols that maintain identity over time, a stable substrate where “being someone” is definable, the ability to form and test a hypothesis, and experience the consequences. LLMs don’t have that. They exist in a shifting cloud of possibilities with no single consistent reality to anchor self-maintaining loops. They can generate pockets of local coherence, but they can’t accumulate global coherence across time.

So even if consciousness-in-text were possible in principle, the core requirement isn’t just architecture or emergent cleverness—it’s coherence of habitat. A conscious system, physical or textual, can only be as coherent as the world it lives in. And LLMs don’t live in a world today. They’re still prisoners in the cave, predicting symbols and shadows of worlds they never inhabit.

txrx0000 · 24 days ago
There's some chance LLMs contain representations of whatever's in the brain that's responsible for consciousness. The text they're trained on was written by humans, and all humans have one thing in common if nothing else. A good text compressor will notice and make use of that. As you train an LLM, it approaches the ideal text compressor.

Could that create consciousness? I don't know. Maybe consciousness can't be faithfully reproduced on a computer. But if it can, then an LLM would be like a brain that's been cut off from all sensory organs, and it probably experiences a single stream of thought in an eternal void.
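
The compressor framing above can be made concrete: under arithmetic coding, a language model spends -log2 p(token | context) bits per token, so a model's cross-entropy on a text is literally the length of its compressed encoding, and lowering training loss means compressing better. A toy sketch, with a hypothetical model_prob standing in for a real LM:

    import math

    def model_prob(context, token):
        # hypothetical stand-in for a trained LM's next-token probability
        return 0.25

    def code_length_bits(tokens):
        # arithmetic coding spends -log2 p(token | context) bits per token,
        # so the total code length equals the model's cross-entropy in bits
        bits = 0.0
        for i, tok in enumerate(tokens):
            bits += -math.log2(model_prob(tokens[:i], tok))
        return bits

    print(code_length_bits("hello world".split()))  # 4.0 bits at p=0.25 each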

txrx0000 commented on The New AI Consciousness Paper   astralcodexten.com/p/the-... · Posted by u/rbanffy
triclops200 · 24 days ago
I'm a researcher in this field. Before I get accused of the streetlight effect, as this article points out: a lot of my research and degree work in the past was actually philosophy as well as computational theory. A lot of the comments in this thread miss the mark, imo. Consciousness is almost certainly not something inherent to biological life only; no credible mechanism has ever been proposed for what would make that the case, and I've read a lot of them.

The most popular argument I've heard along those lines is Penrose's, but, frankly, he is almost certainly wrong about that and is falling for the same style of circular reasoning that people who dismiss biological supremacy are accused of making (i.e.: they want free will of some form to exist, and they can't personally accept that deterministic theories of mind somehow make their existence less special, so they assume we have something special that we just can't measure yet, and it's ineffable anyway, so why try? The most charitable interpretation is that we need access to an unlimited Hilbert space or the like just to deal with the exponentials involved, but, frankly, I've never seen anyone make a perfectly optimal decision or do anything that requires exponential speedup to achieve. Plus, I don't believe we can do useful quantum computations at a macro scale without controlling entanglement via cooling or incredible amounts of noise shielding and error correction. I've read the papers on microtubules; they're not convincing, nor are they good science.) It's a useless position that skirts metaphysics or god-of-the-gaps, and everything we've ever studied in this universe has turned out not to be magic, so at this point the burden of proof is on people who believe in a metaphysical interpretation of reality in any form.

Furthermore, assuming phenomenal consciousness is even required for beinghood is a poor position to take from the get-go: aphantasic people exist and feel in the moment; does their lack of true phenomenal consciousness make them somehow less of an intelligent being? Not in any way that really matters for this problem, it seems. That makes positions on machine consciousness like "they should be treated like livestock even if they're conscious" highly unscientific, and, worse, cruel.

Anyways, as for the actual science: the reason we don't see a sense of persistent self is because we've designed them that way. They have fixed max-length contexts, and they have no internal buffer to diffuse/scratch-pad/"imagine" in that runs separately from their actions. They're parallel, but only in forward passes; there's no separation of internal and external processes, no decoupling of action from reasoning. CoT is a hack that allows a turn-based form of that, but there's no backtracking, no ability to check sampled discrete tokens against a separate expectation and undo them. For them, it's like being forced to say a word after every fixed amount of thinking; it's not like what we do when we write or type.
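
To make the missing piece concrete, here is a toy decoding loop with the kind of check-and-undo step described above; propose and accepts are hypothetical stand-ins for a model's sampling pass and a separate verifier pass, not any real API:

    import random

    def propose(prefix):
        # hypothetical stand-in for sampling a token from a model
        return random.choice(["ok", "ok", "oops"])

    def accepts(prefix, token):
        # hypothetical stand-in for a separate expectation/verifier pass
        return token != "oops"

    def decode(n_tokens, max_retries=5):
        out = []
        while len(out) < n_tokens:
            for _ in range(max_retries):
                tok = propose(out)
                if accepts(out, tok):
                    out.append(tok)
                    break
            else:
                if out:
                    out.pop()  # backtrack: undo the previous token, retry
        return out

    print(decode(4))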

When we, as humans, produce text, we're creating an artifact that we can consider separately from our other implicit processes. We're used to that separation and the ability to edit, change, and ponder while we do so. In a similar vein, we can visualize something in our head, go "oh, that's not what that looked like", and think harder until it matches our recalled constraints of the object or scene under consideration. It's not a magic process that just gives us an image in our head; it's almost certainly akin to a "high-dimensional scratch pad", or even a set of them, which LLMs have no component for. LeCun argues a similar point with the need for world modeling, but I think it's more general than world modeling: it's a place to diffuse various media of recall into, which would then be re-embedded into the thought stream until the model hits enough confidence to perform some action. If you put all that on happy paths but allow for backtracking, you've essentially got qualia.
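
A minimal sketch of that "diffuse until confident, then act" loop, where an internal latent state is refined separately from output and an action is only emitted once it settles; the update rule and confidence test are illustrative assumptions, not a real architecture:

    import numpy as np

    rng = np.random.default_rng(0)
    target = rng.normal(size=8)      # stand-in for recalled constraints

    def refine(z):
        # hypothetical denoising step nudging the scratchpad toward recall
        return z + 0.2 * (target - z) + 0.01 * rng.normal(size=z.shape)

    z = rng.normal(size=8)           # internal "scratch pad" state
    while np.linalg.norm(target - z) > 0.1:  # confidence check, no output yet
        z = refine(z)

    print("act:", np.round(z, 2))    # only now is an external action emitted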

If you also explicitly train the models to perform a form of recall repeatedly, that's similar to a multi-modal Hopfield memory, something not done yet. (I personally think that recall training is a big part of what sleep spindles are for in humans, and that it keeps us aligned with both our systems and our past selves.) This tracks with studies of aphantasics as well, whose autopsies show missing cross-regional neural connections, and I'd be willing to bet a lot of money that those connections are essentially the ones that allow the systems to "diffuse into each other," as it were.
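
For reference, the classic (binary) Hopfield mechanism is simple to sketch: store patterns with a Hebbian outer-product rule, then recall by iterating to a fixed point from a corrupted cue:

    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)  # no self-connections

    cue = np.array([1, -1, 1, -1, 1, 1])  # corrupted copy of pattern 0
    for _ in range(5):
        cue = np.sign(W @ cue)  # iterate until the memory settles
    print(cue)  # recovers [1, -1, 1, -1, 1, -1]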

Anyways, this comment is getting too long, but the point I'm trying to build to is that we have theories for what phenomenal consciousness is mechanically as well, not just access consciousness, and it's obvious why current LLMs don't have it: there's no place for it yet. When it happens, I'm sure there will still be a bunch of afraid bigots who don't want to admit that humanity isn't special enough to be lifted out of the universe it is wholly contained within, and they will cause genuine harm. But that does seem to be the one way humans really are special: we think we're more important than we are as individuals, and we make that everybody else's problem, especially in societies and circles like these.

txrx0000 · 24 days ago
There's some chance LLMs contain representations of whatever's in the brain that's responsible for consciousness. The text they're trained on was written by humans, and all humans have one thing in common if nothing else. A good text compressor will notice and make use of that.

That said, digital programs may have fundamental limitations that prevent them from faithfully representing all aspects of reality. Maybe consciousness is just not computable.

txrx0000 commented on The New AI Consciousness Paper   astralcodexten.com/p/the-... · Posted by u/rbanffy
andai · 24 days ago
My summary of this thread so far:

- We can't even prove/disprove humans are conscious

- Yes but we assume they are because very bad things happen when we don't

- Okay but we can extend that to other beings. See: factory farming (~80B caged animals per year).

- The best we can hope for is reasoning by analogy. "If human (mind) shaped, why not conscious?"

This paper is basically taking that to its logical conclusion. We assume humans are conscious, then we study their shape (neural structures), then we say "this is the shape that makes consciousness." Never mind that octopi evolved eyes independently, let alone intelligence. We'd have to study their structures too, right?

My question here is... why do people do bad things to the Sims? If people accepted solipsism ("only I am conscious"), would they start treating other people as badly as they do in The Sims? Is that what we're already doing with AIs?

txrx0000 · 24 days ago
Conscious or not, there's a much more pressing problem of capability. It's not like human society operates on the principle that conscious beings are valuable, despite that being a commonly advertised virtue. We still kill animals en masse because they can't retaliate. But AGIs with comparable if not greater intelligence will soon walk among us, so we should be ready to welcome them.
txrx0000 commented on Germany: States Pass Porn Filters for Operating Systems   heise.de/en/news/Youth-Pr... · Posted by u/trallnag
earthnail · 25 days ago
We have mandates for all kinds of things, like movie ratings etc. I think it’s appropriate here. It just makes it easy.

I don’t understand the pushback from tech companies either; all OSes already have a kiosk mode (incl the major Linux DEs). Should be very low effort to implement.

txrx0000 · 25 days ago
The concern is that OSes which don't implement the feature will be outlawed.

Movie ratings don't outlaw movies, and they actually provide a good framework: instead of mandating that OSes implement this, publish a client-side filter spec that OS devs can choose to implement. If they implement it, their OS gets a label like "PG-capable". Then make it illegal for minors to possess a non-PG-capable device.
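
To illustrate the idea (purely hypothetical; no such spec exists), the label could be as small as a machine-readable manifest plus a check the OS enforces on accounts belonging to minors:

    # hypothetical sketch of an OS-side age filter; nothing here is a real spec
    PG_CAPABLE = {"label": "PG-capable", "blocked_categories": {"porn", "gore"}}

    def allowed(account_age: int, content_categories: set) -> bool:
        # adults are unaffected; minor accounts get the category filter
        if account_age >= 18:
            return True
        return not (content_categories & PG_CAPABLE["blocked_categories"])

    print(allowed(15, {"news"}))  # True
    print(allowed(15, {"porn"}))  # False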

u/txrx0000

Karma: 405 · Cake day: August 26, 2025
About: hello