Readit News
hnfong commented on Using secondary school maths to demystify AI   raspberrypi.org/blog/seco... · Posted by u/zdw
hackinthebochs · a day ago
That's a fair reading but not what I was going for. I'm trying to argue for the irrelevance of causal scope when it comes to determining realness for consciousness. We are right to privilege non-virtual existence when it comes to things whose essential nature is to interact with our physical selves. But since no other consciousness directly physically interacts with ours, it being "real" (as in physically grounded in a compatible causal scope) is not an essential part of its existence.

Determining what is real by judging causal scope is generally successful but it misleads in the case of consciousness.

hnfong · 13 hours ago
I don't think causal scope is what makes a virtual candle virtual.

If I make a button that lights the candle, and another button that puts it out, and I press those buttons, then the virtual candle is causally connected to our physical world.

But obviously the candle is still considered virtual.

Maybe a candle is not as illustrative, but let's say we're talking about a very realistic and immersive MMORPG. We directly do stuff in the game, and with the right VR hardware it might even feel real, but we call it a virtual reality anyway. Why? And if there's an AI NPC, we say that the NPC's body is virtual -- but when we talk about the AI's intelligence (which at this point is the only AI we know about -- simulated intelligence in computers) why do we not automatically think of this intelligence as virtual in the same way as a virtual candle or a virtual NPC's body?

hnfong commented on Using secondary school maths to demystify AI   raspberrypi.org/blog/seco... · Posted by u/zdw
hackinthebochs · a day ago
>If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?

I can smell a "real" candle, a "real" candle can burn my hand. The term real here is just picking out a conceptual schema where its objects can feature as relata of the same laws, like a causal compatibility class defined by a shared causal scope. But this isn't unique to the question of real vs simulated. There are causal scopes all over the place. Subatomic particles are a scope. I, as a particular collection of atoms, am not causally compatible with individual electrons and neutrons. Different conceptual levels have their own causal scopes and their own laws (derivative of more fundamental laws) that determine how these aggregates behave. Real (as distinct from simulated) just identifies causal scopes that are derivative of our privileged scope.

Consciousness is not like the candle because everyone's consciousness is its own unique causal scope. There are psychological laws that determine how we process and respond to information. But each of our minds are causally isolated from one another. We can only know of each other's consciousness by judging behavior. There's nothing privileged about a biological substrate when it comes to determining "real" consciousness.

hnfong · a day ago
Right, but doesn't your argument imply that the only "real" consciousness is mine?

I'm not against this conclusion ( https://en.wikipedia.org/wiki/Philosophical_zombie ) but it doesn't seem to be compatible with what most people believe in general.

hnfong commented on GNU Unifont   unifoundry.com/unifont/in... · Posted by u/remywang
nycticorax · a day ago
Shouldn't the first sentence on that website describe what GNU Unifont actually is? I guess it's a single copyleft font designed to have coverage of all (or nearly all?) unicode code points?
hnfong · a day ago
Note that "nearly all" isn't "all". I have a side project that requires rendering very uncommon CJK characters, and Unifont does not display them as expected. (For that project, I used https://kamichikoichi.github.io/jigmo/ , which was the most complete font in terms of CJK glyphs.)

Unifont seems to have about the same glyph coverage as my system default CJK font (unfortunately I don't know what it is).

hnfong commented on Using secondary school maths to demystify AI   raspberrypi.org/blog/seco... · Posted by u/zdw
gnull · 2 days ago
What makes the simulation we live in special compared to the simulation of a burning candle that you or I might be running?

That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.

hnfong · 2 days ago
They do have a valid, subtle point though.

If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?

Being a functionalist ( https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_m... ) myself, I don't know the answer off the top of my head.

hnfong commented on Using secondary school maths to demystify AI   raspberrypi.org/blog/seco... · Posted by u/zdw
palmotea · 2 days ago
> You can simulate a human brain on pen and paper too.

That's an assumption, though. A plausible assumption, but still an assumption.

We know you can execute an LLM on pen and paper, because people built them and they're understood well enough that we could list the calculations you'd need to do. We don't know enough about the human brain to create a similar list, so I don't think you can reasonably make a stronger statement than "you could probably simulate..." without getting ahead of yourself.

hnfong · 2 days ago
This is basically the Church-Turing thesis, and one of the motivations for using tape (paper) and an arbitrary alphabet in the Turing machine model.

It's been kinda discussed to oblivion in the last century; it's interesting that people don't seem to realize the "existing literature" exists and repeat the same arguments (not saying anyone is wrong).
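As a rough sketch of what that "list of calculations" looks like: the core of a transformer layer is just matrix multiplies, additions, and a softmax, each of which could in principle be done (very slowly) with pen and paper. Here is a toy single-head attention step in NumPy; the sizes and random weights are made up for illustration and don't correspond to any real model.

```python
import numpy as np

np.random.seed(0)
d = 4                        # toy embedding size (real models use thousands)
x = np.random.randn(3, d)    # 3 token embeddings

# Random "weights"; a trained LLM would have fixed, learned numbers here.
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

q, k, v = x @ Wq, x @ Wk, x @ Wv       # linear projections
scores = q @ k.T / np.sqrt(d)          # pairwise attention scores
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: rows sum to 1
out = weights @ v                      # each output is a weighted mix of values
```

A real LLM repeats operations like these billions of times per token, which is why "executable on paper" and "practical on paper" are very different claims.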

hnfong commented on Using secondary school maths to demystify AI   raspberrypi.org/blog/seco... · Posted by u/zdw
snickerbockers · 2 days ago
>Equivalent statements could be made about how human brains are not magic, just biology - yet I think we still think.

They're not equivalent at all, because the AI is by no means biological. "It's just maths" could maybe be applied to humans, but this is backed entirely by supposition and would ultimately just be an assumption of its own conclusion: that human brains work on the same underlying principles as AI because it is assumed that they do.

hnfong · 2 days ago
Well, a better retort would be "Human brains are not magic, just physics. Protons, neutrons and electrons don't think".

But I think most people get what GP means.

hnfong commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
goobatrooba · 2 days ago
I feel there is a point where all these benchmarks become meaningless. What I care about, beyond decent performance, is the user experience. There I have gripes with every single platform, and the one thing keeping me as a paid ChatGPT subscriber is the ability to sort chats into "projects" with associated files (hello Google, please wake up to basic user-friendly organisation!)

But all of them:
* Lie far too often with confidence
* Refuse to stick to prompts (e.g. ChatGPT to the request to number each reply for easy cross-referencing; Gemini to a basic request to respond in a specific language)
* Refuse to express uncertainty or nuance (I asked ChatGPT to give me certainty %s, which it did for a while but then just forgot...?)
* Refuse to give me short answers without fluff or follow-up questions
* Refuse to stop complimenting my questions or disagreements with wrong/incomplete answers
* Don't quote sources consistently so I can check facts, even when I ask for it
* Refuse to make clear whether they rely on original documents or an internal summary of the document, until I point out errors
* ...

I also have substance gripes, but for me such basic usability points are really something all of the chatbots fail on abysmally. Stick to instructions! Stop creating walls of text for simple queries! Tell me when something is uncertain! Tell me if there's no data or info rather than making something up!

hnfong · 2 days ago
There's a leaderboard that measures user experience, the "lmsys" Chatbot Arena Leaderboard ( https://huggingface.co/spaces/lmarena-ai/lmarena-leaderboard ). The main issue with it these days is that it measures sycophancy and user-preferred tone more than substance.

Some issues you mentioned, like length of response, might be user preference. Others, like "hallucination", are areas of active research (and there are benchmarks for these).

hnfong commented on Horses: AI progress is steady. Human equivalence is sudden   andyljones.com/posts/hors... · Posted by u/pbui
tim333 · 5 days ago
We are in control (for now). The horses were not. The whole alignment debate is basically about keeping us in control.
hnfong · 4 days ago
Member of ruling class spotted!!
hnfong commented on Over fifty new hallucinations in ICLR 2026 submissions   gptzero.me/news/iclr-2026... · Posted by u/puttycat
vkou · 7 days ago
> IMHO what should change is we stop putting "peer reviewed" articles on a pedestal.

Correct. Peer review is a minimal and necessary but not sufficient step.

hnfong · 5 days ago
I agree in principle, and I think this is mostly what's happening. But IMHO the public perception that a peer-reviewed paper is somehow "more trustworthy" is also kind of... bad.

I mean, being peer reviewed is a signal of a paper's quality, but in the hands of an expert in that domain it's not a very valuable signal, because they can just read the paper themselves and figure out whether it's legit. So instead of having "experts" explain a paper while commenting on whether it's peer reviewed, I think the better practice is to have said expert say "I read the paper and it's legit", or "I read the paper and it's nonsense".

IMHO the reason they make note of whether it's peer reviewed is that they don't know enough to make the judgement themselves. And the fallback is to trust that a couple of anonymous reviewers attested to the quality of the paper! If you think of it that way, using this signal to vet the quality of a publication for the lay public isn't really a good idea.

hnfong commented on Over fifty new hallucinations in ICLR 2026 submissions   gptzero.me/news/iclr-2026... · Posted by u/puttycat
jqpabc123 · 6 days ago
Computer code is highly deterministic. This allows it to be tested fairly easily. Unfortunately, code production is not the only use-case for AI.

Most things in life are not as well defined --- they're a matter of judgment.

AI is being applied in lots of real world cases where judgment is required to interpret results. For example, "Does this patient have cancer". And it is fairly easy to show that AI's judgment can be highly suspect. There are often legal implications for poor judgment --- i.e. medical malpractice.

Maybe you can argue that this is a mis-application of AI --- and I don't necessarily disagree --- but the point is, once the legal system makes this abundantly clear, the practical business case for AI is going to be severely reduced if humans still have to vet the results in every case.

hnfong · 5 days ago
Why do you think AI is inherently worse than humans at judging whether a patient has cancer, assuming it is given the same information as the human doctor? Is there some fundamental assumption that makes AI worse, or are you simply projecting your personal belief (trust) in human doctors? (Note that, given the speed of AI progress, and that we're talking about what the law ought to be rather than what it was in the past, the past performance of AI on cancer cases does not have much relevance unless a fundamental issue with AI is identified.)

Note that whether a person has cancer is generally well-defined, even if it isn't obvious at first. If you just let the patient go untreated, you'll know the answer quite definitively within a couple of years.

u/hnfong

Karma: 2555
Cake day: March 17, 2021
About
[ my public key: https://keybase.io/hnfong; my proof: https://keybase.io/hnfong/sigs/4Xa6Z6G6bT9_v0zd7MJTrV3MduH4olW1ipP9YH5VRTA ]