Readit News
MyOutfitIsVague commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
paulddraper · 2 days ago
You have a subtle sleight of hand.

You use the word “plausible” instead of “correct.”

MyOutfitIsVague · 2 days ago
Do you have a better word that describes "things that look correct without definitely being so"? I think "plausible" is the perfect word for that. It's not a sleight of hand to use a word that is exactly defined as the intention.
MyOutfitIsVague commented on SQLite JSON at full index speed using generated columns   dbpro.app/blog/sqlite-jso... · Posted by u/upmostly
AlexErrant · 2 days ago
I was looking for a way to index a JSON column that contains a JSON array, like a list of tags. AFAIK this method won't work for that; you'll either need to use FTS or a separate "tag" table that you index.
MyOutfitIsVague · 2 days ago
Yeah, SQLite doesn't have any true array datatype. I think you could probably do it with a virtual table, but that would mean adding a native extension, and it would have to maintain its own index.
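A minimal sketch of the separate "tag" table approach mentioned above, using Python's bundled sqlite3. The table and column names are made up for the example; `json_each()` (from SQLite's built-in JSON support) expands the JSON array so each tag becomes its own indexable row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT);      -- body holds JSON
CREATE TABLE doc_tags (doc_id INTEGER REFERENCES docs(id), tag TEXT);
CREATE INDEX idx_doc_tags ON doc_tags(tag);                 -- the index FTS would otherwise provide
""")
conn.execute("""INSERT INTO docs VALUES (1, '{"tags": ["sql", "json"]}')""")

# Expand the JSON array into one row per tag, then index lookups are ordinary.
conn.execute("""
    INSERT INTO doc_tags
    SELECT docs.id, value FROM docs, json_each(docs.body, '$.tags')
""")
rows = conn.execute("SELECT doc_id FROM doc_tags WHERE tag = 'sql'").fetchall()
print(rows)  # doc ids matching the tag, found via idx_doc_tags
```

The trade-off is that the tag table must be kept in sync with the JSON (e.g. via triggers), which is exactly the bookkeeping a native array index would have done for you.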
MyOutfitIsVague commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
coppsilgold · 6 days ago
If you come up with a genetic algorithm scaffolding to affect both the architecture and the training algorithm, and then you instantiate it in an artificial selection environment, and you also give it trillions of generations to evolve evolvability just right (as life had for billions of years), then the answer is yes: I'm certain it will, and probably much sooner than we did.

Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia). Finding this set of weights is the problem.

MyOutfitIsVague · 6 days ago
I'm certain it wouldn't, and you're certain it would, and we have the same amount of evidence (and probably roughly the same means for running such an expensive experiment). I think they're more likely to go slowly mad, degrading their reasoning to nothing useful rather than building something real, but that could be different if they weren't detached from sensory input. Human minds looping for generations without senses, a world, or bodies might also go the same way.

> Also, I think there is a very high chance that given an existing LLM architecture there exists a set of weights that would manifest a true intelligence immediately upon instantiation (with anterograde amnesia).

I don't see why that would be the case at all, and I regularly use the latest and most expensive LLMs and am aware enough of how they work to implement them on the simplest level myself, so it's not just me being uninformed or ignorant.
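For what it's worth, "implementing them on the simplest level" can be made concrete with a toy next-token predictor. This is purely illustrative (a bigram frequency model, nothing like a real LLM; all names here are invented for the example):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it in the corpus."""
    follow = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follow[a][b] += 1
    return follow

def predict_next(model, token):
    """Return the most frequently observed successor of `token`, if any."""
    counts = model.get(token)
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat": it follows "the" most often
```

An actual LLM replaces the frequency table with a learned neural network over long contexts, but the interface is the same: tokens in, a distribution over the next token out.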

MyOutfitIsVague commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
djoldman · 6 days ago
As a consequence of my profession, I understand how LLMs work under the hood.

I also know that we data and tech folks will probably never win the battle over anthropomorphization.

The average user of AI, never mind folks who should know better, is so easily convinced that AI "knows," "thinks," "lies," "wants," "understands," etc. Add to this that all AI hosts push this perspective (and why not, it's the easiest white lie to get the user to act in a way that delivers a lot of value), and there's really too much to fight against.

We're just gonna keep on running into this, and it'll be like when you take chemistry and physics and the teachers say, "it's not actually like this, but we'll get to how it really works some years down the line; just pretend this is true for the time being."

MyOutfitIsVague · 6 days ago
These discussions often end up resembling religious arguments. "We don't know how any of this works, but we can fathom an intelligent god doing it, therefore an intelligent god did it."

"We don't really know how human consciousness works, but the LLM resembles things we associate with thought, therefore it is thought."

I think most people would agree that the functioning of an LLM resembles human thought, but I think most people, even the ones who think that LLMs can think, would agree that LLMs don't think in the exact same way that a human brain does. At best, you can argue that whatever they are doing could be classified as "thought" because we barely have a good definition for the word in the first place.

MyOutfitIsVague commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
coppsilgold · 6 days ago
Is a brain not a token prediction machine?

Tokens in form of neural impulses go in, tokens in the form of neural impulses go out.

We would like to believe that there is something profound happening inside, and we call that consciousness. Unfortunately, when reading about split-brain patient experiments or agenesis of the corpus callosum cases, I feel like we are all deceived, every moment of every day. I came to the realization that the confabulation observed there is just a more pronounced form of what happens normally.

MyOutfitIsVague · 6 days ago
Could an LLM trained on nothing and looped upon itself eventually develop language, more complex concepts, and everything else, based on nothing? If you loop LLMs on each other, training them so they "learn" over time, will they eventually form and develop new concepts, cultures, and languages organically over time? I don't have an answer to that question, but I strongly doubt it.

There's clearly more going on in the human mind than just token prediction.

MyOutfitIsVague commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
bloaf · 6 days ago
Everyone is out here acting like "predicting the next thing" is somehow fundamentally irrelevant to "human thinking" and it is simply not the case.

What does it mean to say that we humans act with intent? It means that we have some expectation or prediction about how our actions will affect the next thing, and choose our actions based on how much we like that effect. The ability to predict is fundamental to our ability to act intentionally.

So in my mind: even if you grant all the AI-naysayer's complaints about how LLMs aren't "actually" thinking, you can still believe that they will end up being a component in a system which actually "does" think.

MyOutfitIsVague · 6 days ago
> Everyone is out here acting like "predicting the next thing" is somehow fundamentally irrelevant to "human thinking" and it is simply not the case.

Nobody is. What people are doing is claiming that "predicting the next thing" does not define the entirety of human thinking, and something that is ONLY predicting the next thing is not, fundamentally, thinking.

MyOutfitIsVague commented on What will enter the public domain in 2026?   publicdomainreview.org/fe... · Posted by u/herbertl
Night_Thastus · 12 days ago
Something about this page doesn't seem to work for me. Clicking the tiles doesn't do anything. It's not ad-blocker-related, I disabled those to test.
MyOutfitIsVague · 12 days ago
> In our advent-style calendar below, find our top pick of what lies in store for 2026. Each day, as we move through December, we’ll open a new window to reveal our highlights! By public domain day on January 1st they will all be unveiled — look out for a special blogpost from us on that day. (And, of course, if you want to dive straight in and explore the vast swathe of new entrants for yourself, just visit the links above).
MyOutfitIsVague commented on Advent of Code 2025   adventofcode.com/2025/abo... · Posted by u/vismit2000
gray_-_wolf · 14 days ago
I am very happy that we get the advent of code again this year, however I have read the FAQ for the first time, and I must admit I am not sure I understand the reasoning behind this:

> If you're posting a code repository somewhere, please don't include parts of Advent of Code like the puzzle text or your inputs.

The text I get, but the inputs? Well, I will comply, since I am getting a very nice thing for (almost) free, so it is polite to respect the wishes here, but since I commit the inputs (you know, since I want to be able to run tests) into the repository, it is a bit of a shame the repo must be private.

MyOutfitIsVague · 14 days ago
I make my code public, and keep my inputs in a private submodule.
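A setup like that might look like this in the public repo's `.gitmodules`; the submodule name, path, and URL here are hypothetical:

```ini
; Public repo references a private repo that holds the puzzle inputs.
; Anyone cloning the public repo simply gets an empty `inputs/` directory.
[submodule "inputs"]
	path = inputs
	url = git@github.com:yourname/aoc-inputs-private.git
```

Since only you can fetch the private URL, the inputs stay out of public view while your tests still find them at a fixed path.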
MyOutfitIsVague commented on KDE Plasma 6.8 Will Go Wayland-Exclusive in Dropping X11 Session Support   phoronix.com/news/KDE-Pla... · Posted by u/mikece
serf · 18 days ago
>"Linux" does not force anything on you right?

>It's the community that has by and large decided to move to maintaining other solutions. If you still want to use fvwm you can still run it on arch with x11 until x11 is not maintained and the kernel breaks it somehow

Well, you just framed it perfectly; it's still forced on the end-user regardless of whether you want to call it 'Linux' or 'the community that controls and steers Linux'.

MyOutfitIsVague · 18 days ago
It's not forced if you were getting it all for free anyway and can walk away at any time. "They've stopped giving away the old thing for free and are now only doing the new thing" doesn't put you in the position of a captive who has no freedom. You can complain, you can develop your own solutions, you can leave. But I find it over the line how many people in the X11/Wayland conversation take a position that amounts to looking at people who are working for free and demanding that they do a specific kind of free work, with no compensation or help offered. It's all people working in their free time, or companies sponsoring the developments they need. It's hard to make demands as an end user who isn't paying or even helping.
MyOutfitIsVague commented on KDE Plasma 6.8 Will Go Wayland-Exclusive in Dropping X11 Session Support   phoronix.com/news/KDE-Pla... · Posted by u/mikece
mx7zysuj4xew · 18 days ago
Why do you think it's acceptable to insult someone when they have a legitimate concern regarding a software defect?
MyOutfitIsVague · 18 days ago
For the record, it's a Malcolm in the Middle reference: https://youtube.com/watch?v=CzBi5tIfzK4

u/MyOutfitIsVague · Karma: 894 · Cake day: February 10, 2025