Readit News
staticman2 commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
Workaccount2 · 2 days ago
Anytime you use LLMs you should be keenly aware of their knowledge cutoff. Like any other tool, the more you understand it, the better it works.
staticman2 · 2 days ago
I'm sorry, but I don't see what "knowledge cutoff" has to do with what we were talking about, which is using an LLM to find PDFs and other sources for research.
staticman2 commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
dmd · 3 days ago
I’m not uploading PDFs at all. I’m talking about PDFs it finds while searching, then extracts data from for the conversation.
staticman2 · 3 days ago
I'm surprised to hear anyone finds these models trustworthy for research.

Just today I asked Claude what the year-over-year inflation rate was, and it gave me 2023 to 2024.

I also thought some sites ban A.I. crawling, so if they have the best source on a topic, you won't get it.

staticman2 commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
dmd · 3 days ago
I consistently have exactly the opposite experience. ChatGPT seems extremely willing to do a huge number of searches, think about them, kick off more searches after that thinking, think about those, etc., whereas Gemini seems extremely reluctant to do more than a couple of searches. ChatGPT is also willing to open up PDFs, screenshot them, OCR them, and use that as input, whereas Gemini just ignores them.
staticman2 · 3 days ago
Are you uploading PDFs that already have a text layer?

I don't currently subscribe to Gemini, but on A.I. Studio's free offering, when I upload a non-OCR PDF of around 20 pages, the software environment's OCR feeds it to the model with greater accuracy than I've seen from any other source.

staticman2 commented on New benchmark shows top LLMs struggle in real mental health care   swordhealth.com/newsroom/... · Posted by u/RicardoRei
threetonesun · 4 days ago
I was watching "A Charlie Brown Christmas" the other day, and Lucy (who has a running gag in Peanuts of being a terrible, or at least questionable, psychologist) tells Charlie Brown that to get over his seasonal depression he should get involved in a Christmas project, and suggests he be the director of their play.

Which is to say, your stance might not be as controversial as you think, since it was the adult take in a children's cartoon almost 60 years ago.

staticman2 · 4 days ago
Your Peanuts reference made me smile, but I don't see why you thought a little girl's comment in a 1960s Christmas special was supposed to represent the "adult take" on mental health in the 1960s.

Lucy isn't actually a psychologist, which is part of the reason the "gag" is funny.

staticman2 commented on “The Matilda Effect”: Pioneering Women Scientists Written Out of Science History   openculture.com/2025/12/m... · Posted by u/binning
gldrk · 4 days ago
>How did people even come to this bizarre conclusion?

The first reason is that it is true. All of the best evidence suggests a minor male advantage on g and a major advantage in more specific abilities, such as mental rotation. See https://emilkirkegaard.dk/en/2021/04/the-claim-of-substantia...

It is easy to see why that would be the case from an evolutionary point of view. Ironically, your own post contains a clue: in a male-dominated society where men are far more valued for their intelligence than women, such differences are bound to arise.

The egalitarian bad faith interpretation of this claim is that any man is smarter than Marie Curie. What it actually says is that a hypothetical Mario Curie would almost certainly outshine his real-life counterpart.

The other reason is related to sexual selection. Even if a certain man is less intelligent or physically weaker than most women, it may be adaptive for him to pretend otherwise. What beliefs come to dominate in a given population is determined by reproductive success, not directly by their truth value.

staticman2 · 4 days ago
For context Wikipedia says the guy you linked to is a far right white supremacist who founded a pseudoscience journal.
staticman2 commented on Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?    · Posted by u/embedding-shape
Arainach · 5 days ago
I keep this link handy to send to such coworkers/people:

https://distantprovince.by/posts/its-rude-to-show-ai-output-...

staticman2 · 4 days ago
Starting a blog post with a lengthy science fiction novel excerpt without setup is annoying.
staticman2 commented on Show HN: Gemini Pro 3 imagines the HN front page 10 years from now   dosaygo-studio.github.io/... · Posted by u/keepamovin
colinplamondon · 5 days ago
Sure - and the people responsible for a new freaking era of computing are the ones who asked "given how incredible it is that this works at all at 0.5b params, let's scale it up".

It's not hyperbole - that it's an accurate description at a small scale was the core insight that enabled the large scale.

staticman2 · 5 days ago
Well it's obviously hyperbole because "all human thought" is not in a model's training data nor available in a model's output.

If your gushing fits a 0.5b model, it probably doesn't tell us much about A.I. capabilities.

staticman2 commented on Show HN: Gemini Pro 3 imagines the HN front page 10 years from now   dosaygo-studio.github.io/... · Posted by u/keepamovin
colinplamondon · 5 days ago
- A queryable semantic network of all human thought, navigable in pure language, capable of inhabiting any persona constructible from in-distribution concepts, generating high quality output across a breadth of domains.

- An ability to curve back into the past and analyze historical events from any perspective, and summon the sources that would be used to back that point of view up.

- A simulator for others, providing a rubber duck that inhabits another person's point of view, allowing one to patiently poke at where you might be in the wrong.

- Deep research to aggregate thousands of websites into a highly structured output, with runtime filtering, providing a personalized search engine for any topic, at any time, with 30 seconds of speech.

- Amplification of intent, making it possible to send your thoughts and goals "forward" along many different vectors, seeing which bear fruit.

- Exploration of 4-5 variant designs for any concept, allowing rapid exploration of any design space, with style transfer for high-trust examples.

- Enablement of product craft in design, animation, and micro-interactions that was eliminated as "unprofitable" when tech boomed in the 2010s.

It's a possibility space of pure potential, the scale of which is limited only by one's own wonder, industriousness, and curiosity.

People can use it badly - and engagement-aligned models like 4o are cognitive heroin - but the invention of LLMs is an absolute wonder.

staticman2 · 5 days ago
>A queryable semantic network of all human thought

This hyperbole would describe any LLM of any size and quality, including a 0.5b model.

staticman2 commented on Paramount launches hostile bid for Warner Bros   cnbc.com/2025/12/08/param... · Posted by u/gniting
bananaflag · 5 days ago
Episode 8 was a retread of Empire Strikes Back (ships chase through empty space while the main character trains with the old master on a wild planet). It seemed subversive just because ESB was subversive relative to ANH.
staticman2 · 5 days ago
Episode 8 was subversive because it had self-aware moments "trolling" the audience throughout, like Luke mocking the idea, held by Rey (and the audience), that he would pick up a lightsaber again.

It also has weird "subversive" dialogue about sacrifice being bad that doesn't really fit what's happening in the movie itself, where the sacrifice of two characters saves the day. Which is "subversive" in the sense that a movie with dialogue saying "this is a shitty movie plot" is subversive.

It also rips off the ending of Return of the Jedi by killing the main bad guy, so it's "subversive" in that it trolls whoever was stuck making Episode 9 without a functional villain.

staticman2 commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
panarky · 6 days ago
When someone says "AIs aren't really thinking" because AIs don't think like people do, what I hear is "Airplanes aren't really flying" because airplanes don't fly like birds do.
staticman2 · 5 days ago
Whenever someone paraphrases a folksy aphorism about airplanes and birds, or fish and submarines, I suppose I'm meant to rebut with folksy aphorisms like:

"A.I. and humans are as different as chalk and cheese."

As if aphorisms were a good way to think about this topic?

u/staticman2 · Karma: 1837 · Cake day: October 2, 2018