hocuspocus · 3 months ago
I checked a topic I care about, and that I have personally researched because the publicly available information is pretty bad.

The article is even worse than the one on Wikipedia. It follows the same structure but fails to tell a coherent story. It references random people on Reddit (!) who don't even support the point it's trying to make. Not that the information on Reddit is particularly good to begin with, even if it were properly interpreted. It cites Forbes articles parroting pretty insane and unsubstantiated claims; I thought mainstream media was not to be trusted?

In the end it's longer, written in a weird style, and doesn't really bring any value. Asking Grok about the same topic and instructing it to be succinct yields much better results.

frm88 · 3 months ago
I wrote about an entry on Sri Lanka a couple of days ago [0] where I checked Grok's source reference (factsanddetails.com) against scamdetector, which gave it a score of 38.4 on a 100-point trustworthiness scale. Today that score is 12.2. Every entry in Grokipedia that covers vaguely Asian topics has a reference to factsanddetails.com. You can check for yourself: just search for it on Grokipedia - it'll come up with 601 pages of results.

Today the page I linked in my HN post is completely gone.

But worse: yesterday tumblr user sophieinwonderland found that they were quoted as a source on Multiplicity [1]. Tumblr is definitely not a reliable source, and I don't mean to throw shade on sophieinwonderland, who might very well be an expert on that topic.

[0] https://news.ycombinator.com/item?id=45743033

[1] https://www.tumblr.com/sophieinwonderland/798920803075883008...

jaredklewis · 3 months ago
What’s the article?
jameslk · 3 months ago
It was just launched? I remember when Wikipedia was pretty useless early on. The concept of using an LLM to take a ton of information and distill it down into encyclopedia form seems promising with iteration and refinement. If they add an editor step to clean things up, that would likely help a lot (not sure if they already do this)
9dev · 3 months ago
Nothing about that seems promising! The one single thing you want from an Encyclopedia is compressing factual information into high-density overviews. You need to be able to trust the article to be faithful to its sources. Wikipedia mods are super anal about that, and for good reason! Why on earth would we want a technology that’s as good at summarisation as it is at hallucinations to write encyclopaedia entries?? You can never trust it to be faithful with the sources. On Wikipedia, at least there’s lots of people checking on each other. There are no such guardrails for an LLM. You would need to trust a single publisher with a technology that’s allowing them to crank out millions of entries and updates permanently, so fast that you could never detect subtle changes or errors or biases targeted in a specific way—and that doesn’t even account for most people, who never even bother to question an article, let alone check the sources.

If there ever was a tool suited just perfectly for mass manipulation, it’s an LLM-written collection of all human knowledge, controlled by a clever, cynical, and misanthropic asshole with a god complex.

f33d5173 · 3 months ago
It really isn't a promising idea at all. LLMs aren't "there" yet with respect to this sort of thing. Having an editor is totally infeasible; at that point you might as well have humans write the articles.
drysart · 3 months ago
There's a significant difference between a site being useless because it just doesn't have the breadth yet to cover the topic you're looking for (as in early Wikipedia); versus a site being useless by not actually having facts about the topic you're looking for, yet spouting out authoritative-looking nonsense anyway.
ef2k · 3 months ago
Maybe it's just me, but reading through LLM generated prose becomes a drag very quickly. The em dashes sprinkled everywhere, the "it's not this, it's that" style of writing. I even tried listening to it and it's still exhausting. Maybe it's the ubiquity of it nowadays that is making me jaded, but I tend to appreciate terrible writing, like I'm doing in this comment, more nowadays.
tim333 · 3 months ago
I find the Grokipedia writing especially a drag. I don't think it's em dashes and similar so much as the ideas not being clear. In good writing the writer normally has a clear idea in mind and is communicating it but the Grokipedia writing is kind of a waffley mess. I guess maybe because LLMs don't have much of an idea in mind so much as stringing words together.
madeofpalk · 3 months ago
It’s right there in the second paragraph of the article:

> My Grokipedia entry has over seven thousand words, compared to a mere 1,300 in my Wikipedia article

andrewflnr · 3 months ago
> I tend to appreciate terrible writing, like I'm doing in this comment, more nowadays.

Nah dude, what you're describing from LLMs is terrible writing. Just because it has good grammar and punctuation doesn't make it good, for exactly the reasons you listed. Good writing pulls you through.

ajross · 3 months ago
I completely agree. There's an "obsequious verbosity" to these things, like they're trying to convince you that they're not bullshitting. But that seems like a tuning issue (you can obviously get an LLM to emit prose in any style you want), and my guess is that this result has been extensively A/B tested to be more comforting or something.

One of the skills of working with the form, which I'm still developing, is the ability to frame follow-on questions in a specific enough way to prevent the BS engine from engaging. Sometimes I find myself asking it questions using jargon I 100% know is wrong just because the answer will tell me what the phrasing it wants to hear is.

jhanschoo · 3 months ago
I'm fine with Gemini's tone as I'm reading for information and argumentation, and Gemini's prose is quite clear. I prefer its style and tone over OpenAI's which seems more inclined to punchy soundbites. I don't use Claude enough for general purpose information to have an opinion on it.
rsynnott · 3 months ago
Yeah, I find it extremely grating. I’m kind of surprised that people are willing to put up with it.
generationP · 3 months ago
Wondering if the project will get better from the pushback or will just be folded like one of Elon's many ADHD experiments. In a sense, encyclopedias should be easy for LLMs: they are meant to survey and summarize well-documented material rather than contain novel insights; they are often imprecise and muddled already (look at https://en.wikipedia.org/wiki/Binary_tree and see how many conventions coexist without an explanation of their differences; it used to be worse a few years ago); the writing style is pretty much that of GPT-5. But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.
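To make "how many conventions coexist" concrete, here is a minimal sketch (mine, not from the linked article; the class names are illustrative) of two definitions that both go by "binary tree":

    from dataclasses import dataclass
    from typing import Optional, Union

    # Convention A ("positional"): every node has an optional left and an
    # optional right child, so a node with only a right child is legal.
    @dataclass
    class NodeA:
        value: int
        left: Optional["NodeA"] = None
        right: Optional["NodeA"] = None

    # Convention B ("full"/"proper"): a binary tree is either a leaf or an
    # internal node with exactly two subtrees; one-child nodes cannot exist.
    @dataclass
    class Leaf:
        value: int

    @dataclass
    class NodeB:
        left: Union["Leaf", "NodeB"]
        right: Union["Leaf", "NodeB"]

    # Legal under convention A, unrepresentable under convention B:
    lopsided = NodeA(1, right=NodeA(2))

An article that silently mixes the two will confuse any reader who tries to check a claim like "a binary tree with n leaves has n-1 internal nodes" (true under B, false under A).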

If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases rather than waste their time rewriting the articles from scratch. The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up, without needlessly duplicating the 90% of the work that it has been doing well.

beloch · 3 months ago
Bray brought up a really good point. The Grokipedia entry on him was several times the length of his Wikipedia entry, not just because Grok's writing style is verbose, but also because it went into exhaustive detail on insignificant parts of his life simply because the sources were online. My own brief browsing of Grokipedia has left me with the same impression. The current iteration of Grokipedia, besides being untrustworthy, wastes a lot of time beating around the bush and, frequently, wandering off into the weeds.

Just as LLMs lack the capacity for basic logic, they also lack the kind of judgment required to pare down a topic to what is of interest to humans. I don't know if this is an insurmountable shortcoming of LLMs, but it certainly seems to be a brick wall for the current bunch.

-------------

The technology to make Grokipedia work isn't there yet. However, my real concern is the problem Grokipedia is intended to solve: Musk wants his own version of Wikipedia, with a political slant of his liking, and without any pesky human authors. He also clearly wants Wikipedia taken down[1]. This is reality control for billionaires.

Perhaps LLM generated encyclopedias could be useful, but what Musk is trying to do makes it absolutely clear that we will need to continue carefully evaluating any sources we use for bias. If Musk wants to reframe the sum of human knowledge because he doesn't like being called out for his sieg heils, only a fool would place any trust in the result.

[1] https://www.lemonde.fr/en/pixels/article/2025/01/29/why-elon...

morkalork · 3 months ago
>reality control for billionaires

Not to beat a dead horse, but one really could wake up one day and find out we've always been at war with Oceania after the flip of a switch in an LLM encyclopedia.

relaxing · 3 months ago
An encyclopedia article is already an exercise in survey-and-summarize.

Asking an LLM to reprocess it again is only going to add error.

rsynnott · 3 months ago
> But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.

And if you believe that you’ll believe anything. “Try to _change_ the bias” would be closer.

__s · 3 months ago
> can probably be used to shame the WP into cleaning their shit up

what if your goal is for wikipedia to be biased in your favor?

9dev · 3 months ago
No no no, you see, you got it all wrong. If the Wikipedia article on, let’s say, transsexualism, says that’s an orientation, not a disease—then that’s leftist bias. Removing that bias means correcting it to say it’s a mental illness, obviously. That makes the article unbiased, pure truth.
spankibalt · 3 months ago
> "If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases [...] The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up [...]"

One thing I love about the Wikipedias (plural, as they're all different orgs): anyone "in the know" can very quickly tell who's got no practical knowledge of Wikipedia's structure, rules, customs, and practices to begin with. What you're proposing like it's some sort of Big Beautiful Idea has already been done countless times, is being done, and will be done for as long as Wikis exist.

And Groggypedia? It's nothing but a pathetic vanity project of an equally pathetic manbaby, for people who think LLM slop continuously fine-tuned to reflect the bias of their guru and the tool's owner is a Seal of Quality.

generationP · 3 months ago
Don't forget that public opinion and the media landscape are quite different in 2025 from what they were in the 2010s, when most prior studies on WP bias were written. Sufficiently pertinent (sadly this isn't synonymous with high-quality) conservative and anti-woke content can reach wide audiences, particularly when Elon puts his thumb on the scale. Besides, to my knowledge, none of the prior attempts at studying WP bias has even tried to make a big enough fuss to change said bias; the final outcomes of the studies were conference papers.
physarum_salad · 3 months ago
"Wikipedia, in my mind, has two main purposes: A quick visit to find out the basics about some city or person or plant or whatever, or a deep-dive to find out what we really know about genetic linkages to autism or Bach’s relationship with Frederick the Great or whatever."

Completely agree with the first purpose but would never use wikipedia for the second. It's only good at basics and cannot handle complex information well.

ajross · 3 months ago
I think that's actually wrong, or hangs on a semantic argument about "complexity". Wikipedia is an overview source. It's not going to give you "all" the information, but it's absolutely going to tell you what information there is. And in particular where there's significant argument or controversy, or multiple hypotheses, Wikipedia is going to be arguably the best source[1] for reflecting the state of discourse.

Like, if there's a subject about which you aren't personally an expert, and you have the choice between reading a single review paper you found on Google or the Wikipedia page, which are you going to choose?

[1] In fact, talk pages are often ground zero!

physarum_salad · 3 months ago
The best source is the one that provides the widest breadth of information on a topic.

This is a good use of wikipedia: "Like, if there's a subject about which you aren't personally an expert, and you have the choice between reading a single review paper you found on Google or the Wikipedia page, which are you going to choose?"

But that is like skim reading or basic introductions rather than in-depth understanding.

generationP · 3 months ago
Yeah, encyclopedias are meant to be indexes to knowledge, not repositories thereof. The WP feature-creeped its way to the latter, but it is not reliably good at it, and I'm not sure if there is an easy way to tell how good a given page is without knowing the subject in the first place.
skeeter2020 · 3 months ago
what I think it IS good at is parlaying the first purpose into a broad, meandering journey of the basics. I would never use it for deep study of genetics & autism or Bach and Frederick the Great, but I love following some shallow thread that travels across all of them.
dragonwriter · 3 months ago
It's often good for the latter when, as a tertiary source should be, it is used not just for its narrative content but for its references to secondary sources, which are themselves used for both their content and their references.
spankibalt · 3 months ago
> It's only good at basics and cannot handle complex information well.

Poppycock! Because of MediaWiki's multimedia capabilities it can handle complex information just fine, obviously much better than its printed predecessors. What you mean is a wiki's focus, which can take the form of a generalized or universal encyclopedia (e.g. Wikipedia), a specialized one, or a free-form one (Wikipedia, in practice, again). Wikipedias even negotiate the integration of different information streams, e.g. up-to-date news-like information, whether in the lemmata (often a huge problem, i.e. "newstickeritis"), in a dedicated news wiki (Wikinews), or in the English Wikipedia's newspaper, The Signpost.

And to take care of another utterly bizarre comment: encyclopedias are always, by definition, also repositories of knowledge.

physarum_salad · 3 months ago
Don't understand the implications of this:

"And to take care of another utterly bizarre comment: Encylopedias are always, per defintion, also repositories of knowledge."

We should just accept wholesale editing and knowledge production when we personally agree with it? Otherwise it's verboten? You are aware of "edit-a-thons"? Are these biased?

Why not just have an AI print all of the currently available information on a topic with minimised (not zero) biases?

If we lived in a utopia where wikipedia could randomly allocate tasks to a diverse group of expert-level civilians and then aggregate these takes/edits into a full description of a topic, I would agree with wikipedia maximalists. This does not happen, and a bunch of bad or naive actors have reduced the quality.

siliconc0w · 3 months ago
Not sure it still does this, but for a while, if you asked Grok a question about a sensitive topic and expanded the thinking, it said it was searching Elon's twitter history for its ground-truth perspective.

So instead of a Truth-maximizing AI, it's an Elon-maximizing AI.

sunaookami · 3 months ago
This was unintended, as observed by Simon here: https://simonwillison.net/2025/Jul/11/grok-musk/ and confirmed by xAI themselves here: https://x.com/xai/status/1945039609840185489

>Another was that if you ask it “What do you think?” the model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company.

The diff for the mitigation is here: https://github.com/xai-org/grok-prompts/commit/e517db8b4b253...

epistasis · 3 months ago
There's a chance it was unintended, but no proof of that.
siliconc0w · 3 months ago
The problem is it's part of a pattern of several 'bugs' and even 'unauthorized prompt changes' that have caused Grok to be more Elon-aligned.

And when asked by right-wing people about an embarrassing Grok response that refutes their view, Elon has agreed it's a problem and said he is "working on it".

josefritzishere · 3 months ago
I looked at Grokipedia today and spot-checked for references to my own publications, which exist in Wikipedia. As is often reported, it very directly plagiarizes Wikipedia. But it did remove dead links. This is pretty underwhelming, even on the Musk hype scale.
tptacek · 3 months ago
Why give it oxygen?
tshaddox · 3 months ago
Same reason you posted that comment: it's sometimes interesting to discuss a thing even if you dislike the thing.
tptacek · 3 months ago
I'm fine with the logic of discussing it here but can't fathom why Tim Bray thought this would be a useful post given his own objectives.
meowface · 3 months ago
To play devil's advocate: Grok has historically actually been one of the biggest debunkers of right-wing misinformation and conspiracy theories on Twitter, contrary to popular conception. Elon keeps trying to tweak its system prompt to make it less effective at that, but Grokipedia was worth an initial look from me out of curiosity. It took me 10 seconds to realize it was ideologically-motivated garbage and significantly more right-biased than Wikipedia is left-biased.

(Unfortunately, Reply-Grok may have been successfully partially lobotomized for the long term, now. At the time of writing, if you ask grok.com about the 2020 election it says Biden won and Trump's fraud claims are not substantiated and have no merit. If you @grok in a tweet it now says Trump's claims of fraud have significant merit, when previously it did not. Over the past few days I've seen it place way too much charity in right-wing framings in other instances, as well.)

tptacek · 3 months ago
Wikipedia is probably in the running for one of the greatest contributions to public knowledge of the past 100 years, and that's a consequence of how it functions, warts and all. I don't care how good Grok is or isn't. I'm a fan of frontier model LLMs. They don't meaningfully replace Wikipedia.
jayd16 · 3 months ago
It's not controlled by a trusted actor so it doesn't matter how it happens to act at the moment.

They could pull the rug at any future time, and it's almost better to gain trust now and cash in that trust later.

pstuart · 3 months ago
The problem with debunking right-wing misinformation is that it doesn't seem to matter. The consumers of that misinformation want it, and those of us who think it's bad for society already know that it's garbage.

It feels like we've reached Peak Stupidity but it's clear it can (and likely will) get much worse with AI videos.

LastTrain · 3 months ago
“ Grok has historically actually been one of the biggest debunkers of right-wing misinformation and conspiracy theories on Twitter”

Well, no, it hasn’t. It has debunked some things. It has made some incorrect shit up. But it isn’t historically one of the “biggest debunkers” of anything. Do we only speak hyperbole now?

bebb · 3 months ago
Because it's a genuinely good idea, and hopefully one for which the execution will be improved upon over time.

In theory, using LLMs to summarize knowledge could produce a less biased and more comprehensive output than human-written encyclopedias.

Whether Grokipedia will meet that challenge remains to be seen. But even if it doesn't, there's opportunity for other prospective encyclopedia generators to do so.

epistasis · 3 months ago
I don't see why an LLM would be better in theory. The Wikipedia process was created to manage bias. LLMs are created to repeat their input data, and will therefore be quite biased towards the training data.

Humans looking through sources, applying knowledge of print articles and real world experiences to sift through the data, that seems far more valuable.

quantified · 3 months ago
Summarizing all knowledge is very, very far from summarizing all that is written. If all it takes is including everything published, then the earth must be flat, disease is caused by bad morals, etc. etc.
mensetmanusman · 3 months ago
It's a great idea to share knowledge bases collected and curated by LLMs.

Amazing that Musk did it first. (Although it was suggested to him as part of an interview a month before release).

These systems are very good at finding obscure references that were overlooked by mere mortals.

simonw · 3 months ago
"It's great idea to share knowledge bases collected and curated by LLMs"

Is it though?

LLMs are great at answering questions based on information you make available to them, especially if you have the instincts and skill to spot when they are likely to make mistakes and to fact-check key details yourself.

That doesn't mean that using them to build the knowledge base itself is a good idea! We need reliable, verified knowledge bases that LLMs can make use of.
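To illustrate the division of labor I mean, here's a minimal sketch (the function names and the lookup() helper are hypothetical, not any real API):

    from typing import Callable, List

    def answer_from_model(llm: Callable[[str], str], question: str) -> str:
        # The LLM *is* the knowledge base: the answer comes from training
        # data alone and may be confidently wrong.
        return llm(question)

    def answer_over_knowledge_base(
        llm: Callable[[str], str],
        lookup: Callable[[str], List[str]],
        question: str,
    ) -> str:
        # The LLM is only the interface: facts come from a verified,
        # human-maintained source, and the model summarizes what it is handed.
        context = "\n\n".join(lookup(question))
        return llm(f"Answer using only these sources:\n{context}\n\nQ: {question}")

The objection is to inverting this: making the generated text itself the knowledge base.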

jayd16 · 3 months ago
> collected and curated by LLMs.

Wah? LLMs don't collect things.

I mean, if any of these AI companies want to open up all their training data as a searchable archive, I'd be all for it.