Readit News
cozzyd · 2 months ago
Amusing to see what Grokipedia thinks of various cities.

And no surprise, apartheid apologetics: https://grokipedia.com/page/Apartheid#debunking-prevailing-n...

Hilarious factual errors in https://grokipedia.com/page/Green_Line_(CTA)

palmotea · 2 months ago
> Hilarious factual errors in https://grokipedia.com/page/Green_Line_(CTA)

Impossible! That article was "Fact checked by Grok yesterday!"

I'm glad we've solved the LLM hallucination problem by fact-checking with LLMs. No way that could go wrong.

roryirvine · 2 months ago
I've spotted surprising amounts of confidently-stated nonsense even in fairly neutral articles where Elon / xAI is unlikely to have a particular political slant.

Many of the most glaring errors are linked to references which either directly contradict Grokipedia's assertion or don't mention the supposed fact one way or the other.

I guess this is down to LLM hallucinations? I've not used Grok before, but the problems I spotted in 15 mins of casual browsing made it feel like the output of state-of-the-art models from 2-3 years ago.

Has this been done on the cheap? I suspect that xAI should probably have prioritised quality over quantity for the initial launch.

sholladay · 2 months ago
Some time ago, there was a project called Citizendium that aimed for quality over quantity, with articles written and peer-reviewed by subject matter experts who had to use their real names and working email addresses, among other requirements. I always thought that was interesting, since the main critique of Wikipedia is its open editing model.

Citizendium is still around, though they've loosened some of the requirements in order to encourage more contributions, which seems self-defeating to me. I think they should have tried to cooperate with Wikipedia instead. The edits and opinions of subject matter experts could be a special layer on top of existing Wikipedia articles. Maybe there could be a link for various experts with highlights of sections they have peer-reviewed and a diff of what they would change about the article if those changes haven't been accepted. There could also be labels for how much expert consensus and trust there is on a given snapshot of an article or how frozen the article should be based on consensus and evidence provided by the experts. This would help users delineate whether an article contains a lot of common knowledge or whether it's more speculative or controversial.

rsynnott · 2 months ago
> I suspect that xAI should probably have prioritised quality over quantity for the initial launch.

I mean, I don't think this is _for_ people who care about quality, tbh. For those, there is wikipedia. This is more of a safe space for Musk.

TYPE_FASTER · 2 months ago
> I've spotted surprising amounts of confidently-stated nonsense

I find this to be the most annoying aspect of AI. The initial Google AI results were especially bad. It is getting better, but it still spouts info I know is false without any warning.

Like, I find blowhards tiring enough in RL. Don't really want to deal with artificial blowhards when I'm trying to solve a problem.

davydm · 2 months ago
I'm no apartheid apologist, but I have lived here (ZA) all my life.

Whilst I haven't read the entire article, the first paragraph is actually on-point: apartheid was shit in a lot of respects, but the schools, especially in rural areas, have dramatically declined since 1994, as have most government-run companies (with the exceptions like Eskom being bailed out every year).

You don't have to like the facts, but that's what they are.

a0123 · 2 months ago
I wonder if something called "context" and the socio-economic direction might have something to do with it.

"I think we gotta hand it to Apartheid because schools were very slightly less worse" isn't the argument you think it is. It does paint where you stand quite clearly.

Never start a sentence with "I'm no apartheid apologist, but". Nothing good can ever come out of it.

tim333 · 2 months ago
Yeah I used to date a coloured girl from Jo'berg who'd grown up in that era and she was positive that they had some degree of prosperity and modern comforts unlike the surrounding African countries. The overwhelming flow of people voting with their feet and walking across the borders was from the surrounding countries to SA rather than vice versa.
iloveyou1xrobot · 2 months ago
But this is literal apartheid apologia.

Saying "advancements in black literacy and real wages during the era" as if those things are due to apartheid is offensively absurd.

How about we advance literacy and wages without, you know, all the apartheid.

extraduder_ire · 2 months ago
Might be a good idea to copy some example snippets. The website doesn't have a revision history and could change after you post a link.
LudwigNagasena · 2 months ago
By "apartheid apologetics" do you mean that it is factually wrong or merely that you dislike the framing? I think there is a huge difference between those two accusations.
cozzyd · 2 months ago
Apologetics has a well-defined meaning. In this case, it's a bad faith deluge of out-of-context non-sequiturs posing as a coherent argument in order to defend something deplorable.
croes · 2 months ago
>A prevailing narrative depicts apartheid as a system of unremitting total oppression for black South Africans, yet empirical data indicate substantial advancements in black literacy and real wages during the era.

This is a false contradiction. You can oppress people and still pay them higher wages and fight illiteracy.

reaperducer · 2 months ago
Hilarious factual errors in https://grokipedia.com/page/Green_Line_(CTA)

Weird that it's displaying some other web site's embed/shortcodes:

> ![Cottage Grove-bound Green Line train approaching Roosevelt station][float-right] The Green Line utilizes primarily 5000-series railcars

cozzyd · 2 months ago
Strange indeed. That feels like it shouldn't happen at all.
qingcharles · 2 months ago
As a Green Line enjoyer, I'm not enough of an expert to spot the factual errors. The article does seem a bit much though. I've noticed a lot of the Grokipedia articles just go on. If I wanted to know that much about the Green Line I could probably just buy a book on it.
cozzyd · 2 months ago
Among the more obvious errors are short runs to UIC/Halsted and a northwestern trajectory along Lake St from the loop.

But yes, it also suffers from attention to irrelevant detail.

113 · 2 months ago
The problem I have with most AI output like this is that it's just a huge wall of text that doesn't really say anything.
TYPE_FASTER · 2 months ago
While ChatGPT tells me it's unable to access the linked page directly from Grokipedia (lolol), I was able to download the content, copy/paste it into ChatGPT, and ask it to fact check it. I think I will do this more often, with other sites (and other models) as well, going forward, as Chat is able to categorize statements as being correct vs. misleading vs. flat out wrong.

> The article does seem a bit much though. I've noticed a lot of the Grokipedia articles just go on.

And yes, that's what I'm noticing as well. There is a clear attempt to establish a narrative.


sergiotapia · 2 months ago
You can select text, and send factual errors to be fixed. If you found something wrong in that article you should submit some fixes.
measurablefunc · 2 months ago
Why would you do free work for a company which is planning to profit from your labor? Wikipedia/Wikimedia is a non-profit. All of their money pays for real expenses instead of whatever vanity project Musk has decided is necessary to sell xAI to the masses.
solid_fuel · 2 months ago
> If you found something wrong in that article you should submit some fixes.

Why? This site isn't run by people who are interested in factual accuracy.

If they think Wikipedia articles are inaccurate, they could always propose changes and have a proper discussion with the rest of the contributors. Grok was trained on Wikipedia so realistically this is just a jumbled regurgitation of Wikipedia articles blended with other sources from across the web without the usual source vetting process that Wikipedia uses.

This is a politically motivated side project being run by the world's richest man, and frankly I doubt many people are interested in helping him create his own padded version of reality.

rsynnott · 2 months ago
Why on earth would anyone do free work for Elon Musk, of all people?
cozzyd · 2 months ago
eh that requires making an account, which I'd prefer not to.
lewismenelaws · 2 months ago
Looking up the Republican Party "controversies" vs the Democratic Party "controversies" should let you know exactly what this project's intentions are.

That being said, my biggest issue with it is how Grok is writing everything. It's like it is trying REALLY hard to be neutral, but its conversational training slips up and starts "spicing" things up a little. For example, on Elon's article:

"...at age 12 in 1983, developing a space-themed video game called Blastar, which he sold to PC and Office Technology magazine for approximately $500. *This early entrepreneurial act foreshadowed Musk's later pursuits in technology and business*."

Sentences like that are designed to subtly bring emotion to certain topics.

rsynnott · 2 months ago
I feel like ‘Wikipedia for stupid people’ is already quite a crowded market, tbh.
superkuh · 2 months ago
I remember when I was a kid I childishly edited the Bible so that all occurrences of 'god' or 'lord' and the like were replaced with my nickname. I then uploaded these altered copies all over the 1990s internet. This seems like pretty much the same energy.
mudkipdev · 2 months ago
Silly glitch on the page: https://grokipedia.com/page/Sri_Lanka
tim333 · 2 months ago
Comparing the Elon Musk articles between Grokipedia and Wikipedia, the first factual difference is with Tesla:

>..Musk founded SpaceX in 2002 as CEO and chief engineer, Tesla in 2003 ... (grok)

>Musk joined the automaker Tesla as an early investor in 2004 and became its CEO ... (wikipedia)

I think Wikipedia is more accurate on that one.

Meekro · 2 months ago
You're right, that's definitely a mistake. Though to be fair, the same article gets it right if you scroll down to the Tesla section. The article on Tesla also gets it right.
dabinat · 2 months ago
I guess this poses an interesting question: if Wikipedia were being created today, would it be a human-edited encyclopedia, or would they just resort to AI because it's easier? It makes me wonder if people will shy away from hard problems and just take the easy path, resulting in a shallower and less useful product for society.
archagon · 2 months ago
Without Wikipedia's corpus, today's AI might not even be possible.
hagbard_c · 2 months ago
Oh yes it would be possible. It would probably be less biased as well. Don't forget that these models are trained on libraries of congress worth of books as well as things like Wikipedia. Given that Wikipedia - like any encyclopedia - does not (or should not, at least) contain original research but only refers to existing sources and given that the companies which train these models have their ways to access those sources - sometimes illegally but still - all Wikipedia adds to the mix is a biased interpretation of the original research.
Gigachad · 2 months ago
I have to wonder. Obviously the Grok version exists just to push Elon's politics and take control over the "truth", but it seems like an LLM could take over, considering Wikipedia is not meant to contain any original facts, just a collection of references to external information.
dev2roofer · 2 months ago
Kinda funny how Grokipedia looks like an encyclopedia but clearly talks like an LLM. Lots of confidence, not so much evidence.

It’s not that it’s trying to lie — it’s just how these models work. They’re great at making language sound right, not necessarily be right. Feels more like a mirror of what the internet “thinks” than an actual source of truth.

If they framed it that way — more experiment, less Wikipedia — I think people would take it a lot better.