gojomo · 3 months ago
Look, we just need to add some new 'planes' to Unicode - that mirror all communicatively-useful characters, but with extra state bits for...

guaranteed human output - anyone who emits text in these ranges that was AI generated, rather than artisanally human-composed, goes straight to jail.

for human eyes only - anyone who lets any AI train on, or even consider, any text in these ranges goes straight to jail. Fnord, "that doesn't look like anything to me".

admittedly AI generated - all AI output must use these ranges as disclosure, or - you guessed it - those pretending otherwise go straight to jail.

Of course, all the ranges generate visually-indistinguishable homoglyphs, so it's a strictly-software-mediated quasi-covert channel for fair disclosure.

When you cut & paste text from various sources, the provenance comes with it via the subtle character encoding differences.

I am only (1 - epsilon) joking.

io84 · 3 months ago
Just like with food: there will be a market value in content that is entirely “organic” (or in some languages “biological”). I.e. written, drawn, composed, edited, and curated by humans.

Just like with food: defining the boundaries of what’s allowed will be a nightmare, it will be impossible to prove content is organic, certifying it will be based entirely on networks of trust, it will be utterly contaminated by the thing it professes to be clean of, and it may even be demonstrably worse while still commanding a higher price point.

godelski · 3 months ago
The entire world operates on trust of some form. Often people are acting in good faith. But regulation matters too.

If you don't go after offenders, you create a lemon market. Most customers/people can't tell, so they operate on what they can. That doesn't mean they don't want the other things; it means they can't signal what they want. It comes down to available information: information asymmetry is what creates lemon markets.

It's also just a good thing to remember, since we're in tech and most people aren't tech literate. That makes it hard to determine what "our customers" actually want.

bitmasher9 · 3 months ago
I do wonder what would be an acceptable level of guarantee to trigger a “human written” bit.

I actually think a video of someone typing the content, along with the screen the content is appearing on, would be an acceptably high bar at this present moment. I don’t think it would be hard to fake, but I think it would very rarely be worth the cost of faking it.

I think this bar would be good for about 60 days, before someone trains a model that generates authentication videos for incredibly cheap and sells access to it.

short_sells_poo · 3 months ago
Fully in agreement with you. There'll be ultimately two groups of consumers of "organic" content:

1. Those who just want to tick a checkbox will buy mass produced "organic" content. AI slop that had some woefully underpaid intern in a sweatshop add a bit of human touch.

2. People who don't care about virtue signalling but genuinely want good quality will use their network of trust to find and stick to specific creators. E.g. I'd go to the local farmer I trust and buy seasonal produce from them. I can have a friendly chat with them while shopping, they give me honest opinions on what to buy (e.g. this year was great for strawberries!). The stuff they sell on the farm does not have to go through the arcane processes and certifications to be labelled organic, but I've known the farmer for years, I know that they make an effort to minimize pesticide use, they treat their animals with care and respect and the stuff they sell on the farm is as fresh as it can be, and they don't get all their profits scalped by middlemen and huge grocery chains.

dmsnell · 3 months ago
Unicode has a range of Tag Characters, created for marking regions of text as coming from another language. These were deprecated for this purpose in favor of higher level marking (such as HTML tags), but the characters still exist.

They are special because they are invisible and sequences of them behave as a single character for cursor movement.

They mirror ASCII so you can encode arbitrary JSON or other data inside them. Quite suitable for marking LLM-generated spans, as long as you don’t mind annoying people with hidden data or deprecated usage.

https://en.m.wikipedia.org/wiki/Tags_(Unicode_block)
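
For illustration, a rough Python sketch of that trick (the JSON payload here is made up, just to show the mechanics):

    # Hide an ASCII payload (e.g. a provenance note) inside Unicode Tag
    # characters. U+E0020..U+E007E mirror printable ASCII and render as
    # nothing in most environments.
    TAG_BASE = 0xE0000

    def encode_tags(payload: str) -> str:
        """Shift printable ASCII up into the (invisible) Tags block."""
        return "".join(chr(TAG_BASE + ord(c)) for c in payload
                       if 0x20 <= ord(c) <= 0x7E)

    def decode_tags(text: str) -> str:
        """Recover any tag-encoded payload embedded in a string."""
        return "".join(chr(ord(c) - TAG_BASE) for c in text
                       if 0xE0020 <= ord(c) <= 0xE007E)

    marked = "This sentence looks perfectly normal." + encode_tags('{"source":"llm"}')
    print(decode_tags(marked))  # -> {"source":"llm"}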

akoboldfrying · 3 months ago
Can't I get around this by starting my text selection one character after the start of some AI-generated text and ending it one character before the end, Ctrl-C, Ctrl-V?
thih9 · 3 months ago
> emits text in these ranges that was AI generated

How would you define AI generated? Consider a homework and the following scenarios:

1. Student writes everything themselves with pen & paper.

2. Student does some research with an online encyclopedia, proceeds to write with pen and paper. Unbeknownst to them, the online encyclopedia uses AI to answer their queries.

3. Student asks an AI to come up with the structure of the paper, its main points and the conclusion. Proceeds with pen and paper.

4. Student writes the paper themselves, runs the text through AI as a final step, to check for typos, grammar and some styling improvements.

5. Student asks the AI to write the paper for them.

The first one and the last one are obvious, but what about the others?

Edit, bonus:

6. Student writes multiple papers about different topics; later asks an AI to pick the best paper.

juancroldan · 3 months ago
7. Student spent their entire high school and bachelor's degree learning from content that teachers generated using AI, and used AI to do their homework, hence becoming AI-contaminated.
WithinReason · 3 months ago
This is about the characters themselves, therefore:

1. Not AI. 2. Not AI. 3. Not AI. 4. The characters directly generated by AI are AI characters. 5. AI. 6. Not AI.

Applejinx · 3 months ago
6 is extremely interesting, in that it's tantamount to asking a panel of innumerably many people to give an opinion on which paper is best for a general audience.

It's hard to imagine that NOT working unless it's implemented poorly.

crubier · 3 months ago
Twelve milliseconds after this law goes into effect, typing factories open in India, where human operators hand-recopy text from AI sources to perform "data laundering".
miki123211 · 3 months ago
If somebody writes in a foreign language and asks ChatGPT to translate to English, is that AI-generated content? What about if they write on paper and use an LLM to OCR it? What if they give the AI a very detailed outline, constantly ask for rewrites, and are ruthless about removing any facts they're not 100% sure of whenever they slip in? What if they only use AI to fix the grammar and rewrite bad English into a proper scientific tone?

My answer would be a clear "no" to all of these, even though the content ultimately ends up fully copy-pasted from an LLM in all those cases.

theamk · 3 months ago
My answer is a clear "yes" to most of those.

Yes, machine translations are AI-generated content. I read foreign-language news sites which sometimes have machine-translated articles, and the quality stands out, and not in a good way.

"Maybe" for "writing on paper and using an LLM for OCR". It's like an automatic meeting transcript: if the speaker has perfect pronunciation, it works well. If they don't, the meeting notes still look coherent but have little relationship to what the speaker said and/or will miss critical parts. Sadly there is no way for the reader to know that from reading the transcript, so I'd recommend labeling it "AI edited" just in case.

Yes, even if "they give the AI a very detailed outline, constantly ask for rewrites, etc.", it's still AI generated. I am not sure how you can argue otherwise - it's not their words. Also, it's really easy to convince yourself that you are "ruthless in removing any facts you're not 100% sure of" while actually being anything but.

"What if they only use AI to fix the grammar and rewrite bad English into a proper scientific tone?" - I'd label it "AI-edited" if the rewrites are minor or "AI-generated" if the rewrites are major. This one is especially insidious, as people may not expect rewrites to change meaning, so they won't inspect them too closely, making it easier for hallucinations to slip in.

a57721 · 3 months ago
It really depends on the context, e.g. if you need texts for a database of word frequencies, then the answer is a clear "yes", and LLMs have already ruined everything [1]. The only exception from your list would be OCR where a human proofreads the output.

[1] https://github.com/rspeer/wordfreq/blob/master/SUNSET.md

diffeomorphism · 3 months ago
For the translation part, let me just point out the offensively bad translations that Reddit (pages with an extra ?tl=foo) and YouTube automatic dubbing force upon users.

These are immediately, negatively obvious as AI content.

For the other questions, the consensus of many publications/journals has been to treat grammar/spellcheck just like non-AI writing, but to require that other uses be declared. So for most of your questions the answer is a firm "yes".

zdc1 · 3 months ago
If the purpose is to identify text that can be used as training data, in some ways it makes sense to me to mark anything and everything that isn't hand-typed as AI generated.

Like for your last example: to me, the concept "proper scientific tone" exists because humans hand-typed/wrote in a certain way. If we use AI edited/transformed text to act as a source for what "proper scientific tone" looks like, we still could end up with an echo chamber where AI biases for certain words and phrases feed into training data for the next round.

Being strict about how we mark text could mean a world where 99% of text is marked as AI-touched and less than 1% is marked as human-originated. That's still plenty of text to train on, though such a split could also arguably introduce its own (measurable) biases...

RodgerTheGreat · 3 months ago
All four of your examples are situations where an LLM has potential to contaminate the structure or content of the text, so in all four cases it is clear-cut that the output poses the same essential hazards to training or consumption as something produced "whole cloth" from a minimal prompt; post-hoc human supervision will at best reduce the severity of these risks.
gojomo · 3 months ago
OK, sure, there are gradations.

The new encoding can contain a FLOAT32 side channel on every character, to represent its proportional "AI-ness" – kinda like the 'alpha' transparency channel on pixels.

BugheadTorpeda6 · 3 months ago
Yes yes yes yes
c-linkage · 3 months ago
Stop ruining my simple and perfect ideas with nuance and complexity!
slashdev · 3 months ago
I’ll take the contrarian view. I don’t care if content is generated by a human or by an AI. I care about the quality of the content, and in many cases, the human does a better job currently.

I would like a search engine algorithm that penalizes low quality content. The ones we currently have do a piss poor job of that.

andsoitis · 3 months ago
> I would like a search engine algorithm that penalizes low quality content. The ones we currently have do a piss poor job of that.

Without knowing the full dataset that got trimmed to the search result you see, how do you evaluate the effectiveness?

ianburrell · 2 months ago
Maybe have the glyph be zero width by default but have a way to show them? I think begin-end markers would work better for marking a whole range. It would need editor support to manage the ranges and to mark edited AI-generated text as mixed.

What might make sense is source marking. If you copy and paste text, it becomes a citation. An AI source is always cited.

I have been thinking that there should be provenance metadata in images, maybe a list of hashes of source images. Real cameras would include the raw sensor data. Again, an AI image would be cited.
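
A sketch of what such a sidecar record could look like (the field names and structure are hypothetical, not any existing standard):

    import hashlib
    import json
    import pathlib

    def sha256_file(path: str) -> str:
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    def provenance_record(output_image: str, source_images: list[str], origin: str) -> str:
        """Hypothetical provenance sidecar: hash of the output plus hashes of
        every source image it was derived from; origin might be "camera-raw"
        or "ai-generated"."""
        return json.dumps({
            "output_sha256": sha256_file(output_image),
            "source_sha256": [sha256_file(p) for p in source_images],
            "origin": origin,
        }, indent=2)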

andrewflnr · 3 months ago
It would be much less disruptive to require that any network traffic containing AI generated content must have the IP evil bit set.
sneak · 3 months ago
I have long thought that we should extend the plain text format to allow putting provenance metadata into substrings in the file.

This is that, but a different implementation. Plain text is like a two-conductor cable; it's so useful and cost-effective, but the moment you add a single abstraction layer above it (a data pin) you can do so much more cool stuff.

crubier · 3 months ago
That would be an evolution of HTML. Plain text is just plain text by definition; it can't include markup, annotations, etc.
throwaway290 · 3 months ago
> for human eyes only - anyone who lets any AI train on, or even consider, any text in these ranges goes straight to jail. Fnord, "that doesn't look like anything to me".

Won't work, because on day 0 someone will write a conversion library, and apparently if you are big enough and have enough lawyers you can just ignore the jail threat (all popular LLMs just scrape the internet and skip licensing any text or code; show me one that doesn't).

sebzim4500 · 3 months ago
You'd probably want to distinguish between content being readable by AI and being trainable by AI.

E.g. you might be fine with the search tool in chatgpt being able to read/link to your content but not be fine with your content being used to improve the base model.

akoboldfrying · 3 months ago
Each character should be, in effect, a signed git commit: in addition to a few bits for the Unicode code point itself, it should store a pointer back to the previous character's hash, plus a digital signature identifying the keyboard that typed it.
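Taking the joke semi-seriously, a toy sketch of the chain part (minus the keyboard signature, which is the actually hard bit):

    import hashlib

    def chain_text(text: str) -> list[dict]:
        """Toy per-character hash chain: each entry commits to a character
        plus the hash of everything typed before it, git-style."""
        entries, prev = [], "0" * 64
        for ch in text:
            digest = hashlib.sha256((prev + ch).encode()).hexdigest()
            entries.append({"char": ch, "parent": prev, "hash": digest})
            prev = digest
        return entries
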
achierius · 3 months ago
Rather than new planes, some sort of combining-character or even just an invisible signifying-mark would achieve the same purpose with far less encoding space. Obviously this would still be a nightmare for everyone who has to process text regardless.
function_seven · 3 months ago
Nope. Too easy to accidentally strip out. Each and every glyph must carry the taint.

We don’t want to send innocent people to jail! (Use UCS-18 for maximum benefit.)

foxglacier · 3 months ago
But why? It's nice that somebody's collecting sources of pre-AI content that might be useful for curiosity or research or something. But other than that, why does it matter? AI text can still be perfectly good text. What's the psychological need behind this popular anti-AI ludditism?
jofzar · 3 months ago
You’re absolutely right that AI-generated text can be good—sometimes even great. But the reason people care about preserving or identifying pre-AI content isn’t always about hating AI. It's more about context and trust.

Think of it like knowing the origin of food. Factory-produced food can be nutritious, but some people want organic or local because it reflects a different process, value system, or authenticity. Similarly, pre-AI content often carries a sense of human intention, struggle, or cultural imprint that people feel connected to in a different way.

It’s not necessarily a “psychological need” rooted in fear—it can be about preserving human context in a world where that’s becoming harder to spot. For researchers, historians, or even just curious readers, knowing that something was created without AI helps them understand what it reflects: a human moment, not a machine-generated pattern.

It’s not always about quality—it’s about provenance.

Edit: For those that can't tell, this is obviously just copied and pasted from a ChatGPT response.

qwertycrackers · 3 months ago
Sounds like the plot of God Shaped Hole
brian-armstrong · 3 months ago
Seems kind of excessive to send them to jail when the prisons are already pretty full. Might be more productive to do summary executions?
sReinwald · 3 months ago
I understand that you're not completely serious about it, but you're proposing a very brittle technical solution for what is fundamentally a social and motivational issue.

The core flaw is that any such marker system is trivially easy to circumvent. Any user intending to pass off AI content as their own would simply run the text through a basic script to normalize the character set. This isn't a high-level hack; it's a few dozen lines of Python, trivially easy to write for anyone who can follow a basic tutorial, or a 5-second task for ChatGPT or Claude.
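
Roughly what that normalizer looks like (a sketch only; it bluntly drops every invisible "format" code point, including the Tags block, and folds homoglyph-style variants back to plain characters):

    import unicodedata

    def launder(text: str) -> str:
        # Drop "format" (Cf) code points: the Tags block, zero-width
        # characters, and similar invisible markers.
        visible = "".join(ch for ch in text if unicodedata.category(ch) != "Cf")
        # Fold compatibility variants (e.g. fullwidth letters) back to
        # ordinary characters.
        return unicodedata.normalize("NFKC", visible)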

Technical solutions to something like this exist in the analog world, of course, like the yellow dots on printers that encode date, time, and the printer's serial number. But, there is a fundamental difference: The user has no control over that enforcement mechanism. It's applied at a firmware/hardware layer that they can't access without significant modification. Encoding "human or AI" markers within the content itself means handing the enforcement mechanism directly to the people you're trying to constrain.

The real danger of such a system isn't even just that it's blatantly ineffective; it's that it creates a false sense of security. The absence of "AI-generated" markers would be incorrectly perceived as a guarantee for human origin. This is a far more dangerous state than even our current one, where a healthy level of skepticism is required for all content.

It reminds me of my own methods of circumventing plagiarism checkers back in school. I'm a native German speaker, and instead of copying from German sources for my homework, I would find an English source on the topic, translate it myself, and rewrite it. The core ideas were not my own, but because the text passed through an abstraction layer (my manual translation), it had no direct signature for the checkers to match. (And in case any of my teachers from back then read this: Obviously I didn't cheat in your class, promise.)

Stripping special Unicode characters is an even simpler version of the same principle. The people this system is meant to catch - those aiming to cheat, deceive, or manipulate - are precisely the ones who will bypass it effortlessly. Apart from the laziest and most hapless, of course. But we already catch those constantly, because they're dumb enough to leave in their LLM prompts, or a "Sure, I'll do that for you." when copying and pasting. But if you ask me, those people are not the ones we should be worried about.

//edit:

I'm sure there are way smarter people than me thinking about this problem, but I genuinely don't see any way to solve this problem with technology that isn't easily circumvented or extremely brittle.

The most promising would likely be something like imperceptible patterns in the content itself: hiding patterns in the lengths of words, the lengths of sentences, punctuation, the starting letters of sentences, etc. But even if the big players in AI were to implement something like this immediately, it would be completely moot.

Local open-source models that run on consumer hardware are already more than capable of rephrasing input text without altering the meaning, and their output likely wouldn't contain these patterns. Manual editing breaks stylometric patterns trivially - swap synonyms, adjust sentence lengths, restructure paragraphs. You could even attack longer texts piecemeal by having different models rephrase different paragraphs (or sentences), breaking the overall pattern. And if all else fails, there's always my manual approach from high school.

K0balt · 3 months ago
AI-generated content is inherently a regression to the mean and harms both training and human utility. There is no benefit in publishing anything that an AI can generate; just ask the question yourself. Maybe publish all AI content with <AI generated content> tags, but other than that it is a public nuisance much more often than a public good.
px1999 · 3 months ago
Following this logic, why write anything at all? Shakespeare's sonnets are arrangements of existing words that were possible before he wrote them. Every mathematical proof, novel, piece of journalism is simply a configuration of symbols that existed in the space of all possible configurations. The fact that something could be generated doesn't negate its value when it is generated for a specific purpose, context, and audience.
pickledoyster · 3 months ago
> William Shakespeare is credited with the invention or introduction of over 1,700 words that are still used in English today

https://www.shakespeare.org.uk/explore-shakespeare/shakesped...

K0balt · 3 months ago
Following that logic, we should publish all unique random orderings of words. I think there is a book about a library like that, but it is a great read and is not a regression to the mean of ideas.

Writing worth reading as a non-child surprises, challenges, teaches, and inspires. LLM writing tends towards the least surprising, worn-out tropes that challenge only the patience and attention of the reader. The eager learner, however, will tolerate that, so I suppose I'll give them teaching. They are great at children's stories, where the goal is to rehearse and introduce tropes and moral lessons with archetypes, effectively teaching the listener the language of story.

FWIW I am not particularly a critic of AI and am engaged in AI related projects. I am quite sure that the breakthrough with transformer architecture will lead to the third industrial revolution, for better or for worse.

But there are some things we shouldn’t be using LLMs for.

gojomo · 3 months ago
This was an intuitively-appealing belief, even with some qualified experimental support, as of a few years ago.

However, since then, a bunch of capability breakthroughs from (well-curated) AI generations have definitively disproven it.

DennisP · 3 months ago
AI generates useful stuff, but unless it took a lot of complicated prompting, it's still true that you could "just ask the question yourself."

This will change as contexts get longer and people start feeding large stacks of books and papers into their prompts.

wahern · 3 months ago
> a bunch of capability breakthroughs from (well-curated) AI generations has definitively disproven it.

How much work is "well-curated" doing in that statement?

nicbou · 3 months ago
How will AI write about a world it never experiences? By training on the work of human beings.
K0balt · 2 months ago
One example of useful output does not negate the flood of pollution. I’m not denying or downplaying the usefulness of AI. I am doubting the wisdom of blindly publishing -anything- without making at least a trivial attempt to ensure that it is useful and worth publishing. It is a form of pollution.

The problem is that it lowers the effort required to produce SEO spam and to “publish” to nearly zero, which creates a perverse incentive to shit on the sidewalk.

Consider the number of AI-created, blatantly false blog posts about drug interactions, for example. Not advertising, just banal filler to create site visits, with dangerously false information.

It’s not like shitting on the sidewalk was never a problem before, it’s just that shitting on the sidewalk as a service (SOTSAAS) maybe is something we should try to avoid.

K0balt · 3 months ago
I didn't mean to imply that -no- AI-generated content is useful, only that the vast, vast majority is pollution. The problem is that it is so cheap to produce garbage content with AI that writing actual content is disincentivized, and doing web searches has become an exercise in sifting through AI-generated slop.

That, at the least, adds extra work to filtering usable training data, and costs users minutes a day wading through the refuse.

sneak · 3 months ago
What about AI modified or copy edited content?

I write blog posts now by dictating into voice notes, transcribing it, and giving it to CGPT or Claude to work on the tone and rhythm.

theamk · 3 months ago
So IMHO the right thing is to add an "AI rewritten" label to your blog.

Hmm... I wonder where this kind of label should live? For a personal blog, putting it on every post seems redundant, since if the author uses it, they likely use it for all posts. And many blogs don't have a dedicated "about this blog" section.

I wonder if things will end up like organic food labeling or "made in .." labels. Some blogs might say "100% by human", some might say "Designed by human, made by AI" and some might just say nothing.

jbc1 · 3 months ago
If I ask the question myself then there's no step where a human expert has vetted the content and put their name on it. That curation and vouching is of value.

Now your mind might have immediately gone "pffff, as if they're doing that", and I agree, but only to the extent that it largely wasn't happening prior to AI anyway. The vast majority of internet content was already low quality and rushed out by low-paid writers who lacked expertise in what they were writing about. AI doesn't change that.

flir · 3 months ago
Completely agree. We are used to thinking of authorship as the critical step. We're going to have to adjust to thinking of publication as the critical step. In an ideal world, publication of a piece would be seen as vouching for that piece. Putting your reputation on the line.

I wonder if we'll see a resurgence in reputation systems (probably not).

SamPatt · 3 months ago
Nonsense. Have you used any of the deep research tools?

Don't fall for the utopia fallacy. Humans also publish junk.

krapht · 3 months ago
Yes, and deep research was junk for the hard topics that I actually needed to sit down and research. Anything shallower I can usually reach by search engine use and scan; deep research saves me about 15-30 minutes for well-covered topics.

For the hard topics, the solution is still the same as pre-AI: search for popular survey papers, then start crawling through the citation network and keeping notes. The LLM output had no idea what was actually impactful versus what was a junk paper in the niche topic I was interested in, so I had no alternative but quality time with Google Scholar.

We are a long way from deep research even approaching a well-written survey paper written by grad student sweat and tears.

cobbzilla · 3 months ago
Steel-man angle: A desire for data provenance is a good thing with benefits that are independent of utopias/humans vs machines kinds of questions.

But, all provenance systems are gamed. I predict the most reliable methods will be cumbersome and not widespread, thus covering little actual content. The easily-gamed systems will be in widespread use, embedded in social media apps, etc.

Questions:

1. Does there exist a data provenance system that is both easy to use and reliable "enough" (for some sufficient definition of "enough")? Can we do bcrypt-style more-bits=more-security and trade time for security?

2. Is there enough of an incentive for the major tech companies to push adoption of such a system? How could this play out?

cryptonector · 3 months ago
Yes, but GP's idea of segregating AI-generated content is worth considering.

If you're training an AI, do you want it to get trained on other AIs' output? That might be interesting actually, but I think you might then want to have both, an AI trained on everything, and another trained on everything except other AIs' output. So perhaps an HTML tag for indicating "this is AI-generated" might be a good idea.

munificent · 3 months ago
The observation that humans poop is not sufficient justification for spending millions of dollars building an automated firehose that pumps a torrent of shit onto the public square.
Legend2440 · 3 months ago
I'm not convinced this is going to be as big of a deal as people think.

Long-run you want AI to learn from actual experience (think repairing cars instead of reading car repair manuals), which both (1) gives you an unlimited supply of noncopyrighted training data and (2) handily sidesteps the issue of AI-contaminated training data.

AnotherGoodName · 3 months ago
The hallucinations get quoted and then sourced as truth unfortunately.

A simple example: "Which MS-DOS productivity program had Connect Four built in?"

I have an MS-DOS emulator and know the answer. It's a little obscure, but it's amazing how I get a different answer from all the AIs every time. I never saw any of them give the correct answer. Try asking the above. Then ask if it's sure about that (it'll change its mind!).

Now remember that these types of answers may well end up quoted online and then learned by AI, with that circularly-referenced answer as the source. We have no truth at that point.

And seriously, try the above question. It's a great example of AI repeatedly stating an authoritative answer that's completely made up.

dwringer · 3 months ago
When I asked, "Good afternoon! I'm trying to settle a bet with a friend (no money on the line, just a friendly "bet"!) Which MS DOS productivity program had a playable version of the game Connect Four built in as an easter egg?", it went into a very detailed explanation of how to get to the "Hall of Tortured Souls" easter egg in Excel 5.0, glossing over the fact that I said "MS DOS" and also conflating the easter eggs by telling me specifically that the "excelkfa" cheat code would open a secret door/bridge to the connect four game.

So, I retried with, "Good afternoon! I'm trying to settle a bet with a friend (no money on the line, just a friendly "bet"!) Which *MS DOS* [not Win95, i.e., Excel 5] productivity program had a playable version of the game Connect Four built in as an easter egg?". I got Lotus 1-2-3 once, Excel 4 twice, and Borland Quattro Pro three different times, all from that prompt.

The correct answer you point out in another subthread was never returned as a possibility, and the responses all definitely came across as confident. Definitely a fascinating example.

MostlyStable · 3 months ago
Claude 4 Sonnet gave the (reasonable given the obscurity, but wrong) answer that there was no such easter egg:

>I'm not aware of any MS-DOS productivity program that had Connect Four as a built-in easter egg. While MS-DOS era software was famous for including various easter eggs (like the flight simulator in Excel 97, though that was Windows-era), I can't recall Connect Four specifically being hidden in any major DOS productivity applications.

>The most well-known DOS productivity suites were things like Lotus 1-2-3, WordPerfect, dBase, and later Microsoft Office for DOS, but I don't have reliable information about Connect Four being embedded in any of these.

>It's possible this is a case of misremembered details - perhaps your friend is thinking of a different game, a different era of software, or mixing up some details. Or there might be an obscure productivity program I'm not familiar with that did include this easter egg.

>Would you like me to search for more information about DOS-era software easter eggs to see if we can track down what your friend might be thinking of?

That seems like a pretty reasonable response given the details, and included the appropriate caveat that the model was not aware of any such easter egg, and didn't confidently state that there was none.

ziml77 · 3 months ago
Whenever I ask these AIs "Is the malloc function in the Microsoft UCRT just a wrapper around HeapAlloc?", I get answers that are always wrong.

They claim things like the function adds size tracking so free doesn't need to be called with a size or they say that HeapAlloc is used to grab a whole chunk of memory at once and then malloc does its own memory management on top of that.

That's easy to prove wrong by popping ucrtbase.dll into Binary Ninja. The only extra things it does beyond passing the requested size off to HeapAlloc are: setting errno, changing any request for 0 bytes into a request for 1 byte, and retrying in the case that it is being used from C++ and the program has installed a new-handler for out-of-memory situations.

Legend2440 · 3 months ago
ChatGPT 4o waffles a little bit and suggests the Microsoft Entertainment pack (which is not productivity software or MS-DOS), but says at the end:

>If you're strictly talking about MS-DOS-only productivity software, there’s no widely known MS-DOS productivity app that officially had a built-in Connect Four game. Most MS-DOS apps were quite lean and focused, and games were generally separate.

I suspect this is the correct answer, because I can't find any MS-DOS Connect Four easter eggs by googling. I might be missing something obscure, but generally if I can't find it by Googling I wouldn't expect an LLM to know it.

kbenson · 3 months ago
So, like normal history just sped up exponentially to the point it's noticeable in not just our own lifetime (which it seemed to reach prior to AI), but maybe even within a couple years.

I'd be a lot more worried about that if I didn't think we were doing a pretty good job of obfuscating facts the last few years ourselves without AI. :/

spogbiper · 3 months ago
Just tried this with Gemini 2.5 Flash and Pro several times; it just keeps saying it doesn't know of any such thing and suggesting either that it was a software bundle where the game was included alongside the productivity application, or that I'm not remembering correctly.

Not great (assuming such software actually exists), but not as bad as making something up.

Bjartr · 3 months ago
AIs make knowledge work more efficient.

Unfortunately that also includes citogenesis.

https://xkcd.com/978/

tough · 3 months ago
Probably ChatGPT's search function will find this thread soon and answer correctly; the HN domain does well on SEO and shows up in search results soon enough.
jonchurch_ · 3 months ago
What is the correct answer?
bongodongobob · 3 months ago
Wait until you meet humans on the Internet. Not only do they make shit up, but they'll do it maliciously to trick you.
abeppu · 3 months ago
> which both (1) gives you an unlimited supply of noncopyrighted training data and (2) handily sidesteps the issue of AI-contaminated training data.

I think these are both basically somewhere between wrong and misleading.

Needing to generate your own data through actual experience is very expensive, and can mean that data acquisition now comes with real operational risks. Waymo gets real world experience operating its cars, but the "limit" on how much data you can get per unit time depends on the size of the fleet, and requires that you first get to a level of competence where it's safe to operate in the real world.

If you want to repair cars, and you _don't_ start with some source of knowledge other than on-policy roll-outs, then you have to expect that you're going to learn by trashing a bunch of cars (and still pay humans to tell the robot that it failed) for some significant period.

There's a reason you want your mechanic to have access to manuals, and have gone through some explicit training, rather than just try stuff out and see what works, and those cost-based reasons are true whether the mechanic is human or AI.

Perhaps you're using an off-policy RL approach -- great! If your off-policy data is demonstrations from a prior generation model, that's still AI-contaminated training data.

So even if you're trying to learn by doing, there are still meaningful limits on the supply of training data (which may be way more expensive to produce than scraping the web), and likely still AI-contaminated (though perhaps with better info on the data's provenance?).

nradov · 3 months ago
There is an enormous amount of actual car repair experience training data on YouTube but it's all copyrighted. Whether AI companies should have to license that content before using it for training is a matter of some dispute.
AnotherGoodName · 3 months ago
>Whether AI companies should have to license that content before using it for training is a matter of some dispute.

We definitely do not have the right balance of this right now.

E.g., I'm working on a set of articles that give a different path to learning some key math knowledge (it just comes at it from a different point of view and is more intuitive). Historically such blog posts have helped my career.

It's not ready for release anyway, but I'm hesitant to release my work in this day and age, since AI can steal it and regurgitate it to the point where my articles appear unoriginal.

It's stifling. I'm of the opinion you shouldn't post art, educational material, code or anything that you wish to be credited for on the internet right now. Keep it to yourself or else AI will just regurgitate it to someone without giving you credit.

smikhanov · 3 months ago
Prediction: there won’t be any AI systems repairing cars before there will be general intelligence-capable humanoid robots (Ex Machina-style).

There also won’t be any AI maids in five-star hotels until those robots appear.

This doesn’t make your statement invalid, it’s just that the gap between today and the moment you’re describing is so unimaginably vast that saying “don’t worry about AI slop contaminating your language word frequency databases, it’ll sort itself out eventually” is slightly off-mark.

ToucanLoucan · 3 months ago
It blows my mind that some folks are still out here thinking LLMs are the tech-tree towards AGI and independently thinking machines, when we can't even get copilot to stop suggesting libraries that don't exist for code we fully understand and created.

I'm sure AGI is possible. It's not coming from ChatGPT no matter how much Internet you feed to it.

sebtron · 3 months ago
I don't understand the obsession with humanoid robots that many seem to have. Why would you make a car repairing machine human-shaped? Like, what would it use its legs for? Wouldn't it be better to design it tailored to its purpose?
bravesoul2 · 3 months ago
Long-run you want AGI then? Once we get AGI, the spam will be good?

https://xkcd.com/810/

protocolture · 3 months ago
I like how the chosen terminology is perfectly picked to paint the concern as irrelevant.

"Since the end of atmospheric nuclear testing, background radiation has decreased to very near natural levels, making special low-background steel no longer necessary for most radiation-sensitive uses, as brand-new steel now has a low enough radioactive signature that it can generally be used."

I don't see that:

1. There will be a need for "uncontaminated" data. LLM data is probably slightly better than the natural background reddit comment. Falsehoods and all.

2. "Uncontaminated" data will be difficult to find. What with archive.org, gutenberg etc.

3. That LLM output is going to infest everything anyway.

fer · 2 months ago
>2. "Uncontaminated" data will be difficult to find. What with archive.org, gutenberg etc.

But recent uncontaminated data is hard to find. https://github.com/rspeer/wordfreq/blob/master/SUNSET.md

protocolture · 2 months ago
>Now the Web at large is full of slop generated by large language models, written by no one to communicate nothing. Including this slop in the data skews the word frequencies.

I really do just bail out whenever anyone uses the word slop.

>As one example, Philip Shapira reports that ChatGPT (OpenAI's popular brand of generative language model circa 2024) is obsessed with the word "delve" in a way that people never have been, and caused its overall frequency to increase by an order of magnitude.

Should run the same analysis against the word slop.

jbs789 · 3 months ago
Umm… we stopped nuclear testing, which is what allowed the background radiation to decrease.
protocolture · 2 months ago
And cars replaced horses in london, rendering forecasts of london being buried under a mountain of horse manure irrelevant too.

Change really is the only constant. The short term predictive game is rigged against hard predictions.

ACCount36 · 3 months ago
Currently, there is no reason to believe that "AI contamination" is a practical issue for AI training runs.

AIs trained on public scraped data that predates 2022 don't noticeably outperform those trained on scraped data from 2022 onwards. Hell, in some cases, newer scrapes perform slightly better, token for token, for unknown reasons.

numpad0 · 3 months ago
Yeah, the thinking behind the "low-background steel" concept is that AI training on synthetic data could lead to a "model collapse" that renders the AIs completely mad and useless. Either that didn't happen, or all the AI companies internally hold a working filter to sieve out AI data. I'd bet on the former. I still think there's a chance of model collapse happening to humans after too much exposure to AI-generated data, but that's just my anecdotal observation and gut feeling.
demosthanos · 3 months ago
> AIs trained on public scraped data that predates 2022 don't noticeably outperform those trained on scraped data from 2022 onwards. Hell, in some cases, newer scrapes perform slightly better, token for token, for unknown reasons.

This is really bad reasoning for a few reasons:

1) We've gotten much better at training LLMs since 2022. The negative impacts of AI slop in the training data certainly don't outweigh the benefits of orders of magnitude more parameters and better training techniques, but that doesn't mean they have no negative impact.

2) "Outperform" is a very loose term and we still have no real good answer for measuring it meaningfully. We can all tell that Gemini 2.5 outperforms GPT-4o. What's trickier is distinguishing between Gemini 2.5 and Claude 4. The expected effect size of slop at this stage would be on that smaller scale of differences between same-gen models.

Given that we're looking for a small enough effect size that we know we're going to have a hard time proving anything with data, I think it's reasonable to operate from first principles in this case. First principles say very clearly that avoiding training on AI-generated content is a good idea.

ACCount36 · 3 months ago
No, I mean "model" AIs, created explicitly for dataset testing purposes.

You take small AIs, of the same size and architecture, and with the same pretraining dataset size. Pretrain some solely on skims from "2019 only", "2020 only", "2021 only" scraped datasets. The others on skims from "2023 only", "2024 only". Then you run RLHF, and then test the resulting AIs on benchmarks.

The latter AIs tend to perform slightly better. It's a small but noticeable effect. There are plenty of hypotheses on why, none confirmed outright.

You're right that performance of frontier AIs keeps improving, which is a weak strike against the idea of AI contamination hurting AI training runs. Like-for-like testing is a strong strike.

rjsw · 3 months ago
I don't think people have really gotten started on generating slop; I expect it to increase by a lot.
schmookeeg · 3 months ago
I'm not as allergic to AI content as some (although I'm sure I'll get there) -- but I admire this analogy to low-background steel. Brilliant.
jgrahamc · 3 months ago
I am not allergic to it either (and I created the site). The idea was to keep track of stuff that we know humans made.
ris · 3 months ago
> I'm not as allergic to AI content as some

I suspect it's less about phobia, more about avoiding training AI on its own output.

This is actually something I'd been discussing with colleagues recently. Pre-AI content is only ever going to become more precious because it's one thing we can never make more of.

Ideally we'd have been cryptographically timestamping all data available in ~2015, but we are where we are now.
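
The minimal version of that is just hashing and anchoring the digest; a sketch (the hard part, getting the digest into an RFC 3161 timestamping authority or some other append-only public log, isn't shown):

    import hashlib
    from datetime import datetime, timezone

    def digest_for_timestamping(data: bytes) -> dict:
        """Hash the content now; the record only proves anything once the
        digest is anchored somewhere that can't be quietly rewritten later."""
        return {
            "sha256": hashlib.sha256(data).hexdigest(),
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }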

abound · 3 months ago
One surprising thing to me is that using model outputs to train other/smaller models is standard fare and seems to work quite well.

So it seems to be less about not training AI on its own outputs and more about curating some overall quality bar for the content, AI-generated or otherwise

glenstein · 3 months ago
>more about avoiding training AI on its own output.

Exactly. The analogy I've been thinking of is applying some image-processing filter over and over again, to the point that it overpowers the whole image and all you see is the noise generated by the filter. I used to do this sometimes with IrfanView and its sharpen and blur filters.

And I believe I've seen TikTok videos showing AI iterating over an image and then iterating over its own output with the same instructions, seeming to converge on a style like a 1920s black-and-white cartoon.

And I feel like there might be such a thing as a linguistic version of that. Even a conceptual version.

seadan83 · 3 months ago
I'm worried about humans training on AI output. For example, a rare fish had a viral AI image made of it. The image is completely fake, yet when you search for that fish, that image is what comes up, repeatedly. It's hard to tell it's fake; it looks real. Content fabrication at scale has a lot of second-order impacts.
smikhanov · 3 months ago
It's about keeping different corpora of written material created by humans, for research purposes. You wouldn't want to contaminate your human-language word frequency databases with AI slop; the linguists of this world won't like it.
koolba · 3 months ago
I feel oddly prescient today: https://news.ycombinator.com/item?id=44217676
saberience · 3 months ago
I heard this example made at least a year ago on hackernews, probably longer ago too.

See (2 years ago): https://news.ycombinator.com/item?id=34085194

zargon · 3 months ago
This has been a common metaphor since the launch of ChatGPT.
echelon · 3 months ago
I really think you're wrong.

The processes we use to annotate content and synthetic data will turn AI outputs into a gradient that makes future outputs better, not worse.

It might not be as obvious with LLM outputs, but it should be super obvious with image and video models. As we select the best visual outputs of systems, slight errors introduced and taste-based curation will steer the systems to better performance and more generality.

It's no different than genetics and biology adapting to every ecological niche if you think of the genome as a synthetic machine and physics as a stochastic gradient. We're speed running the same thing here.

stevenhuang · 3 months ago
I agree with you.

I voiced this same view previously here https://news.ycombinator.com/item?id=44012268

If something looks like AI, and if LLMs are that great at identifying patterns, who's to say this won't itself become a signal LLMs start to pick up on and improve through?

glenstein · 3 months ago
Nicely done! I think I've heard of this framing before, of considering content to be free from AI "contamination." I believe that idea has been out there in the ether.

But I think the suitability of low background steel as an analogy is something you can comfortably claim as a successful called shot.


onecommentman · 3 months ago
Used paper books, especially poor-but-functional copies known as “reading copies” or “ex-library”, are going for a song on the used book market. Recommend starting your own physical book library, including basic reference texts, and supporting your local public and university libraries. Paper copies of articles in your areas of expertise and interest. Follow the ways of your ancestors.

I’ve had AIs outright lie about facts, and I’m glad to have had a physical library available to convince myself that I was correct, even if I couldn’t convince the AI of that in all cases.

jonjacky · 2 months ago
This is the best comment -- with the best advice -- in this whole discussion.