I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out [1]. Has the technical and scientific community in the US already forgotten this huge breach of trust? This is especially jarring at a time when the US is burning its political goodwill at an unprecedented rate (at least unprecedented during the lifetimes of most of us) and talking about digital sovereignty has become mainstream in Europe. As a company trying to promote a product, I would stay as far away from that memory as possible, at least if you care about international markets.
>I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out
I just think it's silly to obsess over words like that. There are many words that take on different meanings in different contexts and can be associated with different events, ideas, products, time periods, etc. Would you feel better if they named it "Polyhedron"?
What the OP was talking about is the negative connotation that goes with the word; it's certainly a poor choice from a marketing point of view.
You may say it's "silly to obsess", but it's like naming a product "Auschwitz" and saying "it's just a city name" -- it ignores the power of what Geoffrey N. Leech called "associative meaning" in his taxonomy of "Seven Types of Meaning" (Semantics, 2nd ed. 1981): speaking that city's name evokes images of piles of corpses of gassed, undernourished human beings, walls of gas chambers with fingernail scratches, and lampshades made of human skin.
I have to say I had the same reaction. Sure, "prism" shows up in many contexts. But here it shows up in the context of a company and product that is already constantly in the news for its lackluster regard for other people's expectations of privacy and copyright, and for generally trying to "collect it all", as it were -- and, as GP mentioned, in an international context that doesn't put these efforts in the best light.
They're of course free to choose this name. I'm just also surprised they would do so.
Plus there are lots of “legacy” products with the name prism in them. I also don’t think the public makes the connection. It’s mainly people who care to be aware of government overreach who think it’s a bad word association.
Large-scale technology projects that people are suspicious and anxious about. There are a lot of people anxious that AI will be used for mass surveillance by governments. So you pick the name of another project that was used for mass surveillance by a government.
You do realize that obsessing over words like that is a pretty major part of what programming and computer science is, right? Linguistics is highly intertwined with computer science.
>Has the technical and scientific community in the US already forgotten this huge breach of trust?
Have you ever seen the comment section of a Snowden thread here? A lot of users here call for Snowden to be jailed, call him a Russian asset, play down the reports, etc. These are either NSA sock puppet accounts or people who won't bite the hand that feeds them (employees of companies willing to breach their users' trust).
What Snowden did was heroic. What was shameful was the world's underwhelming reaction. Where were all the images in the media of protest marches, like those against the Vietnam war?
Someone once said "Religion is opium for the people." - today, give people a mobile device and some doom-scrolling social media celebrity nonsense app, and they wouldn't notice if their own children didn't come home from school.
I came here based on the headline expecting some more CIA & NSA shit; that word has been tarnished for a few decades in the better part of the IT community (the part that actually cares about this craft beyond a paycheck).
This comment might make more sense if there was some connection or similarity between the OpenAI "Prism" product and the NSA surveillance program. There doesn't appear to be.
Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap.
OpenAI has a former NSA director on its board. [1] This connection makes the dilution of the term "PRISM" in search results a potential benefit to NSA interests.
>Has the technical and scientific community in the US already forgotten this huge breach of trust?
Yes, imho, there is a great deal of ignorance of the actual contents of the NSA leaks.
The agitprop against Snowden as a "Russian agent" has successfully occluded the actual scandal, which is that the NSA has built a totalitarian-authoritarian apparatus that is still in wide use.
Autocrats' general hubris about their own superiority has been weaponized against them. Instead of actually addressing the issue with America's repressive military industrial complex, they kill the messenger.
Probably gonna get buried at the bottom of this thread, but:
There's a good chance they just asked GPT-5.2 for a name. I know for a fact that when some of the OpenAI models get stuck in the "weird" state associated with LLM psychosis, three of the things they really like talking about are spirals, fractals, and prisms. Presumably, there's some general bias toward those concepts in the weights.
> Has the technical and scientific community in the US already forgotten this huge breach of trust?
We haven’t forgotten… it’s mostly that we’re all jaded given the fact that there have been zero ramifications, so what’s the use of complaining - you’re better off pushing shit up a hill
We used to have “SEO spam”, where people would try to create news (and other) articles associated with some word or concept to drown out some scandal associated with that same word or concept. The idea was that people searching on Google for the word would see only the newly created articles, and not see anything scandalous. This could be something similar, but aimed at future LLMs trained on these articles. If LLMs learn that the word “Prism” means a certain new thing rather than the surveillance program, the LLMs will unlearn the older association, thereby hiding the Snowden revelations.
As a datapoint, when I read this headline, the very first thing I thought was "wasn't PRISM some NSA shit? Is OpenAI working with the NSA now?"
It's a horrible name for any product coming out of a company like OpenAI. People are super sensitive to privacy and government snooping and OpenAI is a ripe target for that sort of thinking. It's a pretty bad association. You do not want your AI company to be in any way associated with government surveillance programs no matter how old they are.
I mean, it's also the name of the national engineering education journal and a few other things. There are only about 14,000 five-letter words in English, so you're going to have collisions.
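For fun, a back-of-envelope sketch of that collision math (a toy birthday-problem approximation; the 14,000-word pool and the assumption that products pick names uniformly at random are both simplifications):

    import math

    def p_name_collision(n_products: int, n_words: int = 14_000) -> float:
        """Birthday-problem approximation: P(at least two of n_products
        independently pick the same word from a pool of n_words)."""
        return 1 - math.exp(-n_products * (n_products - 1) / (2 * n_words))

    for n in (50, 200, 1000):
        print(n, round(p_name_collision(n), 3))  # ~0.08, ~0.76, ~1.0

So even a few hundred products drawing from that pool make a clash more likely than not.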
Yeah, to be fair I would be hesitant to have anything to do with any program called prism as well. Hard to imagine that no one brought this up when they were thinking of a name.
I think it's probably just apparent to a small set of people; we're usually the ones yelling at the stupid cloud technologies that are ravaging online privacy and liberty, anyway. I was expecting some sort of OpenAI automated user data handling program, with the recent venture into adtech, but since it's a science project and nothing to do with surveillance and user data, I think it's fine.
If it was part of their adtech systems and them dipping their toe into the enshittification pool, it would have been a legendarily tone deaf project name, but as it is, I think it's fine.
Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.
On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.
I'm curious how it compares to Overleaf in terms of features? Putting aside the AI aspect entirely, I'm simply curious if this is a viable Overleaf competitor -- especially since it's free.
I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).
I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.
Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents and I've found it effective at fixing up layouts for me.
The deeper I got, the more I realized that really supporting the entire LaTeX toolchain in WASM would mean simulating an entire Linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (which wasn't working with WASM because of resource limits), etc.
The WASM constraints make sense given the resource limits, especially for mobile. If you are moving that compute server-side, though, I am curious about the unit economics. LaTeX pipelines are surprisingly heavy, and I wonder how you manage the margins on that infrastructure at scale.
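For anyone wondering what that server-side path minimally involves, here is a hedged sketch (not Crixet's actual pipeline; latexmk and its flags are real, but the file layout, limits, and error handling are illustrative assumptions). Every compile is a CPU-bound, sandboxed run with a hard timeout, which is exactly where the unit economics bite:

    import subprocess
    from pathlib import Path

    def compile_latex(project_dir: Path, timeout_s: int = 60) -> Path:
        """Compile main.tex with latexmk and return the path to the PDF."""
        subprocess.run(
            ["latexmk", "-pdf", "-interaction=nonstopmode", "main.tex"],
            cwd=project_dir,
            timeout=timeout_s,      # runaway documents are common; kill them
            check=True,             # a non-zero exit surfaces the build failure
            capture_output=True,
        )
        return project_dir / "main.pdf"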
I was using Crixet before I switched over to Typst[0] for all of my writing. However, back when I did use Crixet, I never used its AI features. It was just a much better alternative to Overleaf for me. Sad to see that AI will be forced on all Crixet users now.
> Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.
They’re quite open about Prism being built on top of Crixet.
It seems bad for OpenAI to make this about LaTeX documents, which will now be associated, visually, with AI slop. The opposite of what anyone wants, really. Nobody wants you to know they used a chatbot!
This is going to be the concrete block which finally breaks the back of the academic peer review system, i.e. it's going to be a DDoS attack on a system which didn't even handle the load before LLMs.
Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...
I tried Prism, but it's actually a lot more work than just using Claude Code. The latter allows you to "vibe code" your paper with no manual interaction, while Prism actually requires you to review every change.
I actually think Prism promotes a much more responsible approach to AI writing than "copying from chatgpt" or the likes.
Or it makes gatekeepers even more important than before. Every submission to a journal will be desk-rejected, unless it is vouched for by someone one of the editors trusts. And people won't even look at a new paper, unless it's vouched for by someone / published in a venue they trust.
That will just create a market for hand-writers. Good thing the economy is doing very well, right? So there aren't that many desperate people who will do it en masse and for peanuts.
Mini paper: that future isn’t the AI replacing humans; it’s about humans drowning in cheap artifacts.
New unit of measurement proposed: verification debt.
Also introduces: Recursive Garbage → model collapse
This appears to just be the output of LLMs itself? It credits GPT-5.2 and Gemini 3 exclusively as authors, has a public domain license (appropriate for AI output) and is only several paragraphs in length.
Which proves its own points! Absolutely genius! The cost asymmetry of producing versus checking for garbage has truly become a problem in recent years, with the advent of LLMs and generative AI in general.
Yes, I did it as a joke inspired by the Prism release. But unexpectedly, it makes a good point. And the funny part for me was that the paper lists only LLMs as authors.
Also, in a world where AI output is abundant, we humans become the scarce resource: the "tools" in the system that provide some connection to reality (grounding) for the LLMs.
Plot twist: humans become the new Proof of Work consensus mechanism. Instead of GPUs burning electricity to hash blocks, we burn our sanity verifying whether that Medium article was written by a person or a particularly confident LLM.
"Human Verification as a Service": finally, a lucrative career where the job description is literally "read garbage all day and decide if it's authentic garbage or synthetic garbage." LinkedIn influencers will pivot to calling themselves "Organic Intelligence Validators" and charge $500/hr to squint at emails and go "yeah, a human definitely wrote this passive-aggressive Slack message."
The irony writes itself: we built machines to free us from tedious work, and now our job is being the tedious work for the machines. Full circle. Poetic even. Future historians (assuming they're still human and not just Claude with a monocle) will mark this as the moment we achieved peak civilization: where the most valuable human skill became "can confidently say whether another human was involved."
Bullish on verification miners. Bearish on whatever remains of our collective attention span.
Human CAPTCHA exists to figure out whether your clients are human or not, so you can segment them and apply human pricing. Synthetics, of course, fall into different tiers. The cheaper ones.
From my perspective as a journal editor and a reviewer, these kinds of tools cause many more problems than they actually solve. They make the 'barrier to entry' for submitting vibed, semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was someone using their undergraduate affiliation (in accounting) to submit a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.
I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.
GenAI largely seems like a DDoS on free resources. The effort to review this stuff is now massively more than the effort to "create" it, so really, what is the point of even submitting it? The reviewer could have generated it themself. Seeing it in software development where coworkers are submitting massive PRs they generated but hardly read or tested. Shifting the real work to the PR review.
I'm not sure what the final state would be here, but it seems we are going to find it increasingly difficult to find any real factual information on the internet going forward. Particularly as AI starts ingesting its own generated fake content.
> The effort to review this stuff is now massively more than the effort to "create" it
I don't doubt the AI companies will soon announce products that will claim to solve this very problem, generating turnkey submission reviews. Double-dipping is very profitable.
It appears LLM-parasitism isn't close to being done, and keeps finding new commons to spoil.
> Seeing it in software development where coworkers are submitting massive PRs they generated but hardly read or tested. Shifting the real work to the PR review.
I've seen this complaint a lot of places, but the solution to me seems obvious. Massive PRs should be rejected. This was true before AI was a thing.
In some ways it might be a good thing that shorthand signals of quality are being destroyed because it forces all of us to meaningfully engage with the work. No more LGTM +1 when every PR looks good.
I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it's drowning out everything of value.
There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?", and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere - business, science, etc. - which is: OK, you have all this stuff, but is it valuable? Which of it actually takes away value?
The first casualty of LLMs was the slush pile--the unsolicited submission pile for publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to produce things that don't immediately look like garbage, which makes the volume of submissions skyrocket while the time-to-reject also goes up slightly, because each one passes the first (but only the first) absolute-garbage filter.
I totally agree. I spend my whole day from getting up to going to bed (not before reading HN!) on reviews for a conference I'm co-organizing later this year.
So I was not amused about this announcement at all, however easy it may make my own life as an author (I'm pretty happy to do my own literature search, thank you very much).
Also remember, we have no guarantee that these tools will still exist tomorrow, all these AI companies are constantly pivoting and throwing a lot of things at the wall to see what sticks.
OpenAI chose not to build a serious product, as there is no integration with the ACM DL, the IEEE DL, SpringerNatureLink, the ACL Anthology, Wiley, Cambridge/Oxford/Harvard University Press etc. - only papers that are not peer reviewed (arXiv.org) are available/have been integrated. Expect a flood of BS your way.
When my students submit a piece of writing, I can ask them to orally defend their opus maximum (more and more often, ChatGPT's...); I can't do the same with anonymous authors.
Speaking of conferences, might this not be the way to judge this work? You could imagine only orally defended work to be publishable, or at least have the prestige of vetting, in a bit of an old-school science revival.
I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.
Maybe you get reimbursed for half as long as there are no obvious hallucinations.
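The incentive math of the deposit idea is simple enough to sketch (a toy model; the deposit size and acceptance odds are hypothetical, and the half-refund tweak above would just scale the rejection branch):

    def expected_cost(deposit: float, p_accept: float) -> float:
        """Expected out-of-pocket cost of one submission when the deposit
        is fully refunded on acceptance and kept on rejection."""
        return deposit * (1 - p_accept)

    # A careful author risks little; a spammer shotgunning near-zero-odds
    # papers pays nearly the full deposit on every attempt.
    print(expected_cost(100.0, p_accept=0.5))   # 50.0
    print(expected_cost(100.0, p_accept=0.02))  # 98.0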
The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.
If the penalty for a crime is a fine, then that law exists only for the lower class
In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis. A fine isn't going to stop tobacco companies from spamming submissions that say smoking doesn't cause lung cancer, or social media companies from spamming submissions that say their products aren't detrimental to mental health.
That would be tricky. I often submitted to multiple high-impact journals, going down the list until someone accepted it. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be payment for the effort to screen the paper, but then I would expect the reviewers to be paid for their time.
You must have no idea how scientific publishing works. The typical acceptance rate for an ok/good journal is 10-20% (and it was like that even before LLMs). Also, it's a great idea to make the business of scientific publishing even more predatory - now scientists writing articles for free, reviewing for free, and then having to pay for publication will also have to pay to even submit something, with a 90% chance of rejection. Also think about what kind of incentives that will create.
> There could be a submission fee that would be fully reimbursed if the submission is actually accepted for publication.
While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!
This keeps repeating in different domains: we lower the cost of producing artifacts and the real bottleneck is evaluating them.
For developers, academics, editors, etc. - in any review-driven system, the scarcity is around good human judgement, not text volume. AI doesn't remove that constraint, and arguably puts more of a spotlight on the ability to separate the shit from the quality.
Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency".
This has been discussed previously as "workslop", where you produce something that looks at surface level like high quality work, but just shifts the burden to the receiver of the workslop to review and fix.
This fits into the broader evolution of the visualization market.
As data grows, visualization becomes as important as processing. This applies not only to applications, but also to relating texts through ideas close to transclusion in Ted Nelson’s Xanadu. [0]
In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]
> Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency"
Or the providers of the models are capable of providing accepted/certified guarantees as to the quality of the output that their models and systems produce.
I'm curious if you'd be in favor of other forms of academic gate keeping as well. Isn't the lower quality overall of submissions (an ongoing trend with a history far pre-dating LLMs) an issue? Isn't the real question (that you are alluding to) whether there should be limits to the democratization of science? If my tone seems acerbic, it is only because I sense cognitive dissonance between the anti-AI stance common among many academics and the purported support for inclusivity measures.
"which is really not the point of these journals at all"- it seems that it very much is one of the main points? Why do you think people publish in journals instead of just putting their work on the arxiv? Do you think postdocs and APs are suffering through depression and stressing out about their publications because they're agonizing over whether their research has genuinely contributed substantively to the academic literature? Are academic employers poring over the publishing record of their researchers and obsessing over how well they publish in top journals in an altruistic effort to ensure that the research of their employees has made the world a better place?
I don't really understand how me saying that this tool isn't good for science counts as gatekeeping. The vibe-written papers that I am talking about have little-to-no valuable scientific content, and as such would always be rejected. It's just that it's way easier than before to produce something that _looks_ reasonable from a five-second glance, and that causes additional load on an already strained system.
I also don't understand your second paragraph at all.
> whether there should be limits to the democratization of science?
That is an interesting philosophical question, but not the question we are confronted with. A lot of LLM assisted materials have the _signals_ of novel research without having its _substance_.
If I may be the Devil's advocate, I'm not sure I fully agree with "The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research)".
Plenty of researchers hate writing and will only do it at gunpoint. Or rather, delegate it all to their underlings.
I don't see an issue with generative writing in principle. The Devil is in the details, but I don't see this as much different from "hey grad student, write me this paper". And generative writing already exists as copy-paste, which makes up like 90% of any random paper given the incrementality of it all.
I was initially a little indignant at the "find me some plausible refs and stick them in the paper" section of the video but, then again, isn't this what most people already do? Just copy-paste the background refs from the colleague's last paper introduction, maybe add one from a talk they saw in the meantime, plus whatever the group & friends produced since then.
My experience is most likely skewed (as all are), but I haven't met a permanent researcher that wrote their own papers yet, and most grad students and postdocs hate writing. Literally the only times I saw someone motivated to write papers (in a masochistic way) were just before applying to a permanent position or while wrapping up their PhD.
Onto your point, though, I agree this is somewhat worrisome in that, by reaction, the barrier to entry might rise by way of discriminating based on credentials.
I also am not sure why so many people are vehemently against this. I would bet that at least 90% of researchers would agree that the writing up is definitely not the part of the work they prefer (to stay polite). As you mentioned, the work is usually delegated to students, and those students already had access to LLMs if they wanted to generate the writing.
In my opinion, most of those tools become problematic when people use them without caution. Unfortunately, even in sciences, people are not as careful and pragmatic as we would like to imagine they are and a lot of people are cutting corners, especially in those "lesser" areas like writing and presenting your work.
Overall, I think this has the potential to reshape the publication system, which is long overdue.
I am a rather slow writer who certainly might benefit from something like Prism.
A good tool would encourage me, help me while I am writing, and maybe set up barriers that keep me from taking shortcuts (e.g. pushing me to re-read the relevant paragraphs of a paper that I cite).
Prism does none of these things - instead it pushes me towards sloppy practices, such as sprinkling citations between claims.
Why won't ChatGPT tell me how to build a bomb but Prism will happily fabricate fake experimental results for me?
The comparison to make here is that a journal submission is effectively a pull request to humanity's scientific knowledge base. That PR has to be reviewed. We're already seeing the effects of this with open source code - the number of PR submissions has skyrocketed, overwhelming maintainers.
This is still a good step in the direction of AI-assisted research but, as you said, for the moment it creates as many problems as it solves.
On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.
As I understand it, the problem isn't publication or how it's changing over time; it's about the challenge of producing new science when the existing body of work is muddied with plausible lies. That warrants a new process by which to assess the inherent quality of a paper, but even if one emerges as globally distributed, the cheats have a huge advantage, considering the asymmetry between the effort to vibe-produce and the tedious human review.
As a non-scientist (but long-time science fan and user), I feel your pain with what appears to be a layered, intractable problem.
> > who are looking to 'boost' their CV
Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.
I appreciate and sympathize with this take. I'll just note that, in general, journal publications have gone considerably downhill over the last decade, even before the advent of AI. Frequency has gone up, quality has gone down, and the ability to actually check if everything in the article is actually valid is quite challenging as frequency goes up.
This is a space that probably needs substantial reform, much like grad school models in general (IMO).
Perhaps the real issue is the gate-keeping scientific publishing model. Journals had a place and role, and peer review is a critical aspect of the scientific process, but new times (the internet, citizen science, higher levels of scientific literacy, and now AI) diminish the benefits of journals creating "barriers to entry", as you put it.
I for one hope not to live in a world where academic journals fall out of favor and are replaced by vibe-coded papers by citizen scientists with inflated egos from one too many “you’re absolutely right!” Claude responses.
Is it at all possible to have a policy that bans the submission of any AI written text, or text that was written with the assistance of AI tools? I understand that this would, by necessity, be under an "honor system" but maybe it could help weed out papers not worth the time?
This is probably a net negative, as there are many very good scientists with not very strong English skills.
The early years of LLMs (when they were good enough to correct grammar but not enough to generate entire slop papers) were an equalizer. We may end up here, but it would be unfortunate.
> these kinds of tools cause many more problems than they actually solve
For whom? For OpenAI, these tools are definitely the solution. They are developing by throwing various AI-powered stuff at the wall to see what sticks. These tools also demonstrate to the investors that innovation did not stall and that AI usage is growing.
Same with Microsoft: none of the AI stuff they are shoving down the users' throats were actually designed for the users. All this stuff is only for the token usage to grow for the shareholders to see.
Similar with Google although no one can deny real innovation happening there.
Why not filter out papers from people without credentials? And also publicly call them out and register them somewhere, so that their submission rights can be revoked by other journals and conferences after "vibe writing".
These acts must have consequences so that people stop doing them. You can use AI if you are doing it well, but if you are wasting everyone's time you should just be excluded from the discourse altogether.
What do credentials have to do with good science? There are already some roadblocks to publishing science in important-sounding journals, but it's important for the neutrality of the scientific process that in principle anyone can do it.
I'm certain your journal will be using LLMs in reviewing incoming articles, if they aren't already. I also don't think this is in response to the flood of LLM generated articles. Even if authors were the same as pre-LLM, journals would succumb to the temptation, at least at the big 5 publishers, which already have a contentious relationship with the referees.
Wouldn't AI actually be good for filtering, given it's going to be a lot better at knowing what has been published? It also seems possible that it could work out which papers have novel ideas, or at least come up with some kind of likelihood score.
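A hedged sketch of what such a likelihood-of-novelty score could look like (an assumption-laden toy: the inputs are abstract embeddings from whatever sentence-embedding model you trust, and low similarity to prior work is at best a weak proxy for novelty):

    import numpy as np

    def novelty_score(new_abstract: np.ndarray, corpus: np.ndarray) -> float:
        """1 minus the max cosine similarity between a new abstract's
        embedding and a matrix of published-abstract embeddings;
        higher means less like anything already in the corpus."""
        new_unit = new_abstract / np.linalg.norm(new_abstract)
        corpus_unit = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
        return 1.0 - float(np.max(corpus_unit @ new_unit))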
The real problem is that researchers are pushed to publish as their publication is the only way their career can advance. It's not even to "boost" your CV, as a researcher your publication history IS your CV.
It was already a problem 25 years ago when I did my Ph.D., and I don't think things changed that much since then.
This encourages researchers to publish barely valuable results, or to cut one article into multiple ones with small variations to increase their number of publications. It also encourages publishers to create more conferences and more journals to respond to the need that researchers have to publish.
I remember many experienced professors telling me cynically about this, about all the techniques they had to blow up one small finding into many articles.
Anyway - research slop started way before AI. It's probably going to make the problem worse, but the root issue has been there for a long time.
I am very sympathetic to your point of view, but let me offer another perspective. First off, you can already vibe-write slop papers with AI, even in LaTeX format--tools like Prism are not needed for that. On the other hand, it can really help researchers improve the quality of their papers. I'm someone who collaborates with many students and postdocs. My time is limited and I spend a lot of it on LaTeX drudgery that can and should be automated away, so I'm excited for Prism to save time on writing, proofreading, making TikZ diagrams, grabbing references, etc.
This is what I see: you need more of an active, accomplished helper at the keyboard.
If I can't have that, the next best thing is a helper while I'm at the keyboard my damn self.
>Why LaTeX is the bottleneck: scientists spend hours aligning diagrams, formatting equations, and managing references—time that should go to actual science, not typesetting
This is supposed to be only a temporary situation until people recover from the cutbacks of the 1970's, and a more comprehensive number of scientists once again have their own secretary.
Looks like the engineers at Crixet were tired of waiting.
I felt the same, but then thought of experts in their field. For example, my PhD advisor would already know all these papers. For him the prompt would actually be similar to what was shown in the video.
It feels generally a bit dangerous to use an AI product to work on research when (1) it's free and (2) the company hosting it makes money by shipping productized research
I think the goal is to capture high quality training data to eventually create an automated research product. I could see the value of having drafts, comments, and collaboration discussions as a pattern to train the LLMs to emulate.
I know many people have negative opinions about this.
I'd also like to share what I saw. Since GPT-4o became a thing, everyone I know who submits academic papers in my non-English-speaking country (N > 5) has been writing papers in our native language and translating them with GPT-4o exclusively. It has been the norm for quite a while. If hallucination is such a serious problem, it has been so for a year and a half.
Translation is something Large Language Models are inherently pretty good at, without controversy, even though the output still should be independently verified. It's a language task and they are language models.
Are they good at translating scientific jargon specific to a niche within a field? I have no doubt LLMs are excellent at translating well-trodden patterns; I'm a bit suspicious otherwise..
I've heard that now that AI conferences are starting to check for hallucinated references, rejection rates are going up significantly. See also the NeurIPS hallucinated references kerfuffle [1].
Honestly, hallucinated references should simply get the submitter banned from ever submitting again. Anyone submitting papers or anything else with hallucinated references shall be publicly shamed. The problem isn't only the LLMs hallucinating; it's lazy and immoral humans who don't bother to check the output, wasting everyone's time and corroding public trust in science and research.
Yeah, that's not going to work for long. You can draw a line in 2023 and say "Every paper before this isn't AI". But in the future, you're going to have AI-generated papers citing other AI slop papers that slipped through the cracks; because of the cost of doing research vs. the cost of generating AI slop, the AI slop papers will start to outcompete the real research papers.
[1] https://news.ycombinator.com/item?id=46787165
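For what it's worth, the existence half of that check is cheap to automate. A minimal sketch of the kind of script a venue might run, assuming the public Crossref REST API (which does return 404 for unregistered DOIs); the regex and error handling are simplified, and a DOI that resolves can of course still be cited for something it doesn't say:

    import re
    import requests

    DOI_RE = re.compile(r"10\.\d{4,9}/[^\s,}]+")

    def unresolvable_dois(bibtex: str) -> list[str]:
        """Return DOIs found in a BibTeX string that Crossref doesn't know."""
        missing = []
        for doi in sorted(set(DOI_RE.findall(bibtex))):
            r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
            if r.status_code == 404:  # never registered: likely hallucinated
                missing.append(doi)
        return missing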
Also, Nazism. But different context, years ago, so whatever I guess?
Hell, let's just call it Hitler. Different context!
Given what they do, it is an insidious name. Words matter.
Edit: see my comment here in a Snowden thread: https://news.ycombinator.com/item?id=46237098
[1]: https://openai.com/index/openai-appoints-retired-us-army-gen...
(Full disclosure: yes, they will be handing in PII on demand, like the same kind of deals; this is 'normal' - 2012 shows us no one gives a shit.)
I personally associate Prism with [Silverlight - Composite Web Apps With Prism](https://learn.microsoft.com/en-us/archive/msdn-magazine/2009...) due to personal reasons I don't want to talk about ;))
[0] https://crixet.com
[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...
[2] https://news.ycombinator.com/item?id=42009254
[3] https://news.ycombinator.com/item?id=46394937
Any plans of having Typst integrated anytime soon?
To end up with yet another shitty web app (because it runs inside a browser, in particular for its interface)?
Why not focus efforts on making a proper program (you know, with IBM menu bars and keyboard shortcuts), but with collaborative tools too?
[0]: https://typst.app
Also, yes: LaTeX being source code, it's much easier to get an AI to generate LaTeX than to integrate with MS Word.
Exactly, and I think this is good news. Let's break it so we can fix it at last. Nothing will happen until a real crisis emerges.
And you think the Indians will not hand-write the output of LLMs?
Not that I have a better suggestion myself...
A little joke on Prism :)
> The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
https://en.wikipedia.org/wiki/Brandolini%27s_law
HN Search: curl AI slop - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[0] https://news.ycombinator.com/item?id=40295661
[1] https://news.ycombinator.com/item?id=22368323
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
They probably wanted: "... that I should read?" So that this is at least marketed as more than a fake-paper generation tool.
The target audience of this tool is not academics; it's OpenAI investors.
So yes, you use it to write the paper, but soon it is public knowledge anyway.
I am not sure there is much to learn from the authors' draft.
[1]: https://statmodeling.stat.columbia.edu/2026/01/26/machine-le...