In short, a Wikimedia Foundation account was doing some sort of test which involved loading a large number of user scripts. They decided to just start loading random user scripts, instead of creating some just for this test.
The user who ran this test is a Staff Security Engineer at WMF, and naturally they decided to do this test under their highly-privileged Wikimedia Foundation staff account, which has permissions to edit the global CSS and JS that runs on every page.
One of those random scripts was a two-year-old malicious script from ruwiki. The script injects itself into the global JavaScript that runs on every page, and then into the user scripts of any user who runs into it, so it started spreading and doing damage very fast. This triggered tons of alerts, until the decision was made to turn the wiki read-only.
Or just restore from backup across the board. Assuming they do their backups well, this shouldn't be too hard (especially since it's currently in read-only mode, which means no new updates).
Are you sure?
Are you $150 million ARR sure?
Are you $150 million ARR, you'd really like to keep your job, you're not going to accidentally leave a hole or blow up something else, sure?
I agree, mostly, but I'm also really glad I don't have to put out this fire. Cheering them on from the sidelines, though!
> One of those random scripts was a 2 year old malicious script from ruwiki. This script injects itself in the global Javascript on every page, and then in the userscripts of any user that runs into it, so it started spreading and doing damage really fast.
It's a mediawiki feature: there's a set of pages that get treated as JS/CSS and shown for either all users or specifically you. You do need to be an admin to edit the ones that get shown to all users.
Yes, you can have your own JS/CSS that’s injected into every page. This is pretty useful for widgets, editing tools, or customizing the website’s appearance.
Wow. This worm is fascinating. It seems to do the following:
- Injects itself into the MediaWiki:Common.js page to persist globally, and into users' common.js pages to do the same as a fallback
- Uses jQuery to hide UI elements that would reveal the infection
- Vandalizes 20 random articles with a 5000px-wide image and another XSS script from basemetrika.ru
- If an admin is infected, it uses the Special:Nuke page to delete 3 random articles from the global namespace, AND uses Special:Random with action=delete to delete another 20 random articles
EDIT: The Special:Nuke behavior is really weird. It takes the default list of articles to nuke from the search field, which could be any group of articles, and rubber-stamps nuking them. It does this three times in a row.
As someone on the Wikipediocracy forums pointed out, basemetrika.ru does not exist. I get an NXDomain response trying to resolve it. The plot thickens.
> Vandalizes 20 random articles with a 5000px wide image and another XSS script from basemetrika.ru
Note that while this looks like it's trying to trigger an XSS, what it's doing is ineffective, so basemetrika.ru would never get loaded (even ignoring that the domain doesn't exist).
I wouldn't be surprised either. But the original formatting of the worm makes me think it was human-written, or maybe AI-assisted, but not 100% AI. It has a lot of unusual stylistic choices that I don't believe an AI would intentionally output.
I would. AI-designed software in general does not include novel ideas. And this is the kind of novel software AI is not great at, because there's not much training data.
Of course it's very possible someone wrote it with AI help. But almost no chance it was designed by AI.
> Cleaning this up is going to be an absolute forensic nightmare for the Wikimedia team since the database history itself is the active distribution vector.
Well, the worm didn't get root -- so if Wikimedia snapshots or made a recent backup, probably not so much of a nightmare? Then the diffs can tell a fairly detailed forensic story, including indicators of motive.
Snapshotting is a very low-overhead operation, so you can make them very frequently and then expire them after some time.
Even if they reset to several days ago and lose, say, thousands of edits, even tens of thousands of minor edits, they're still in a pretty good place. Losing a few days of edits is less than ideal but very tolerable for Wikipedia as a whole.
At $work we host business knowledge databases. Interestingly enough, if you need to revert a day or two of edits, you're better off doing it ASAP rather than postponing and mulling it over. Especially if you can keep a dump or an export around.
People usually remember what they changed yesterday and still have uploaded files and such around. It's not great, but quite possible. Maybe you need to pull a few content articles out of the broken state if they ask. No huge deal.
If you decide to roll back after a week or so, editors get really annoyed: now they are usually forced to backtrack and reconcile the state of the knowledge base, maybe you need both a current and a rolled-back system, it may have regulatory implications, and it's a huge pain in the neck.
Nah, you can snapshot every 15 minutes. The snapshot interval depends on the frequency and volume of changes, and it's up to them how to allocate that capacity... but it's definitely doable, and there are real reasons for doing so. You can collapse deltas between snapshots after some time to make them last longer. I'd be surprised if they don't do that.
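The tiered-expiry idea described above (frequent snapshots, thinned out as they age) can be sketched roughly like this. The tiers and spacings are made-up numbers for illustration, not anything Wikimedia actually uses:

```javascript
// A toy tiered snapshot-retention policy: keep 15-minute snapshots for a
// day, hourly ones for a week, and daily ones after that. Times are epoch
// milliseconds; the tiers are illustrative only.
const MINUTE = 60 * 1000;

const TIERS = [
  { maxAge: 24 * 60 * MINUTE, spacing: 15 * MINUTE },    // < 1 day old
  { maxAge: 7 * 24 * 60 * MINUTE, spacing: 60 * MINUTE }, // < 1 week old
  { maxAge: Infinity, spacing: 24 * 60 * MINUTE },        // older than a week
];

function snapshotsToKeep(snapshotTimes, now) {
  const keep = [];
  let lastKept = -Infinity;
  // Walk oldest-first; keep a snapshot only if it is far enough from the
  // previously kept one, according to its own age tier.
  for (const t of [...snapshotTimes].sort((a, b) => a - b)) {
    const tier = TIERS.find((x) => now - t < x.maxAge);
    if (t - lastKept >= tier.spacing) {
      keep.push(t);
      lastKept = t;
    }
  }
  return keep;
}
```

Real systems (ZFS, LVM, database WAL archiving) implement the same shape of policy; the point is just that dense recent snapshots plus sparse old ones keep storage bounded.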
As an aside, snapshotting would have prevented a good deal of the horror stories shared by people who give AI access to the filesystem. Well, as long as you don't give it root...
A theory on phab: "Some investigation was made in the Russian Wikipedia Discord chat; maybe it will be useful.
1. In 2023, vandal attacks were made against two Russian-language alternative wiki projects, Wikireality and Cyclopedia. Here https://wikireality.ru/wiki/РАОрг is an article about the organizers of these attacks.
2. In 2024, ruwiki user Ololoshka562 created a page https://ru.wikipedia.org/wiki/user:Ololoshka562/test.js containing the script used in these attacks. It was inactive for the next 1.5 years.
3. Today, sbassett mass-loaded other users' scripts into his global.js on meta, maybe to test global API limits: https://meta.wikimedia.org/wiki/Special:Contributions/SBasse... . In one edit, he loaded Ololoshka's script: https://meta.wikimedia.org/w/index.php?diff=prev&oldid=30167... and ran it."
I remember someone mass-defacing ruwiki almost exactly a year ago (March 3, 2025) with some immature insults toward certain ruwiki admins. If I'm not mistaken, it was a similar method.
- There are constant defacement incidents caused by edits to unprotected / semi-protected templates
- There were incidents of UI mistranslation (because MediaWiki translation is crowdsourced)
- The attack that was applied is well known in the Russian community; it is pretty much the standard "admin-woodpecker". The standard woodpecker (some people call it neo-woodpecker) renamed all pages at high speed (I have known of this since 2007; the name woodpecker appeared many years later); then MediaWiki added throttling for renames; then the neo-woodpecker reappeared in different years (usually associated with throttling-bypass CVEs). Early admin-woodpeckers were much more destructive (they destroyed dozens of MediaWiki websites due to lack of backups). The nuking admin-woodpecker is quite a boring one, but I think (I hope) there are some AbuseFilter guardrails configured to prevent more complex woodpeckers.
- The attack initiator is 100% a well-known user; there are not too many users who applied the woodpecker in the first place, and not too many "upyachka" fans (which indicates the user edited before 2010 -- back then, active editors knew each other much better). But it is quite pointless to discuss who exactly the initiator is.
- Wikireality page is hijacked by a small group and does not represent the reality.
Also, I’m surprised an XSS attack like this hasn’t yet been used to harvest credentials like passwords through browser autofill[0].
It seems like the worm code/the replicated code only really attacks stuff on site. But leaking credentials (and obviously people reuse passwords across sites) could be sooo much worse.
I think autofill-based credential harvesting is harder than it sounds because browsers and password managers treat saved credentials as a separate trust boundary, and every vendor implements different heuristics. The tricky part is getting autofill to fire without a real user gesture and then exfiltrating values, since many browsers require exact form attributes or a user activation and several managers ignore synthetic events.
If an attacker wanted passwords en masse they could inject fake login forms and try to simulate focus and typing, but that chain is brittle across browsers, easy to detect and far lower yield than stealing session tokens or planting persistent XSS. Defenders should assume autofill will be targeted and raise the bar with HttpOnly cookies, SameSite=strict where practical, multifactor auth, strict Content Security Policy plus Subresource Integrity, and client side detection that reports unexpected DOM mutations.
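A minimal sketch of the response headers behind the mitigations listed above. The cookie name and CSP source lists are placeholders, not advice specific to any site:

```javascript
// Illustrative hardening headers: an HttpOnly/SameSite session cookie and
// a CSP that forbids inline and third-party script. Values are placeholders.
function hardenedHeaders(sessionToken) {
  return {
    // Session cookie: unreadable from page JS, never sent cross-site.
    "Set-Cookie":
      `session=${sessionToken}; HttpOnly; Secure; SameSite=Strict; Path=/`,
    // Only same-origin script sources; blocks most injected payloads.
    "Content-Security-Policy":
      "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'self'",
  };
}
```

Note that a `script-src 'self'` policy like this would also break legitimate user scripts loaded from other wikis, which is exactly the tension the thread is describing.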
"The incident appears to have been a cross-site scripting hack. The origin of the malicious scripts was a userpage on the Russian Wikipedia. The script contained Russian-language text.
During the shutdown, users monitoring the [https://meta.wikimedia.org/wiki/special:RecentChanges Recent changes page on Meta] could watch WMF operators manually reverting what appeared to be a worm propagated in common.js.
Hopefully this means they won't have to do a database rollback, i.e. no lost edits. "
Interesting to note how trivial it is today to fake something as coming "from the Russians".
Why do you think it was faked? The woodpecker is well-known Russian tech; the earliest version I can find now was created in 2013 (but I personally saw it in 2007). It is a well-known Russian sword of Damocles hanging over misconfigured MediaWiki websites.
The Wikipedia community takes a cavalier attitude towards security. Any user with "interface administrator" status can change global JavaScript or CSS for all users on a given Wiki with no review. They added mandatory 2FA only a few years ago...
Prior to this, any admin had that ability; it was taken away after English Wikipedia admins reverted Wikimedia's changes to site presentation (Media Viewer).
But that's not all. Most "power users" and admins install "user scripts", which are unsandboxed JavaScript/CSS gadgets that can completely change the operation of the site. Those user scripts are often maintained by long-abandoned user accounts with no two-factor authentication.
Based on the fact that user scripts are globally disabled now, I'm guessing this was a vector.
The Wikimedia Foundation knows this is a security nightmare. I certainly complained about this when I was an editor.
But most editors that use the website are not professional developers and view attempts to lock down scripting as a power grab by the Wikimedia Foundation.
> Any user with "interface administrator" status can change global JavaScript or CSS for all users on a given Wiki with no review.
True, but there aren't very many interface administrators. It looks like there are only 137 right now [0], which I agree is probably more than there should be, but that's still a relatively small number compared to the total number of active users. But there are lots of bots/duplicates in that list too, so the real number is likely quite a bit smaller. Plus, most of the users in that list are employed by Wikimedia, which presumably means that they're fairly well vetted.
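The linked query can be reproduced with the MediaWiki `allusers` API; the filtering heuristic below for separating out bots is a naive guess (bot flags and "…Bot" names are only conventions), so treat it as a sketch:

```javascript
// Build the (real) MediaWiki API query for listing interface
// administrators, then count likely humans in the result. The bot
// heuristic is a rough illustration, not a reliable classifier.
function interfaceAdminQueryUrl(host) {
  const params = new URLSearchParams({
    action: "query",
    format: "json",
    list: "allusers",
    augroup: "interface-admin",
    auprop: "groups",
    aulimit: "500",
  });
  return `https://${host}/w/api.php?${params}`;
}

function countLikelyHumans(allusers) {
  // allusers: the query.allusers array, e.g. [{ name, groups }]
  return allusers.filter(
    (u) => !(u.groups || []).includes("bot") && !/bot$/i.test(u.name)
  ).length;
}
```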
> Based on the fact user scripts are globally disabled now I'm guessing this was a vector.
Disabled at which level?
Browsers still allow user scripts via tools like Tampermonkey and Greasemonkey, and that's not enforceable by (and arguably not even trivially visible to) sites, including Wikipedia.
As I say that out loud, I figure there's a separate ecosystem of Wikipedia-specific user scripts, but arguably the same problem exists.
That makes the fix pretty easy. Write a regex to detect the evil script, and revert every page to a historic version without the script.
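A rough sketch of that regex-and-revert idea, assuming a hypothetical worm signature and a newest-first list of revisions (the real cleanup would need to be far more careful, e.g. about obfuscated variants):

```javascript
// Find the newest revision whose text does NOT match the worm signature,
// i.e. the revision to revert to. The regex and revision objects here are
// hypothetical; mw.loader.load is how MediaWiki user scripts typically
// pull in other scripts, which is why a worm payload would contain it.
const WORM_SIGNATURE = /basemetrika\.ru|mw\.loader\.load\([^)]*test\.js/;

function lastCleanRevision(revisions) {
  // revisions: [{ id, content }], ordered newest first
  for (const rev of revisions) {
    if (!WORM_SIGNATURE.test(rev.content)) return rev.id;
  }
  return null; // every stored revision is infected; needs manual cleanup
}
```

The catch, as others note in the thread, is building a signature that catches every variant without false positives across millions of pages.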
So, like the Samy worm? (https://en.wikipedia.org/wiki/Samy_%28computer_worm%29)
"Claude> Yes, you're absolutely right! I'm sorry!"
this is both really cool and really really insane
https://www.mediawiki.org/wiki/Manual:Interface/JavaScript
On the other hand,
>a Staff Security Engineer at WMF, and naturally they decided to do this test under their highly-privileged Wikimedia Foundation staff account
seriously?
It also never affected Wikipedia proper, just the smaller Meta site (used for inter-project coordination).
I’ve always thought the fact that MediaWiki sometimes lets editors embed JavaScript could be dangerous.
[0] https://varun.ch/posts/autofill/
https://wikipediocracy.com/forum/viewtopic.php?f=8&t=14555
https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(techni...
https://old.reddit.com/r/wikipedia/comments/1rllcdg/megathre...
Apparent JS worm payload: https://ru.wikipedia.org/w/index.php?title=%D0%A3%D1%87%D0%B...
[0]: https://en.wikipedia.org/w/api.php?action=query&format=json&...
https://en.wikipedia.org/wiki/Wikipedia:Interface_administra...
https://en.wikipedia.org/wiki/Special:ListUsers/interface-ad...
Unfortunately, Wikipedia runs on insecure user scripts created by volunteers who tend to be under the age of 18.
There might be more editors trying to résumé-boost if editing Wikipedia under your real name didn't invite endless harassment.
You can also upload scripts to be shared and executed by other users.
As in, a user can upload whatever they wish and it will be shown to them and run, as JS, fully privileged and all.
>There are currently 15 interface administrators (including two bots).
https://en.wikipedia.org/wiki/Wikipedia:Interface_administra...