AWS account root access on a language package registry for 11 days. Not EC2 root - AWS account root. Complete control over IAM, S3, CloudTrail, every-damn-thing.
They're claiming "no evidence of compromise" based on CloudTrail logs that AWS root could have deleted or modified. They even admit they "Enabled AWS CloudTrail" after regaining control - meaning CloudTrail wasn't running during the compromise window.
You cannot verify supply chain integrity from logs on a system where root was compromised, and you definitely can't verify it when the logs didn't exist (they enabled them during remediation?).
So basically, somebody correct me if I'm wrong, but every gem published Sept 19-30 is suspect. Production Ruby applications running code from that window have no way to verify it wasn't backdoored. The correct response is to freeze publishing, rebuild from scratch (including re-publishing any packages published in that window? Ugh, I don't even know how to do this!), and verify against offline backups. Instead they rotated passwords and called it done.
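For what it's worth, a rough sketch of what "verify against offline backups" could look like - the mirror directories and the idea of a pre-Sept-19 snapshot are purely hypothetical here, not anything Ruby Central has described doing:

```python
# Hypothetical sketch: compare gems served by the live index against an
# offline backup taken before the compromise window. Paths are made up.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def diff_against_backup(live_dir: str, backup_dir: str) -> list[str]:
    """Return names of .gem files whose bytes differ from the offline copy."""
    suspect = []
    for live_gem in Path(live_dir).glob("*.gem"):
        backup_gem = Path(backup_dir) / live_gem.name
        if not backup_gem.exists():
            suspect.append(f"{live_gem.name} (no offline copy to compare)")
        elif sha256_of(live_gem) != sha256_of(backup_gem):
            suspect.append(f"{live_gem.name} (checksum mismatch)")
    return suspect

if __name__ == "__main__":
    for name in diff_against_backup("./mirror-current", "./mirror-2025-09-18"):
        print("SUSPECT:", name)
```

Anything flagged would still need manual review, and a gem with no offline copy proves nothing either way.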
Isn't the subtext of this post pretty clearly that the unauthorized actor was Andre Arko, who had until days prior all the same access to RubyGems.org already?
The impression I have reading this is that they're going out of their way to make it clear they believe it was him, but aren't naming him because doing so would be accusing him of a criminal act.
Let's say that they are 100% correct, we parse the subtext as text, it was totally him.
We still do not know the critical details of how (and when) he stored the root password he copied out of their password manager (encrypted in his own password manager? on his pwned laptop? in Dropbox? we'll never know!), so the whole chain of custody is still broken.
CloudTrail logs for the last 90 days are enabled by default, cannot be turned off, and are immutable, even by root. If you view this “event” as starting when Arko was supposed to have their access terminated, that’s within the 90 day window and you can indeed trust the logs from that period.
CloudTrail's 90-day immutable Event History only logs management events (IAM changes, instance launches, bucket creation). It does NOT log:
* S3 object reads/writes (GetObject, PutObject) - these are "data events" requiring explicit configuration[0]
* SSH/RDP to EC2 instances - CloudTrail only captures AWS API calls, not OS-level activity[1]
With root access for 11 days, someone could modify gem files in S3, backdoor packages, SSH into build servers - none of it would appear in the logs they reviewed. Correct?
[0] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/l...
[1] https://repost.aws/questions/QUVsPRWwclS0KbWOYXvSla3w/cloud-...
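To be concrete about what the 90-day Event History *can* show, here's a minimal boto3 sketch that pulls management events and flags log-tampering calls. The event names are just the obvious suspects, not anything Ruby Central has published, and nothing here will ever surface S3 GetObject/PutObject on gem files unless data events were separately configured - which is the point above.

```python
from datetime import datetime, timedelta, timezone
import boto3

# Event names that would indicate someone touching the logging pipeline.
# Illustrative list, not exhaustive.
SUSPICIOUS = ["StopLogging", "DeleteTrail", "UpdateTrail", "PutEventSelectors"]

ct = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(days=90)
paginator = ct.get_paginator("lookup_events")

for name in SUSPICIOUS:
    pages = paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": name}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            # Only management (control-plane) events ever show up here.
            print(event["EventTime"], event["EventName"], event.get("Username", "?"))
```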
Thinking about this a bit more... it sure is interesting that, right around the launch of a competing project, something just happens that might reasonably compromise all trust in the previous incumbent, isn't it? Odd!
https://www.reddit.com/r/ruby/comments/1o2bxol/comment/ninly...
>> Why did Joel give so little time of advance notice before publishing his post revealing Andre’s production access? That struck me as irresponsible disclosure, but I may have missed something.
> I decided to publish when I did because I knew that Ruby Central had been informed and I wanted the world to be informed about how sloppy Ruby Central were with security, despite their security posturing as an excuse to take over open source projects.
> What I revealed changed nothing about Ruby Central’s security, since André had access whether I revealed that he did or not. When you have security information that impacts lots of people, you publish it so they can take precautions. That is responsible disclosure.
How can you trust gem.coop isn't already mining request logs + IPs to monetize lists of companies using specific packages & versions? Beyond the privacy/ethical concerns, that's super useful data for hackers looking for vulnerable apps.
No single person should have GitHub owner rights + the AWS root password for a major language's package manager and ecosystem just sitting around on their laptop while they fly around to different conferences (as Andre seems to have done, showing off that he still had the login to RubyGems' AWS root account while in Japan).
Not going to bother reading the article, but will chime in here that the recommendation from AWS is to have a separate security account within your organization that only holds your CloudTrail logs. This does potentially double your cost, as you only get one CloudTrail for free, and it's very useful to have an in-account trail for debugging purposes.
Organizations are also useful because you can attach SCPs to your accounts that deny broad classes of activities even to the root user.
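A minimal sketch of that kind of SCP with boto3 - the policy name and OU id are made up, and note that SCPs bind the root user of member accounts but never the organization's management account itself:

```python
import json
import boto3

# Deny statement applied org-wide: even a member account's root user can't
# stop or rewrite the trail.
DENY_TRAIL_TAMPERING = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCloudTrailTampering",
        "Effect": "Deny",
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "cloudtrail:UpdateTrail",
            "cloudtrail:PutEventSelectors",
        ],
        "Resource": "*",
    }],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-cloudtrail-tampering",   # hypothetical policy name
    Description="Nobody, including account root users, touches the trail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(DENY_TRAIL_TAMPERING),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",        # hypothetical OU or account id
)
```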
You can set up EC2 instances in a way that just having AWS root access doesn't give you SSH/console access to the instances. You can still do things like Run Command, but that leaves a very obvious trail (although even this is preventable with enough effort).
Also, you can enable CloudTrail log file validation, which ensures you know whether you're looking at tampered logs or not.
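Enabling it is basically a one-liner (the trail name below is hypothetical). Once it's on, CloudTrail writes signed digest files alongside the logs, and `aws cloudtrail validate-logs` can later tell you whether any delivered log file was modified, deleted, or is missing:

```python
import boto3

ct = boto3.client("cloudtrail")
# Validation only covers logs delivered after this call, so turn it on
# before you need it, not during the incident.
ct.update_trail(Name="main-trail", EnableLogFileValidation=True)
```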
Really it all depends on how their accounts are set up. Unless you know the operational details you can't make a call here.
I've run a multi-million dollar/year AWS Org for the last decade or so and setting things up this way is kind of brass tacks.
I believe this is a scenario where AWS recommends multiple accounts.
1. Create another "management" AWS account, and make your existing AWS account a child of it.
2. Ensure no one ever logs in to the "management" account, as there shouldn't be any business purpose in doing so. For example, you should require a hardware key to log in.
3. Configure the "management" account to force child accounts to enable AWS Config, AWS CloudTrail, etc. Also force them to duplicate logs to the "management" account.
Step 2 is important. At the end of the day, an organization can always find a way to render their security measures useless.
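A sketch of what step 3 can look like using an organization trail - run from the "management" account, with hypothetical names, and assuming the log bucket, its CloudTrail bucket policy, and Organizations trusted access for CloudTrail are already in place:

```python
import boto3

ct = boto3.client("cloudtrail")  # credentials for the "management" account

trail = ct.create_trail(
    Name="org-wide-trail",                     # hypothetical trail name
    S3BucketName="org-security-logs-bucket",   # bucket lives in the management account
    IsOrganizationTrail=True,                  # pulls in every member account
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
ct.start_logging(Name=trail["Name"])
```

Member accounts can't stop or delete an organization trail, which is the property that matters here.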
IMO the only way to avoid doing a total rebuild is to have Andre Arko:
1. Admit that he was the unauthorized actor (which means he's probably admitting to a crime?)
2. Have him attest that he didn't exfiltrate anything or compromise the integrity of the service while committing a crime.
If I were Ruby Central, I would offer clemency on #1 in exchange for #2, and I think #2 helps Andre Arko.
So you would expect people to accept that the entire root chain of custody for the Ruby supply chain is attested by ... A guy saying he didn't do anything bad? I have a cool cryptocurrency you might wanna check out that I definitely don't have a backdoor to!
Ruby Central isn't capable of giving clemency. They could refuse to testify in any prosecution, but they don't get to pick whether a relevant attorney general or district attorney decides to prosecute.
In the US, at least, private parties cannot grant immunity from prosecution for a crime (only public prosecutors of the jurisdiction against whose laws the crime was committed can do that), and they may face legal jeopardy for agreeing, or even offering, not to report a crime in exchange for some good or service of value, as that is the definition of blackmail.
- Account created 14 hours ago.
- Posts article crammed full of accusations.
- Has a strong, well-formed opinion that "it's a crime", but apparently didn't read the content where the subject of the accusations has already disclosed, both in private and in public, that they had access.
My account is also very new, because I have opted to discard my previous ones. I have used it to comment predominantly on this topic, as I sympathise with the maintainers.
So in the interests of making a similar disclosure is there any chance you are affiliated with RubyCentral through a business relationship with them, their legal counsel, a marketing or PR agency or anything of that nature?
Given the context of the post, it seems like "Enabled AWS CloudTrail, GuardDuty, and DataDog alerting" means "enabled alerts via CloudTrail, GuardDuty, and Datadog", not "enabled CloudTrail logging". Otherwise the comment about reviewing CloudTrail wouldn't make sense.
So the attacker turns logging off (was log file validation enabled? it usually isn't in Terraform), which does not fire an alert because there is no alerting. Then they do their bad stuff... then modify the logs (which are in an S3 bucket on the compromised account, remember!), then turn logging back on? The whole point is that alerts go outside AWS. They go to, like, your inbox or PagerDuty or whatever. If they had no alerts, then what use are their logs, which could have been modified? Do you think they set up cross-account logging or had enable_log_file_validation set to true?
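For anyone who does have read access to the account, answering that last question is a trivial check - a hedged sketch, no Ruby Central specifics assumed:

```python
import boto3

ct = boto3.client("cloudtrail")
# List each trail and the settings that matter for this thread.
for trail in ct.describe_trails()["trailList"]:
    print(
        trail["Name"],
        "| log file validation:", trail.get("LogFileValidationEnabled"),
        "| logs to bucket:", trail.get("S3BucketName"),
        "| organization trail:", trail.get("IsOrganizationTrail"),
    )
```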
This was my understanding as well, but earlier I couldn't find any documentation to prove this so I never wrote a comment.
CloudTrail can be configured to save logs to S3 or CloudWatch Logs, but I think that even if you were to disable, delete, or tamper with these logs, you can still search and download unaltered logs directly from AWS using the CloudTrail Events page.
Arko wanted a copy of the HTTP Access logs from rubygems.org so his consultancy could monetize the data, after RC determined they didn't really have the budget for secondary on-call.
Then after they removed him as a maintainer he logged in and changed the AWS root password.
In a certain sense this post justifies why RC wanted so badly to take ownership - I mean, here you have a maintainer who clearly has a desire to sell user data to make a buck - but the way it all played out with terrible communication and rookie mistakes on revoking access undermines faith in RC's ability to secure the service going forward.
Not to mention no explanation here of who legally "owned" the rubygems repo (not just the infra) and why they thought they had the right to claim it, which is something disputed by the "other" side.
Just a mess all around, nobody comes off looking very good here!
I can give the benefit of the doubt that making a proposal to monetize user data is a poorly-considered, bottom-scraping effort to find a replacement funding source for the on-call work. Most of us would not consider it, but I think it should be ok to occasionally pitch some bad ideas, all else being equal and lacking full context.
But messing with the credentials crosses an ethical line that isn't excused no matter how much you disagree with the other party's actions.
Really disappointing. It's such a huge security concern and privacy/ethical lapse; I am super disappointed in him, despite his contributions to the world of Ruby package management.
He's now started a competing gem.coop package manager, and while they haven't released a privacy policy, it does make me suspicious about how they were planning to fund it.
No single person should have GitHub owner rights + the AWS root password for a major language's package manager and ecosystem just sitting around on their laptop while they fly around to different conferences, e.g. in Japan (as Andre did while hacking RubyGems' AWS root account to show off).
“Following these budget adjustments, Mr. Arko’s consultancy, which had been receiving approximately $50,000 per year for providing the secondary on-call service, submitted a proposal offering to provide secondary on-call services at no cost in exchange for access to production HTTP access logs, containing IP addresses and other personally identifiable information (PII).”
I'd recommend people wait for a response - RubyCentral is spinning up a gazillion accusations right now and has been over the last few days (and the write-up is also incomplete, because why did they fire every dev here and place Marty Haught in charge specifically? They never were able to logically explain this. Plus, why didn't they release this write-up before? It feels very strange; they could have clarified things earlier, but to me it seems they kind of waited and then tried to come up with some explanation that, to me, makes no real sense).
I also highly recommend not accepting RubyCentral's current strategy of posting very isolated emails and insinuating that "this is the ultimate, final proof". We all know that an email conversation often involves lots of emails, so doing a piecemeal release really feels strange. Plus, there were also in-person meetings - why does RubyCentral not release what was discussed there? Was there a conflict of interest due to financial pressure?
Also, as was already pointed out, RubyCentral has lawyered up already - see the discussions on reddit. Is this really the transparency we as users and developers want to see? This is blowing up by the day, and no matter from which side you look at it, RubyCentral sits at the center; or, at the very least, made numerous mistakes and tries to cover past mistakes by ... making more mistakes. I think it would be better to dissolve RubyCentral. Let's start from a clean state here; let's find rules of engagement that doesn't put rich corporations atop the whole ecosystem.
Last but not least - this tactical slandering is really annoying. If they have factual evidence, they need to bring the matter to a court; if they don't, they need to stop slandering people. To my knowledge RubyCentral hasn't yet started a court case, and I have a slight suspicion that they won't, because we, as the general public, would then demand COMPLETE transparency, including ALL of RubyCentral's members and their activities here. So my recommendation is: wait for a while, let those accused respond.
Yeah, this is incredibly confusing. The stance Ruby Central has taken since the takeover of the RubyGems (offline) tooling on GitHub was that it was necessary for supply chain security, but if this happened literally within a couple of weeks of when they tried (and apparently failed?) to remove all of the previous maintainers, how does this add any amount of confidence in their ability to secure things going forward? If they can't even properly remove the people they already knew had access, and whom they went out of their way to try to remove, it's hard to feel like consolidating their ownership over all of the tooling is going to be an improvement.
This makes Ruby Central look even worse. TFA is only concerned with the root user, and the timeline ends at September 30, but Arko was able to confirm as late as October 5 that he still had access to _other_ accounts with production access. Ruby Central doesn't seem interested in mentioning in the article that even after being notified about unauthorized access, they still hadn't rotated all relevant credentials almost a week later.
Welp, now that there is confirmation that lawyers are involved, the chances there will be any sort of open and transparent reconciliation process have plummeted.
The rogue maintainers have apparently been successful enough with their stewardship over the years that people still use and care about the tools they maintained. On the other hand, the new maintainers sponsored by the rich corporation have managed to draw immediate scrutiny about how they became the new maintainers, and apparently failed to effectively protect their new assets from a major breach within two weeks of acquiring them, despite security being their main argument for why they should be in charge in the first place.
This is a pretty hilarious and long-winded way to say "we have no idea how to lock someone out of a web service:"
> 1. While Ruby Central correctly removed access to shared credentials through its enterprise password manager prior to the incident, our staff did not consider the possibility that this credential may have been copied or exfiltrated to other password managers outside of Ruby Central’s visibility or control.
> 2. Ruby Central failed to rotate the AWS root account credentials (password and MFA) after the departure of personnel with access to the shared vault.
Right?! Did nobody there think to actually disable the accounts? These are the people who are harping about "security" being the reason for the ham-fisted takeover of the source repos, but they didn't secure the production infrastructure?
No matter how you slice it this is miserable root password security. Why do maintainers need root access? No one in my org has root access but me and all those creds are tied to hardware MFA locked in my MDF.
If they really have ethical concerns regarding sharing data with third parties, maybe they should update their privacy policies accordingly?
"We collect information related to web traffic such as IP addresses and geolocation data for security-relevant events and to analyze how and where RubyGems.org is used."
"We may share aggregate or de-identified information with third parties for research, marketing, analytics, and other purposes, provided such information does not identify a particular individual."
> “Following these budget adjustments, Mr. Arko’s consultancy, (…), submitted a proposal offering to provide secondary on-call services at no cost in exchange for access to production HTTP access logs, containing IP addresses and other personally identifiable information (PII). The offer would have given Mr. Arko’s consultancy access to that data, so that they could monetize it by analyzing access patterns and potentially sharing it with unrelated third-parties.”
WTF. This is the same guy who launched gems.coop, a competing index for Ruby gems, recently.
On the other hand, RubyCentral's actions were truly incompetent; I don't know anymore who is worse.
> failed to rotate the AWS root account credentials ... stored in a shared enterprise password manager
Unfortunately, many enterprises follow the poor practice of storing shared credentials in a shared password manager without rotating them when an employee with prior access leaves the company.
You might be surprised/horrified at the number of shops I run into that use shared creds without a password manager, still use creds from ex-employees because changing them smells too much like work, and ask "why would I do that?" when you ask about rotation.
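The audit that catches the "creds from ex-employees" case isn't hard either - a hedged sketch that flags stale IAM access keys (it obviously won't catch a copied root password, which is the whole problem in this incident):

```python
from datetime import datetime, timedelta, timezone
import boto3

MAX_AGE = timedelta(days=90)  # arbitrary rotation window for the example
iam = boto3.client("iam")

# Walk every IAM user and flag active access keys past the rotation window.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = datetime.now(timezone.utc) - key["CreateDate"]
            if key["Status"] == "Active" and age > MAX_AGE:
                print(f"{user['UserName']}: {key['AccessKeyId']} is {age.days} days old")
```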
Presuming, as a group full of security peers kibitzing about this in a chat right now all do, that the "unauthorized actor" here is Andre Arko, this is Ruby Central pretty directly accusing Arko of having hacked Rubygems.org; it depicts what seems to be a black letter 18 USC 1030 violation.
Any part of this narrative could be false, but I don't see a way to read it and take it as true where Arko's actions would be OK.
Putting myself in Arko’s shoes, I can imagine (charitably!) the following choice, realizing that I still have access and shouldn’t:
1. Try to get in touch, quickly, with someone with the power to fix it and explain what needs to be rotated.
2. Absent 1, especially if it cannot be done quickly, rotate the credentials personally to get them back to a controlled state (by someone who actually understands the security implications) with the intent to hand them off. Especially if you still _think_ of yourself as responsible for the infrastructure, this is a no-brainer compared to letting anyone else who might be in the same “should have lost access but didn’t, due to negligence” situation maintain access.
Not a legal defense, but let’s not be too hasty to judge.
I hadn't yet seen it when I wrote this, but 2 is pretty much exactly what Arko says:
> Worried about the possibility of hacked accounts or some sort of social engineering, I took action as the primary on-call engineer to lock down the AWS account and prevent any actions by possible attackers.
They're claiming "no evidence of compromise" based on CloudTrail logs that AWS root could have deleted or modified. They even admit they "Enabled AWS CloudTrail" after regaining control - meaning CloudTrail wasn't running during the compromise window.
You cannot verify supply chain integrity from logs on a system where root was compromised, and you definitely can't verify it when the logs didn't exist (they enabled them during remediation?).
So basically, somebody correct me here if I'm wrong but ... Every gem published Sept 19-30 is suspect. Production Ruby applications running code from that window have no way to verify it wasn't backdoored. The correct response is to freeze publishing, rebuild from scratch (including re-publishing any packages published at the time? Ugh I don't even know how to do this! ) , and verify against offline backups. Instead they rotated passwords and called it done.
The impression I have reading this is that they're going out of their way to make it clear they believe it was him, but aren't naming him because doing so would be accusing him of a criminal act.
We still do not know the critical details of how (and when) he stored the root password he copied out of their password manager (encrypted in his own password manager? on his pwned laptop? in dropbox? we'll never know!) therefore the whole chain of custody is still broken.
Deleted Comment
* S3 object reads/writes (GetObject, PutObject) - these are "data events" requiring explicit configuration[0]
* SSH/RDP to EC2 instances - CloudTrail only captures AWS API calls, not OS-level activity[1]
With root access for 11 days, someone could modify gem files in S3, backdoor packages, SSH into build servers - none of it would appear in the logs they reviewed. Correct?
[0] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/l...
[1] https://repost.aws/questions/QUVsPRWwclS0KbWOYXvSla3w/cloud-...
https://www.reddit.com/r/ruby/comments/1o2bxol/comment/ninly...
>> Why did Joel give so little time of advance notice before publishing his post revealing Andre’s production access? That struck me as irresponsible disclosure, but I may have missed something.
> I decided to publish when I did because I knew that Ruby Central had been informed and I wanted the world to be informed about how sloppy Ruby Central were with security, despite their security posturing as an excuse to take over open source projects.
> What I revealed changed nothing about Ruby Central’s security, since André had access whether I revealed that he did or not. When you have security information that impacts lots of people, you publish it so they can take precautions. That is responsible disclosure.
no single person should have Github owner + AWS root password for a major language's package manager and ecosystem just sitting around on their laptop while they fly around to different conferences (as Andre seems to have done while showing off he still had the login to rubygem's AWS root account while in Japan)
Organizations are also useful because you can attach SCPs to your accounts that deny broad classes of activities even to the root user.
How can they ensure that nobody else did any tampering?
It seems RubyCentral did not think this through completely.
this is the problem when you fire all the maintainers who do anything
I have been waiting to hear if there would be any civil action on it since it's not at all clear they had any rights to do most of what they did.
You can enable persistent storage of trails, but you can always access 90 days of events regardless of whether that is enabled.
https://andre.arko.net/2025/10/09/the-rubygems-security-inci...
Literally all we've heard so far is from the other side...
> If they have factual evidence, they need to bring the matter to a court
I'd be surprised if they aren't. This post feels very much like the amount of disclosure a lawyer would recommend to reassure stakeholders.
> rules of engagement that doesn't put rich corporations atop the whole ecosystem
Right now the only thing stopping us all from being held hostage by rogue maintainers is a rich corporation.
> 1. While Ruby Central correctly removed access to shared credentials through its enterprise password manager prior to the incident, our staff did not consider the possibility that this credential may have been copied or exfiltrated to other password managers outside of Ruby Central’s visibility or control.
> 2. Ruby Central failed to rotate the AWS root account credentials (password and MFA) after the departure of personnel with access to the shared vault.
Something they also failed to consider, reading between the lines.
"We collect information related to web traffic such as IP addresses and geolocation data for security-relevant events and to analyze how and where RubyGems.org is used."
(https://rubygems.org/policies/privacy)
"We may share aggregate or de-identified information with third parties for research, marketing, analytics, and other purposes, provided such information does not identify a particular individual."
(https://rubycentral.org/privacy-notice/)
Deleted Comment
WTF. This is the same guy that is launched gems.coop, a competing index for Ruby gems recently.
On the other hand, RubyCentral actions were truly incompetent, I don’t know anymore who is worse
Unfortunately, many enterprises follow the poor practice of storing shared credentials in a shared password manager without rotating them when an employee with prior access leaves the company.
Any part of this narrative could be false, but I don't see a way to read it and take it as true where Arko's actions would be OK.