Author here. I've been thinking about how AI coding changes the relationship between engineers and the code they ship.
If a large portion of the implementation is generated, ownership can't really mean "I wrote this" anymore. It starts to look more like stewardship of a system: understanding how it behaves, watching how it fails, and designing it so mistakes are contained and recoverable.
That's a fair question. Mozilla is a large organization and I don't work on Firefox itself, so the blog post wasn't meant as commentary on that specific change.
That said, the Firefox policy actually reflects something similar to what I'm describing in the post: even if AI tools are used to generate code, a human must ultimately take responsibility for what gets merged. AI can't own a change or be accountable for it.
Curious how others are thinking about this shift.