"An email is your copy, and the sender can’t revise it later."
Sort of. They can't change plain text, but modern emails often include vast swaths of remote content. When you open the message, it retrieves the relevant assets directly from whoever sent the email. That remote content is not permanently stored. It's cached for a bit and will not be re-used if the email is opened months or years later.
If those assets disappear or are changed, there's very little any email provider can do about that.
Gmail already edits mail after delivery. It now supports "dynamic emails" that senders can use: if you get a notification about something in Google Docs, the message in your inbox keeps updating as comments are added or modified in the document.
Microsoft has this now too with Loop components. Any plain text you put around the component doesn't change, but the Loop component itself is a live document and will update in the email client whenever the remote document changes.
I don't think that's really the case, is it? At least, not in any formally-specified way. Modern email clients will extract metadata for things like airline reservations, shipping trackers, ICS calendar invites, etc, and give you live tiles specific to that time-sensitive info, but it's very clearly supplementary and at least in GMail none of it is pretending to be part of the message itself.
The provider could create a snapshot at receive and/or open (fetching these potentially mutable asset dependencies within a message), similar to what https://github.com/karakeep-app/karakeep and https://github.com/gildas-lormeau/SingleFile do with url bookmarks, and attach it (or otherwise associate it) to the message. Optional of course.
The benefit of this is senders couldn't treat it as a read receipt, because the provider can state "Our infra performs this operation for the user for immutability purposes" similar to other email operations that proxy these requests for privacy purposes.
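A minimal sketch of that snapshot idea, assuming the provider proxies the fetches itself: rewrite each remote `<img src="...">` into a `data:` URI so the stored copy no longer depends on the sender's servers. The regex-based rewriting and the `fetch` callback are illustrative simplifications, not any provider's actual implementation.

```python
import base64
import re
from typing import Callable

def snapshot_html(html: str, fetch: Callable[[str], tuple[bytes, str]]) -> str:
    """Inline every remote <img src="..."> as a data: URI so the message
    no longer depends on the sender's servers. `fetch` returns (bytes, mime)."""
    def inline(match: re.Match) -> str:
        url = match.group(1)
        try:
            body, mime = fetch(url)
        except Exception:
            return match.group(0)  # asset already gone: keep the original URL
        b64 = base64.b64encode(body).decode("ascii")
        return f'src="data:{mime};base64,{b64}"'
    return re.sub(r'src="(https?://[^"]+)"', inline, html)

# Stubbed fetcher for illustration; a real provider would issue the request
# from its own infrastructure, so it can't serve as a read receipt.
fake = lambda url: (b"\x89PNG\r\n\x1a\n", "image/png")
print(snapshot_html('<img src="https://example.com/pixel.png">', fake))
```

Because every snapshot request comes from the provider's infrastructure at receive time, the sender learns nothing about when (or whether) the user actually opened the message.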
When it's a silly marketing email - sure. But you'd be surprised how hard you need to work as a sender to ensure that your content will render correctly if your business is actually to deliver information via email. Remote content is ignored by default by almost all modern email clients (since developers got sneaky and started using it for tracking), so a good email with rich content usually embeds all that content into a multi-part message and leverages static styling rules to provide as much formatting as possible.
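The multi-part embedding described above can be sketched with Python's standard library: a plain-text fallback, an HTML alternative, and the image carried inside the message via a `cid:` reference instead of a remote URL. The addresses and image bytes are placeholders.

```python
from email.message import EmailMessage
from email.utils import make_msgid

# A self-contained message: text part, HTML part, and the image embedded
# in the message itself (cid: reference) instead of fetched remotely.
msg = EmailMessage()
msg["Subject"] = "Quarterly report"            # illustrative headers
msg["From"] = "sender@example.com"
msg["To"] = "reader@example.com"
msg.set_content("Plain-text fallback for clients that ignore HTML.")

cid = make_msgid()                              # e.g. <...@host>
msg.add_alternative(
    f'<p>Chart below:</p><img src="cid:{cid[1:-1]}">', subtype="html"
)
png = b"\x89PNG\r\n\x1a\n"                      # stand-in image bytes
# Attach the image to the HTML part, turning it into multipart/related.
msg.get_payload()[1].add_related(png, maintype="image", subtype="png", cid=cid)
```

The result renders identically years later with remote loading disabled, which is exactly the immutability property plain remote assets lack.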
This isn't entirely true. While HTML email can reference remote assets and download embedded images, it doesn't necessarily retrieve them from the person who sent it.
It could be anywhere, which is another knock against HTML email.
Which is why text-only email is still king, and still used in a lot of places.
If this were correct, you wouldn't be able to read those messages with remote content loading disabled or when in airplane mode. It's pretty uncommon for me to get messages where that's the case, and those are almost always marketing spam, so, as they say, nothing of value is lost.
Apple’s private loading feature also shows how that could be fixed: the mail server can retrieve the referenced content once and save it so you’d always know what was served at the time the message was sent.
Buried lede: Fastmail is using AI-generated code / words / decision-making systems, just like everyone else and following the same meaningless "principles" as everyone else.
> For our staff, we encourage understanding the tools that exist in the world, and how to use them safely. Our policy makes it clear that any use of tools, including tools with AI in them, must follow clear privacy-preserving principles:
Data Protection: All data protection, confidentiality, and privacy policies must be followed (our vendors for things like anti-abuse and support are moving towards using AI for translation, categorization, abuse detection – and we are ensuring that their policies continue to provide protection for our customers)
Accountability for work: Any AI generated writing or code must be reviewed and understood by a human being, and go through our regular second-set-of-eyes processes before being used
Bias awareness: Actively look for biases or hallucinations in AI output
Human authority: Always have a path for appeal to a human from any decision that is made by automated tools
Until I retired as a professor, I used Thunderbird and the GPG plug-in to sign emails. That makes them immutable no matter who hosts the email server you use. I encrypted the emails holding grades, if the recipient said they were able to decrypt. Setup was non-trivial but very doable. I also used (and still use) a plug-in that clearly shows if any email fails DKIM or SPF (I think DMARC too).
>In a world where there’s enough AI capability to process the entire web and rewrite every page to remove something, the cost of “changing history” is much reduced, so we can expect more of it.
I gotta be honest, this scenario is not a concern that impacts my choice of email provider.
The immutability of documentation tech matters more in a world with AI.
The cameras used to document "news" will need to be watermarked, fingerprinted and authenticated, like what Canon and Nikon are already doing (and which AFP has already adopted).
It may have seemed gimmicky at first, but in a year or two, you'll probably only be able to trust visuals from companies that do this (wire agencies like AFP, AP and Reuters are heavily disincentivised to create fake news anyway but that's another topic).
At a certain level, I imagine social media apps will also encourage direct camera-to-post for documentation/videos of reality, since this will be the only end-to-end method to verify an image was created unaltered. I can imagine a world where, if you film a protest through the Instagram app, you'd get some kind of "this is real" badge on it, whereas if you upload a video, it gets treated as "could be AI" like 99% of all future content.
The problem with this approach is that it is easily bypassed. Simply point your camera at a high-quality monitor playing an AI-generated video, and there you go: an authenticated AI video. In the future, video evidence is going to be as convincing as it was for 99.9999...% of human history. We survived without it in the past. We'll survive without it in the future.
I doubt it will be that easy to bypass. A fake would still have to withstand pixel-level analysis, on the level of methods that already detect tampering in regular video. For one thing, that would have to be a very high-quality monitor indeed to leave no detectable trace of, e.g., moiré patterns.
Interestingly, I think Apple has inadvertently positioned themselves very well to be able to authenticate various activity as being done by an actual human. What if anything they decide to do with that capability remains to be seen.
I think it’s already irrelevant: cryptographic proofs of video evidence is difficult to communicate to audiences while watermarks will be learned by AI as trusted and injected into AI videos anyway. Also, in between the lens and your eyeball is usually a pipeline of editing applied anyway so either the cryptographic signature ends up with every layer signing the modifications applied + the previous layer or you stack watermarks. But ultimately the original problem is how to communicate the cryptographic chain validity.
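The "every layer signs the modifications applied plus the previous layer" idea is essentially a hash chain. A toy sketch, where `sign` is a deliberate stand-in for a real asymmetric signature (a hardware camera would use something like Ed25519 in a secure element, not a keyed hash):

```python
import hashlib
import json

def sign(secret: bytes, payload: bytes) -> str:
    # Stand-in for a real asymmetric signature keyed to the camera or
    # editing tool; illustrative only, not a secure construction.
    return hashlib.sha256(secret + payload).hexdigest()

def add_layer(chain: list, secret: bytes, edit: str, media_hash: str) -> list:
    """Append a provenance record that commits to the previous signature."""
    prev = chain[-1]["sig"] if chain else ""
    record = {"edit": edit, "media": media_hash, "prev": prev}
    record["sig"] = sign(secret, json.dumps(record, sort_keys=True).encode())
    return chain + [record]

chain = add_layer([], b"camera-key", "capture", "sha256-of-raw-frame")
chain = add_layer(chain, b"editor-key", "crop+recompress", "sha256-of-edit")
# Each record commits to the previous signature, so dropping or reordering
# a step invalidates everything after it.
```

The hard part the comment identifies remains untouched by the mechanism itself: surfacing "this chain verifies" to a lay audience in a way they will actually trust.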
By the time the video reaches the end user (i.e. on TikTok and the like), it will have been re-compressed, edited, meme-ed, and voiced over a dozen times. So I'm not sure how you preserve trust in that chain.
Also, one thing HNers get fundamentally wrong is assuming that anybody cares about trust/authenticity. And I don't see what's so special about photo/video.
One of the most common forms of submissions on Reddit/Twitter is an image with text, or a screenshot of a tweet, or a screenshot of a headline that makes a claim, and everyone takes it dead seriously.
Almost nobody is going "hmm let me look this up first to see if it even exists or accurately represents the facts".
So if all you need is an image of text for people to believe it, what does it even matter if you have this sophisticated system where you require photos to be signed by camera hardware or whatever? You aren't even putting a dent in how bullshit spreads.
I imagine a new type of bluetick would emerge. There will always be those who can't distinguish between a tick emoji next to a username and the actual thing, but that's a UX problem. Something shot and verified on-app could get a special, clickable tick on it when it's shared.
This narrows the possible bad actors down to just one: the platform itself.
In any case, the audience will have to learn new ways to "trust", and tech alone won't be the solution. But I've less hope in people and more hope in new social contracts.
I think LIDAR sensors would be useful to verify depth information in an image, on a side note.
You don't, the only reliable source will be the source that has signed the content. It basically takes us back to the times when the only footage available was curated and broadcast by TV.
I don't think this would accomplish anything. For one thing, quite a bit of misinformation these days comes from official government sources that can just compel the manufacturers to turn over authentic signing keys. Remember that Trump just posted an AI-generated video of himself shilling medbeds; when it was pointed out as AI-generated, he deleted it. If Truth Social checked the cryptographic signature, he'd order his staff to sign it. They wouldn't dare say no.
The next flaw is that cameras are happy to record screens playing AI-generated videos and mark them as authentic. Perhaps you can tell today because the screen pixels aren't perfectly 1:1 mapped to the image sensor pixels, but as soon as elections depend on being able to do that, those screens will exist.
People are saying to add LIDAR to prevent this "record the screen" hack, but a mirror over the LIDAR sensor and me sitting at a desk motionless looks to LIDAR exactly like the world leader I'm deepfaking sitting motionless at a desk. People are not using AI to generate amazing action shots.
At the end of the day, people will have to take some personal responsibility. Migrants probably aren't killing and eating pets. Pets taste terrible and grocery stores that you can just walk into and steal whatever you want exist. There isn't a bed that can cure any disease. If someone says they do, even a world leader, test them out on something non-critical. Break off a fingernail and see if the magic bed can regrow it overnight. If not, maybe stick to traditional cancer treatments until there is some clearer evidence.
It’s already possible. See the Stagecraft studio they built for the production of TV series The Mandalorian.
> shooting the series on a stage surrounded by massive LED walls displaying dynamic digital sets, with the ability to react to and manipulate this digital content in real time during live production
https://www.unrealengine.com/fr/blog/forging-new-paths-for-f...
> The StageCraft process involves shooting live-action actors and sets surrounded by large, very high-definition LED video walls. These walls display computer-generated imagery backdrops, once traditionally composited primarily in post-production after shooting with chroma key screens. These facilities are known as "volumes". When shooting, the production team is able to realign the background instantly based on moving camera positions. The entire CGI background can be manipulated in real-time.
https://en.wikipedia.org/wiki/StageCraft
Your own emails are immutable, if you trust nobody's modified your copy.
But proving to others that an email hasn't been modified is a more difficult task. As I understand it, you'd need to retain DKIM keys for the signing server, to check that historical DKIM signatures verify correctly and the old message was not forged or altered.
Are DKIM signing keys issued in some kind of Certificate Transparency log, where you can verify whether a particular DKIM key existed for a particular domain in the past, in order to do this in general?
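Half of that historical check is purely mechanical and needs no DNS at all: recomputing the `bh=` body hash that the signature covers. A minimal sketch using "simple" body canonicalization as described in RFC 6376 (verifying the signature itself would additionally need the domain's public key as it existed at signing time):

```python
import base64
import hashlib

def dkim_body_hash(body: bytes) -> str:
    # "simple" body canonicalization (RFC 6376 sec. 3.4.3): reduce trailing
    # empty lines to a single CRLF, then SHA-256 and base64-encode. This is
    # the bh= value a verifier recomputes and compares against the header.
    while body.endswith(b"\r\n\r\n"):
        body = body[:-2]
    if not body.endswith(b"\r\n"):
        body += b"\r\n"
    return base64.b64encode(hashlib.sha256(body).digest()).decode("ascii")

# Trailing blank lines don't change the hash, by design:
print(dkim_body_hash(b"Hello\r\n\r\n\r\n"))  # same value as for b"Hello\r\n"
```

A matching `bh=` only proves the body is intact; forgery resistance still hinges on the signature over the headers and on knowing the key was really the domain's at the time, which is exactly what a transparency log would settle.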
They at least were not historically archived. This came up during the Hunter Biden laptop investigation where people were able to verify some of the messages only because the Gmail key was archived in many places because that service is so popular. I’m not aware of anyone making a comprehensive archive but I’d be unsurprised if someone did based on news like that.
People are trying to do the opposite: publish DKIM private keys regularly so everyone knows that old DKIM signatures can be forged, so that they can't be used against you.
Interesting take. I have decades worth of email archived, so it does ring true for me at least. I doubt anything in there is more interesting to Big Brother but who knows?
Email is only part of my electronic memory. Over time it's become more important to me to maintain my own copies of my memory on devices I control. The forms and formats are many, and they all need a commitment to maintain control. So yes, use email over more mutable media. And avoid remotely mutable extensions to emails. And keep a local copy of your email. And maintain date-stamped archives of stuff you work on, and keep your codebases easy to run from any point in their history, and write good notes. Constant vigilance.
Absolutely bonkers.
"Because of the dynamic nature of AMP messages, the content displayed in Gmail messages can change as time passes." https://support.google.com/a/answer/9709409?hl=en
And on the one hand, it's cool as hell to see your email update itself to show tracking progress
On the other hand, just send me a new email. It's fine, I promise.
A lot depends on watermarking at source and the social media platform using that to make a clickable/hard watermark
This is a bigger threat than phony AI videos.
On Instagram? The website owned by that guy who loves AI slop and wants to fill your feed with it? That Instagram? Yeah, doesn’t seem likely.
https://techcrunch.com/2025/09/25/meta-launches-vibes-a-shor...
https://fortune.com/2024/10/30/mark-zuckerberg-ai-generated-...
https://futurism.com/zuckerberg-lonely-friends-create-ai
https://github.com/robertdavidgraham/hunter-dkim#but-gmails-...
EDIT: this one exists but is incomplete: https://archive.prove.email/about