What does that accomplish? You can just read the web page as-is...
A friend of mine made a similar animated-GIF-style captcha a few years ago, based on multiple scrolling horizontal bars that would each reveal their portion of the underlying image, letters included, and made a (friendly) bet that it should be pretty hard to solve.
Grabbing the entire set of frames, greyscaling them, averaging them all, and then applying a few minor fixups like thresholding and contrast adjustment worked easily enough, since the letters were revealed in more frames than not (though I don't think the difficulty would change much if that were different). After that the image was pretty amenable to character recognition.
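In case it's useful, here is roughly that pipeline as a sketch, assuming Pillow + numpy; the file name and the 0.5 threshold are made up and would need tuning per captcha:

    import numpy as np
    from PIL import Image, ImageSequence

    # Hypothetical input file; the point is just average-then-threshold.
    frames = [np.asarray(f.convert("L"), dtype=np.float64)
              for f in ImageSequence.Iterator(Image.open("captcha.gif"))]

    avg = sum(frames) / len(frames)                      # letters appear in most frames, so they dominate the mean
    norm = (avg - avg.min()) / (avg.max() - avg.min())   # contrast stretch
    binary = (norm > 0.5).astype(np.uint8) * 255         # crude threshold

    Image.fromarray(binary).save("flattened.png")        # then hand this to OCR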
That's reminiscent of a (possibly apocryphal?) method I once read about to get "clean" images of normally crowded public places - take multiple photos over time from a fixed position, then take the per-pixel median. Never had the opportunity to try it myself, but I thought it sounded plausible as a way to get rid of transient "noise" from an otherwise static image.
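The median trick is nearly a one-liner with numpy, for what it's worth; a sketch assuming a fixed camera and made-up file names:

    import numpy as np
    from PIL import Image

    # Hypothetical inputs: several photos of the same scene from a tripod.
    stack = np.stack([np.asarray(Image.open(f"scene_{i:02d}.jpg")) for i in range(10)])

    # Per-pixel median: anything present in only a few frames (pedestrians, cars)
    # gets outvoted by the static background.
    clean = np.median(stack, axis=0).astype(np.uint8)
    Image.fromarray(clean).save("empty_scene.jpg")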
Out of sheer curiosity, I put three screenshots of the noise into Claude Opus 4.1, Gemini 2.5 Pro, and GPT 5, all with thinking enabled and the prompt “what does the screen say?”.
Opus 4.1 flagged the message due to prompt injection risk, Gemini made a bad guess, and GPT 5 got it by using the code interpreter.
I thought it was amusing. Claude’s (non-)response got me thinking - first, it was very on brand; second, the content filter was right - pasting images of seemingly random noise into a sensitive environment is a terrible idea.
This one is actually more sophisticated because it doesn't rely on scrolling pixels like the OP. The object doesn't just disappear in screenshots, it also disappears when the animation stops moving! So you can't actually display text that stands still, like the "hello" in the OP.
Yep. He tries text in another video by flipping pixels for one or more frames, so the words disappear very quickly. Definitely harder to read, especially longer words: https://youtu.be/EDQeArrqRZ4
I'm not sure I follow. Couldn't you display text that stands still by (re)drawing the outline of the text repeatedly? It would essentially be a two-frame animation.
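Something along those lines does seem to work; a toy sketch of the two-frame idea, assuming numpy + Pillow (the text, sizes, and file name are arbitrary):

    import numpy as np
    from PIL import Image, ImageDraw

    w, h = 320, 120

    # Render a text mask with Pillow's built-in font (stands in for "drawing the outline").
    mask_img = Image.new("L", (w, h), 0)
    ImageDraw.Draw(mask_img).text((20, 40), "HELLO", fill=255)
    mask = np.asarray(mask_img) > 0

    frame_a = np.random.randint(0, 2, (h, w), dtype=np.uint8) * 255  # pure noise
    frame_b = frame_a.copy()
    frame_b[mask] ^= 255                                             # flip only the pixels under the text

    # Each frame on its own is indistinguishable from noise; flipping between the two
    # makes the text region twinkle while the background holds still.
    Image.fromarray(frame_a).save(
        "flicker_text.gif", save_all=True,
        append_images=[Image.fromarray(frame_b)], duration=60, loop=0)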
If anybody implements that to anti-screenshot some sensitive data, somebody else will use another phone, a tablet or a camera to record a video of it. Nice idea though.
If I had another camera then yes, that would have been easier. In my case I only had the one mobile device and I don’t think screenshots support long exposure.
Lighten, Screen, Addition, Darken, Multiply, Linear burn, Hard Mix, Difference, Exclusion, Subtract, Grain Extract, Grain Merge, or Luminance.
https://ibb.co/DDQBJDKR
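Most of those modes reduce to simple per-pixel operations, so the same result is reachable without an image editor at all; a rough sketch with Pillow's ImageChops, with made-up file names (which mode looks best depends on the noise):

    from PIL import Image, ImageChops

    # Hypothetical inputs: two screenshots of the same noisy screen.
    a = Image.open("shot1.png").convert("L")
    b = Image.open("shot2.png").convert("L")

    ImageChops.lighter(a, b).save("lighten.png")        # Lighten    = per-pixel max
    ImageChops.darker(a, b).save("darken.png")          # Darken     = per-pixel min
    ImageChops.difference(a, b).save("difference.png")  # Difference = |a - b|
    ImageChops.screen(a, b).save("screen.png")          # Screen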
You actually don't need any image editing skill. Here is a browser-only solution:
1. Take two screenshots.
2. Open these screenshots in two separate tabs in your browser.
3. Switch between tabs very, very quickly (use CTRL-Tab)
Source: tested on Firefox
Are you going to share your two screenshots, and provide those instructions, with others? That seems impractical.
Video recording is a bit less impractical, but there you really need a short looping animation to avoid ballooning the file size. An actual readable screenshot has its advantages...
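For what it's worth, a two-frame looping GIF made from the same two screenshots is about as short as such an animation gets, and it is easy to share; a minimal Pillow sketch with made-up file names:

    from PIL import Image

    # Hypothetical inputs: the two screenshots from the steps above.
    a = Image.open("shot1.png").convert("RGB")
    b = Image.open("shot2.png").convert("RGB")

    # Alternate the two frames forever; persistence of vision does the rest.
    a.save("flicker.gif", save_all=True, append_images=[b], duration=50, loop=0)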
Thank you forever for this; I had always used Ctrl-Page Up/Down for that.
BLIT protection. https://www.infinityplus.co.uk/stories/blit.htm
Only if your rendering libraries are crap.
Here it is in Pixelmator Pro: https://i.moveything.com/299930fb6174.mp4
They even provide the source code for the effect:
https://github.com/brantagames/noise-shader
It reminds me of the mid-1990s video game Magic Carpet.
https://en.wikipedia.org/wiki/Magic_Carpet_(video_game)
This was a pseudo-3D game and on an ordinary display it used perspective to simulate 3D like most games. If you had 3D goggles it could use them, but I didn't.
However, it could do a true 3D display on a 2D monitor using a random-dot stereogram.
https://en.wikipedia.org/wiki/Random_dot_stereogram
If you have depth perception and are able to see RDS autostereograms, then Magic Carpet did an animated one. It was a wholly remarkable effect, but for me at least it was really hard to watch. It felt like it was trying to rotate my eyeballs in their sockets. Very impressive, but essentially unplayable, and I could only watch for a minute or two before I couldn't stand the discomfort any more.
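Not how Magic Carpet did it, obviously, but the static version fits in a handful of lines; a rough numpy sketch of a single-image random-dot stereogram (parameters are arbitrary, and it assumes wall-eyed viewing):

    import numpy as np
    from PIL import Image

    def autostereogram(depth, pattern_width=90, max_shift=25):
        """depth: 2-D array in [0, 1]; larger values appear closer."""
        h, w = depth.shape
        out = np.random.randint(0, 2, (h, w), dtype=np.uint8) * 255
        for y in range(h):
            for x in range(pattern_width, w):
                shift = int(depth[y, x] * max_shift)
                out[y, x] = out[y, x - pattern_width + shift]  # shorter repeat distance reads as closer
        return Image.fromarray(out)

    # Flat background with a raised rectangle floating in front of it.
    depth = np.zeros((300, 400))
    depth[100:200, 150:300] = 0.7
    autostereogram(depth).save("sirds.png")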
Also playable in the browser: https://playclassic.games/games/action-dos-games-online/play...
https://silverspaceship.com/static/
Really clever use of a TV remote as controller.
https://www.youtube.com/watch?v=Bg3RAI8uyVw
The effect is disrupted by introducing rendering artifacts, for instance by watching the video in 144p or, in this case, by zooming out.
I'd love to know the name of this effect, so I can read more about the fMRI studies that make use of it.
What I've found so far:
Random Dot Kinematogram
Perceptual Organization from Motion (video of Flounder camouflage)
https://www.youtube.com/watch?v=2VO10eDIyiE
https://upload.wikimedia.org/wikipedia/en/a/ab/AnyMinuteNow....
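A toy sketch of a random-dot kinematogram along the lines of the first item above, assuming numpy + Pillow: every individual frame is pure noise, and only the coherent motion inside the disc gives the figure away.

    import numpy as np
    from PIL import Image

    h, w, n_frames = 200, 300, 40
    texture = np.random.randint(0, 2, (h, w), dtype=np.uint8) * 255    # noise that drifts inside the shape

    yy, xx = np.mgrid[0:h, 0:w]
    disc = (xx - w // 2) ** 2 + (yy - h // 2) ** 2 < 60 ** 2           # the hidden figure

    frames = []
    for t in range(n_frames):
        frame = np.random.randint(0, 2, (h, w), dtype=np.uint8) * 255  # fresh noise everywhere else
        frame[disc] = np.roll(texture, 3 * t, axis=1)[disc]            # coherently drifting noise inside the disc
        frames.append(Image.fromarray(frame))

    frames[0].save("rdk.gif", save_all=True, append_images=frames[1:], duration=40, loop=0)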
Sometimes friction is enough.
While a screencap image hides the message, a screencap video shows it perfectly well.
On iPhone: screen record, take screenshots every couple of seconds, then overlay the images at 50% transparency (I use Procreate Pocket for this part).
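The overlay step doesn't strictly need a drawing app either; a tiny Pillow sketch of the same 50% blend, with made-up file names:

    from PIL import Image

    # Hypothetical inputs: two frames pulled out of the screen recording.
    a = Image.open("frame1.png").convert("RGB")
    b = Image.open("frame2.png").convert("RGB")

    # A 50% opacity overlay is just a straight average of the two frames.
    Image.blend(a, b, alpha=0.5).save("overlay.png")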