He scans one line at a time with a mirror into a photomultiplier tube which can detect single-photon events. This is captured continually at 2 GSample/s (2 billion samples per second: 2B FPS) with an oscilloscope and a clever hack.
The laser is actually pulsing at 30 kHz, and the oscilloscope capture is synchronized to the laser pulse.
So we consider each laser pulse a single event in a single pixel (even though the mirror is rotating continuously). So he runs the experiment 30,000 times per second, each one recording a single pixel at 2B FPS for a few microseconds. Each pixel-sized video is then tiled into a cohesive image.
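A minimal sketch of that tiling step, assuming a hypothetical data layout where each pixel's scope trace is stored as a list of intensity samples; one output video frame at time index t is just the t-th sample taken from every per-pixel trace:

```python
# Composite many 1-pixel "videos" (scope traces) into full video frames.
# Data layout is an assumption: traces[y][x] is that pixel's intensity-vs-time list.

def composite_frames(traces, n_samples):
    """Return one 2-D frame per time sample, built from per-pixel traces."""
    frames = []
    for t in range(n_samples):
        frame = [[traces[y][x][t] for x in range(len(traces[0]))]
                 for y in range(len(traces))]
        frames.append(frame)
    return frames

# Tiny demo: a 2x2 "sensor" with 3 time samples per pixel.
traces = [[[0, 1, 2], [3, 4, 5]],
          [[6, 7, 8], [9, 10, 11]]]
frames = composite_frames(traces, 3)
print(frames[1])  # frame at t=1: [[1, 4], [7, 10]]
```

The real version would do the same thing over 921,600 traces of ~2000 samples each.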
Good explanation. One detail, though: it is one pixel at a time, not one line at a time. Basically he does the whole sequence for one pixel, adjusts the mirror to the next one, and does it again. The explanation is around the 8-minute mark.
Just want to make it clear that in any one instant, only one pixel is being recorded. The mirror moves continuously across a horizontal sweep and a certain arc of the mirror's sweep is localized to a pixel in the video encoding sequence. A new laser pulse is triggered when one pixel of arc has been swept, recording a whole new complete mirror bounce sequence for each pixel sequentially. He has an additional video explaining the timing / triggering / synchronization circuit in more depth: https://youtu.be/WLJuC0q84IQ
One piece I'd like to see more clarification on is whether he's doing multiple samples per pixel (like with ray tracing). For his 1280x720 video, that's around 900k pixels, so at 30 kHz it would take around 30 s to record one of these videos if he were doing one sample per pixel. But in theory he could run this for much longer and get a less noisy image.
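The back-of-the-envelope math above, using the numbers from this thread (the exact averaging factor is a made-up illustration):

```python
# One pixel per laser pulse: how long does one composited video take?
pixels = 1280 * 720            # 921,600 pixels per frame
pulse_rate_hz = 30_000         # assumed: one pixel recorded per 30 kHz pulse

seconds_per_frame = pixels / pulse_rate_hz
print(f"{pixels} pixels -> {seconds_per_frame:.1f} s per video")  # ~30.7 s

# Averaging N pulses per pixel to beat down shot noise scales linearly:
n_samples_per_pixel = 16  # hypothetical averaging factor
print(f"{n_samples_per_pixel}x averaging -> "
      f"{seconds_per_frame * n_samples_per_pixel / 60:.1f} min per video")
```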
I find it interesting that a project like this would easily be a PhD paper, but nowadays Youtubers do it just for the fun of it.
And the reason it matters that this is a single pixel at two billion times per second is that we can hypothetically stack many of these assemblies on top of each other and get video of a single event that is not repeatable.
Yes and no. The mirror is continually scanning horizontally because the mechanics simply can't position that accurately. This is fine, really, because the amount the mirror moves in 33 µs is essentially zero. He's still capturing single pixels, just while the mirror is moving. Any artifacts from the motion are probably entirely drowned out by the noise in the rest of the system.
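To put a rough number on "essentially zero" (the ~1 µs capture window is an assumption based on the sampling description elsewhere in the thread):

```python
# If the mirror sweeps one pixel of arc per 30 kHz pulse period, how much
# does it move during the actual ~1 us capture window of each pulse?
pulse_period_us = 1e6 / 30_000   # ~33.3 us between pulses = one pixel of arc
capture_window_us = 1.0          # assumed scope record length per pulse

motion_fraction = capture_window_us / pulse_period_us
print(f"mirror moves ~{motion_fraction:.1%} of a pixel during each capture")  # ~3.0%
```

A few percent of a pixel of blur, which plausibly vanishes into the system noise.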
The author explained that he originally attempted to pulse the laser at 30 kHz, but for the actual experiment used a slower rate of 3 kHz. The rate at which the digital data can be read out from the oscilloscope to the computer seems to be the main bottleneck limiting the throughput of the system.
Overall, recording one frame took approximately an hour.
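Working backwards from the one-hour figure and the 3 kHz rate mentioned above, the readout overhead dominates (the split is an estimate, not from the video):

```python
# One ~921,600-pixel frame in about an hour at 3 kHz: the laser isn't the
# bottleneck, the per-pixel scope-to-PC readout is.
pixels = 1280 * 720
pulse_period_ms = 1000 / 3000    # ~0.33 ms of laser time per pixel
total_time_ms = 60 * 60 * 1000   # one hour

time_per_pixel_ms = total_time_ms / pixels
print(f"~{time_per_pixel_ms:.1f} ms spent per pixel "
      f"vs {pulse_period_ms:.2f} ms of pulse period")
# roughly 3.9 ms per pixel, so ~90% of the hour goes to readout overhead
```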
Thanks for the explanation. Honestly, your explanation is better than the entire video. I watched it in full and got really confused. I completely missed the part where he said the light is pulsing at 30 kHz and was really puzzled at how he is able to move the mirror so fast to cover the entire scene.
Huh. I watched a lot, but not all, of the video, and I thought he made it clear early on that he was stitching together 1px videos & repeating the event for each pixel (about a million times for that 720p result)
We already know why, and the double-slit experiment has been performed by thousands if not millions of students and researchers.
But as sibling said, this is still a measurement and will collapse the quantum system. You can't use this to peek under the hood and look at the quantum mechanics.
The author uses "real time sampling" to acquire the evolution of light intensity for one pixel at a 2 GSps rate. The signal is collected for approximately one microsecond at each firing of the laser, and the corresponding digital data is sent from the oscilloscope to the computer.
"Equivalent time sampling" is a different technique which involves sliding the sampling point across the signal to rebuild the complete picture over multiple repetitions of the signal.
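A toy simulation of the contrast between the two, assuming an idealized perfectly repetitive signal (which is exactly the condition equivalent-time sampling requires):

```python
import math

# Real-time sampling: many samples from one trigger.
# Equivalent-time sampling: one sample per trigger, sliding the sampling
# delay a bit further on each repetition of the signal.

def signal(t):
    """A repetitive 1 MHz test waveform (stand-in for the repeated pulse response)."""
    return math.sin(2 * math.pi * 1e6 * t)

period = 1e-6          # signal repeats every 1 us
n_points = 100
step = period / n_points

# One sample per repetition, offset by an ever-increasing delay:
reconstruction = [signal(k * period + k * step) for k in range(n_points)]

# Because the signal is periodic, this equals densely sampling one period:
direct = [signal(k * step) for k in range(n_points)]
assert all(abs(a - b) < 1e-9 for a, b in zip(reconstruction, direct))
print("equivalent-time reconstruction matches a dense single-shot capture")
```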
This video brought back warm and fuzzy memories from my other life. When I was a scientist back in the USSR, my research subject required measuring ridiculously low amounts of light, and I used a photomultiplier tube in photon-counting mode for that. I needed a current preamp that could amplify nanosecond-long pulses and concocted one out of gallium-arsenide logic elements pushed to work in a linear mode. The tube was cooled by Peltier elements and the data fed to a remote Soviet relative of a Wang computer [0].
It’s super cool that AlphaPhoenix is able to get comparable results in his garage. These academic versions use huge lab-bench optic setups. They wind up with technically higher-quality results, but AlphaPhoenix’s video is more compelling.
At the point where the light is reflected off the mirror, it is unfocused; those galvos look too small. But a pair of larger mirrors in the same arrangement could work.
The triggering scheme is completely brilliant. One of those cases where not knowing too much made it possible, because someone who does analog debug would never do that (because they would have a $50k scope!).
Honestly I think if we each wrote a nice personal letter to Keysight they’d probably gift him one in exchange for the YouTube publicity. Several other electrical engineers on YT get free $20-50k Keysight scopes not just for themselves, but once a year or so to give away to their audience members.
And yes, this person could make use of it. His videos are among the highest quality science explainers - he’s like the 3B1B of first principles in physics. Truly a savant at creating experiments that demonstrate fundamental phenomena. Seriously check out any of his videos. He made one that weighs an airplane overhead. His videos on speed of electricity and speed of motion and ohms law are fantastic.
Hmm, it's a clever hack, but you could just use an oscilloscope with an "External trigger" input, like most of the older Rigols. That would let you use the full sample rate without needing to trigger from CH2.
Even modern entry-level Rigol scopes have external trigger inputs. I've got like one step up from the cheapest model and it has an external trigger input. I think the idea there is that you'd use a bunch of these scopes for QA on an assembly line. There's a bunch of pass/fail features I've never once touched, too.
The view from one end of a laser going between two mirrors (timestamp 1:37) is a fairly good demonstration of the camera having to wait for light to get to it.
The video is definitely more interesting than 28 fps but it's also not really 2B fps.
It captures two billion pixels per second. Essentially he captures the same scene many times (presumably 921,600 times to form a full 720p picture), watching a single pixel at a time, and composites all the captures together to form frames.
I suppose that for entirely deterministic and repeatable scenes, where you also don't care too much about noise, and if you have infinite time on your hands to capture 1 ms of footage, then yes, you can effectively visualize 2B frames per second! But not capture it.
Nah, it's definitely 2B fps, the frames are just 1x1 wide and a lot of the interesting output comes from the careful synchronization, camera pointing, and compositing of nearly a million 1x1 videos of effectively identical events.
And there are 1 million milliseconds every ~15 minutes. It doesn't take that long to capture all the angles you need so long as you have an automated setup for recreating the scene you are videoing.
Others say that you're wrong, but I think you're describing it approximately perfectly.
As you say: It does capture two billion pixels per second. It does watch a single pixel at a time, 921,600 times. And these pixels [each individually recorded at 2B FPS] are ultimately used to create a composition that embodies a 1280x720 video.
That's all correct.
And your summary is also correct: It definitely does not really capture 2 billion frames per second.
Unless we're severely distorting the definition of a "video frame" to also include "one image in a series of images that can be as small as one pixel," then accomplishing 2B entire frames per second is madness with today's technology.
As stated at ~3:43 in the video: "Basically, if you want to record video at 2 billion frames per second, you pretty much can't. Not at any reasonable resolution, with any reasonably-accessible consumer technology, for any remotely reasonable price. Which is why setups like this kind of cheat."
You appear to be in complete agreement with AlphaPhoenix, the presenter of this very finely-produced video.
I would probably call it 2 billion fps* (with an asterisk), with a footnote to explain how it’s different than video is typically captured. Especially the fact that the resulting video is almost a million discrete events composited together as opposed to a single event. All of which the video is transparent about, and the methodology actually makes the result more interesting IMO.
I would say that everyone - you, other commenters disagreeing with you, and the video - are all technically correct here, and it really comes down to semantics and how we want to define fps. Not really necessary to debate in my opinion since the video clearly describes their methodology, but useful to call out the differences on HN where people frequently go straight to the comments before watching the video.
Each pixel was captured at 2 billion frames per second, even if technically they were separate events. Why not call it (FPS / pixels) frames per second?
A frame is by definition flexible in how many pixels tall or how many pixels wide it is, and there is nothing in the definition that says it can't be 1x1.
I thought his method of multiplexing the single channel was very smart. I guess it's more common on 2-channel or high-end 4-channel scopes to have a dedicated trigger input, which I've checked this one doesn't have. That said, there are digital inputs that could've been used, presumably fed from whatever was controlling the laser.
I was confused by that part of the video exactly because I wondered why he wasn’t using the trigger input. Or, would it normally be possible to use a different channel as the trigger for the first channel?
I guess I'm used to it. My main scope is an SDS1204 which doesn't have one (and when I inherited it the digital channels were reportedly blown up) despite being fairly capable for its combination of age and price
The downside is it only works with repetitive signals.
https://www.tek.com/en/documents/application-note/real-time-...
OMG this was back in 1979-1981.
0. - https://ru.wikipedia.org/wiki/%D0%AD%D0%BB%D0%B5%D0%BA%D1%82...
He mentions this as the inspiration in his previous video (https://youtu.be/IaXdSGkh8Ww).
Some possible improvements.
1. Replace the big heavy mirror with a pair of laser galvos. They're literally designed for this and will be much faster and more precise.
Example:
https://miyalaser.com/products/miya-40k-high-performance-las...
2. Increase the precision of the master clock. There's some time smearing along the beam. It's not that hard to make clocks with nanosecond resolution, and picosecond resolution is possible, although it's a bit of a project.
3. As others have said, time-averaging multiple runs would reduce the background noise.
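On the clock-precision point: light covers about 30 cm per nanosecond, so any trigger jitter smears the pulse along the beam by c times the jitter. A quick illustration (the jitter values are arbitrary examples):

```python
# Spatial smear along the beam caused by trigger/clock jitter: smear = c * jitter.
C_M_PER_S = 299_792_458  # speed of light in vacuum

for jitter_s, label in [(1e-9, "1 ns"), (100e-12, "100 ps"), (1e-12, "1 ps")]:
    smear_mm = C_M_PER_S * jitter_s * 1000
    print(f"{label:>7} of jitter -> ~{smear_mm:.1f} mm of smear along the beam")
```

So nanosecond-level jitter blurs the pulse over roughly 30 cm, while picosecond-level timing gets the blur down to well under a millimeter.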