Photonic computing, which aims to accelerate both logic gates and data transfer, is an incredibly broad and exciting field. While a lot of the promise is still in the lab, real advances are being commercialized today.
I've always had the idea that, before jumping to quantum, it would make sense to use photons for as many components as possible instead of the relatively slower, heavier, and much hotter electron.
I don't know enough about computing hardware to know how feasible it is to refactor each component this way, but it is indeed exciting. You could almost imagine such a "photon computer" as a computer which uses little to no energy (at least for the actual computing part), is extremely lightweight due to lightweight components, and never gets hot!
It's a misconception that electrons are slower than photons. In a vacuum? Maybe. But you need a medium to use photons, and in fiber photons travel at roughly 0.5-0.75c.
Electronic signals in copper propagate at somewhere between 0.66c and 0.8c.
The big benefit of photons is that they don't suffer from electrical interference, so you can often get a lot more bandwidth out of an arbitrarily sized photonic medium than an electronic one.
The actual latency of photons vs electrons is generally not relevant.
On a larger scale, the Meta Quest 2 uses a USB cable to plug into the computer so you can play VR games on your PC. The max length of the cable is something like 3 feet over copper. The link cable they sell switches from electric signals over copper to light over fiber, and then back to copper to get around the length limitations.
Not only that, modern CPUs have transistors that switch in 0.1 ns. So even if the new devices got down to femtosecond switching, it would be 100,000x, not 1,000,000x.
And, if they only got to switching in 10 femtoseconds, it would be 10,000x, not 1,000,000x.
You might ask, what's two orders of magnitude between friends? But a job that takes a minute is quite a lot different from one that takes going on two hours.
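Back-of-the-envelope, using the numbers above (a 0.1 ns transistor as the baseline, with 1 fs and 10 fs as the hypothetical optoelectronic switch times), the ratios check out:

```python
# Speedup ratios for hypothetical femtosecond-scale switches vs a 0.1 ns transistor.
baseline = 0.1e-9                    # 0.1 ns switching time, in seconds
for switch_time in (1e-15, 10e-15):  # 1 fs and 10 fs
    speedup = baseline / switch_time
    print(f"{switch_time * 1e15:.0f} fs switch -> {speedup:,.0f}x faster")
# 1 fs switch -> 100,000x faster
# 10 fs switch -> 10,000x faster
```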
Even 0.1 ns is way too slow. A modern silicon CMOS gate will switch in under 10 ps, which is how we can fit 25+ gate delays into a single cycle at >3 GHz. Everyone should remember that CPU frequency is not the same as the frequency at which a single gate can switch. Also keep in mind we are mostly wire-limited anyway, as the resistivity of copper at <50 nm line widths is quite unlike its bulk resistivity and scales super-linearly. This is what prevents us from shrinking wires any further.
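For what it's worth, here is that gate-budget arithmetic as a sketch (the 10 ps gate delay and 3 GHz clock are the rough figures from the comment; real designs spend part of the cycle on wires and flip-flops, hence "25+" rather than the full 33):

```python
# Gate delays that fit in one clock cycle, assuming ~10 ps per CMOS gate at 3 GHz.
gate_delay = 10e-12     # seconds per gate (rough assumption)
clock = 3e9             # Hz
cycle_time = 1 / clock  # ~333 ps
print(f"cycle time: {cycle_time * 1e12:.0f} ps")
print(f"gate delays per cycle: {cycle_time / gate_delay:.0f}")
# cycle time: 333 ps
# gate delays per cycle: 33
```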
> But a job that takes a minute is quite a lot different from one that takes going on two hours
Though in your terms the promise would be that a job that takes going on two hours finishes in a second. Feasibility aside, nobody would cry over those two orders of magnitude that "could not make it".
You can multiplex optical signals, though I'm not sure exactly how that would be harnessed. If nothing else, at least a few times more possible throughput?
> > The team says that other technological hurdles would arise long before optoelectronic devices reach the realm of PHz.
Yup... just take memory access: even if it were "instant", RAM is so "far away" physically that the transmission delay alone would be many multiples of the clock. Currently this is a pain for CPU manufacturers to implement correctly, but at least with caches you don't run out of data to calculate while waiting for something new from RAM.
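Rough numbers for the "RAM is far away" point; the ~15 cm path and ~0.6c trace speed below are assumptions for illustration, not measurements:

```python
# Round-trip flight time to DRAM, expressed in clock cycles.
c = 3e8                     # m/s, speed of light in vacuum
one_way = 0.15              # metres from CPU to DRAM (assumption)
signal_speed = 0.6 * c      # rough propagation speed in PCB traces (assumption)
round_trip = 2 * one_way / signal_speed   # ~1.7 ns of pure wire delay
for clock in (3e9, 1e15):   # a ~3 GHz CPU today vs a hypothetical 1 PHz one
    print(f"{clock:.0e} Hz clock: {round_trip * clock:,.0f} cycles of flight time")
# 3e+09 Hz clock: 5 cycles of flight time
# 1e+15 Hz clock: 1,666,667 cycles of flight time
```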
If speed were held back by gate time, then sure, but I'd have thought that propagation delays between gates would be kind of relevant.
Making the clock 1,000,000 times faster would mean the silicon would have to be 1,000,000 times shorter (in each dimension). So I guess such designs would support some super-high clock rates for some specialist applications with small gate arrays, but for general-purpose computing, hmm, I'm not so sure.
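To put that scaling in concrete terms, here is a sketch of how far a signal can get in one clock period, assuming a rough ~0.5c on-chip propagation speed:

```python
# Distance a signal can cover in a single clock period.
c = 3e8                      # m/s
on_chip_speed = 0.5 * c      # rough on-chip propagation speed (assumption)
for clock in (5e9, 5e15):    # ~5 GHz today vs a million times faster
    reach_nm = on_chip_speed / clock * 1e9
    print(f"{clock:.0e} Hz: {reach_nm:,.0f} nm of reach per cycle")
# 5e+09 Hz: 30,000,000 nm of reach per cycle (~3 cm, roughly a large die)
# 5e+15 Hz: 30 nm of reach per cycle (smaller than a single logic gate)
```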
Propagation delay isn't purely about distance: it's about the time needed for the output to settle in reaction to inputs. That includes capacitive delays: containers of electrons having to fill up.
Say we are talking about some gate with a 250 picosecond propagation delay.
But light can travel 7.5 cm in that time; way, way larger than the chip on which that gate is found, let alone that gate itself. That tells you that the bottleneck in the gate isn't caused by the input-to-output distance, which is tiny.
Yeah, the article focuses on computing, but I think it could enable totally new electronic devices: frequency/phase-controllable LEDs, light-field displays and cameras, ultra-fast IR-based WiFi, etc.
If I understood the logic correctly, thinking in terms of transistors: they shone a laser on the gate and used that to control an electric charge.
> To reach these extreme speeds, the team made junctions consisting of a graphene wire connecting two gold electrodes. When the graphene was zapped with synchronized pairs of laser pulses, electrons in the material were excited, sending them zipping off towards one of the electrodes, generating an electrical current.
This is not what you typically call a "logic gate", where the control and the output have the same type of energy (either both electric or both photonic); this is more like a fast light sensor?
There are plenty of good applications for fast light sensors; why this article tries to spin it into a logic gate (which it is not) is incomprehensible to me.
> A logic gate is an idealized or physical device implementing a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device
As long as it implements a boolean function, which this clearly does, it sure sounds like a logic gate. What difference does it make whether the control and output have the same form of energy when the real thing that matters is the information it captures?
> What difference does it make whether the control and output have the same form of energy when the real thing that matters is the information it captures?
A logic gate by itself doesn't do much useful computation; you have to chain gates together.
But how do you chain them, if they use a laser beam as input and an electrical charge as output? You have to use the electrical charge to drive a laser... which is much slower and more energy intensive than a classical logic gate in a modern integrated circuit.
> What difference does it make whether the control and output have the same form of energy when the real thing that matters is the information it captures?
Just thinking out loud, but it might break common assumptions about being able to (easily) compose individual gates into a more complicated logic function.
Anecdotally, when the switch from Windows 98 to XP happened I missed it.
I went from an old 386sx-33 to a Pentium 4 and brought my software along with me. The previous owner had borked the hard drive and gave the box to me for free.
I got a hard drive and installed DOS on it (which was the only OS that I had at the time) and tried to play some games.
That was a bewildering experience. Almost nothing worked, I had no drivers and no way to get them, but I did find a few games that would load and ran them. Text games were ridiculously snappy, it felt like I would press the enter key and the next section would already be up before my finger left the key.
But the real mindblower was graphical games. I got (I think) Commander Keen or some other graphic-based platformer to load, and it would start the level and everything moved at super-high speed. If I pressed an arrow key I was instantly as far in that direction as the character could move. When I pressed Jump the character would twitch, completing the jump instruction before the screen could fully update.
The new system running a barebones OS was so fast that the software could not operate normally. Now computers are scores of times faster than that and yet seem so much slower because of both software bloat (bad) and decoupling software clocks from processor clocks (good).
> To reach these extreme speeds, the team made junctions consisting of a graphene wire connecting two gold electrodes. When the graphene was zapped with synchronized pairs of laser pulses, electrons in the material were excited, sending them zipping off towards one of the electrodes, generating an electrical current.
> “It will probably be a very long time before this technique can be used in a computer chip..."
So this is interesting, but largely irrelevant for most HN folks. We'll be retired before it is productized.
This is not a logic gate. The inputs are not even the same physics as the output: light in, charge out. In addition, the phase relationship of the light is used to change the output. So it's an interesting device, but a logic gate it is not.
At a size on the order of 1 µm, it's going to be a long, long while before this becomes a commercially viable competitor to bulk CMOS. It doesn't matter much for a CPU if your transistor can switch 1,000,000x faster if you can only fit 1/1000th as many of them on a die. Your speed would ultimately be limited by the physical wire delays anyway.
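A rough check of that density point (the ~1 µm figure is from the comment above; the ~30 nm effective pitch for current CMOS is an assumption for illustration):

```python
# Relative areal density of ~1 µm devices versus ~30 nm-pitch CMOS.
optoelectronic_pitch = 1e-6   # metres (order of magnitude quoted above)
cmos_pitch = 30e-9            # metres (rough assumption for a modern process)
density_ratio = (optoelectronic_pitch / cmos_pitch) ** 2
print(f"~{density_ratio:.0f}x fewer devices per unit area")
# ~1111x fewer devices per unit area, i.e. roughly 1/1000th as many on a die
```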
Not to mention that it uses "exotic" process steps, which means capacity is, at minimum, decades away from being meaningful.
Don't get me wrong, the research is cool, but it's not going to make "computers a million times faster".
What if it ends up in a USB-like scenario: fewer wires, but running at a higher speed? A 4-8x smaller word size to get a ~10^6x clock gain sounds like a good trade. Just think, Z80s and 6502s coming back into fashion. This time, turbo-charged!
Chuck Moore was kind of on that beat already with his GreenArrays chips.
It will definitely be a while, but maybe not such a long one.
Key sizes are generally chosen so that brute force is infeasible even with enormous speed advancements. You cannot increment a counter to 2^256. There isn't enough energy in our solar system. So you cannot brute force 256-bit symmetric key encryption using traditional computers. Not at any speed.
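For a sense of scale, here is a sketch of the thermodynamic floor on that counter, assuming the Landauer limit at room temperature, at least one bit flip per increment, and the Sun's mass-energy (E = mc²) as the budget:

```python
import math

# Minimum energy to count to 2^256 vs the Sun's entire mass-energy.
k_B = 1.380649e-23                     # Boltzmann constant, J/K
landauer = k_B * 300 * math.log(2)     # ~2.9e-21 J per irreversible bit flip at 300 K
increments = 2 ** 256                  # ~1.2e77 counter increments
energy_needed = landauer * increments  # lower bound: >= 1 bit flip per increment
sun_mass_energy = 2e30 * (3e8) ** 2    # E = mc^2 for the Sun, ~1.8e47 J
print(f"energy just to count to 2^256: {energy_needed:.1e} J")
print(f"total mass-energy of the Sun:  {sun_mass_energy:.1e} J")
# energy just to count to 2^256: 3.3e+56 J
# total mass-energy of the Sun:  1.8e+47 J
```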
https://spie.org/news/photonics-focus/marapr-2022/harnessing...
https://www.nextplatform.com/2022/03/17/luminous-shines-a-li...
Not really the same thing but still cool!
I can see this technology being made into a supercomputer-type setup one day, but as far as home computing goes, I have my doubts.