I've long been enamored with the idea of learning from analog computers to build the next generation of digital ones. From one perspective, all our computers are analog of a sort - today's chips effectively leverage electron flow through a carefully arranged metal/silicon substrate, with self-interference via electromagnetic fields used to construct transistors and build up higher-order logic units. We're now working on photonic computers, presumably with some analogous property leading to self-interference, allowing transistors/logic to be built above that.
"Wires" are a useful convenience in the electron world, to build pathways that don't degrade with the passing of the electrons themselves. But if we relax that constraint a bit, are there other ways we can build up arrangements of "organized flow" sufficient for logic units to arise? E.g. imagine pressure waves in a fluid-filled container, with mini barriers throughout defining the possible flow arrangement that allows for interesting self-reflections. Or, way further out, could we use gravitational waves through some dense substance with carefully arranged holes, self-interfering via their effect on space-time, to do computations for us? And maybe before we get there, is there a way we could capitalize on the strong or weak nuclear force to "arrange" for higher-frequency logical computations to happen?
Physics permits all sorts of interactions, and we only really use the simple, easy-to-conceptualize ones as yet, which I hope and believe leaves lots more for us to grow into :).
Electricity is also a wave. The wires are essentially waveguides for particles/waves traveling at near-luminal speeds. So in theory anything done with electricity could be replicated using other waves, but to make it faster you would need waves that travel faster than an electrical signal through a wire. Photons through a vacuum might be marginally faster; pressure waves through a fluid would not be.
If bitflips are a problem in a modern chip, imagine the number of problems if your computer ran on gravity waves. The background hum of billions of star collisions cannot be blocked out with grounded tinfoil. There is no concept of a faraday cage for gravity waves.
Nitpick: gravity waves [1] pretty universally refer to waves in fluid media in which the restoring force is buoyancy. Ripples in spacetime are usually called _gravitational_ waves.
You're right that the speed of light remains a constant limitation on propagation delay, but the defining limitation on the speed of computation is rather the clock speed - how long it takes for each round of computation. Electrons are comparatively slow due to the time it takes to fill and stabilize a transistor. Our hypothetical new type of computer will have to be faster to converge, rather than faster to propagate.
You're right about the bit flips though. I don't know if a gravitational wave computer is actually ever going to be feasible, just an interesting dream for the far future. Hopefully there are more options to consider in the meantime :).
Gravity is a poor basis for computation because it is incredibly weak - roughly 10^-43 the strength of the electromagnetic force between two electrons. Even if you add several powers of 10 for all the metal wire harness and battery chemistry around the electrons, you still get far more usable force per gram from electricity and metal than you do from gravity.
Is it even theoretically possible to waveguide gravity? The electric field can be positive and negative, but gravity is unsigned -- there is no anti-gravity. This is probably related to what you're saying about faraday cages.
Electricity travels much faster than the electrons themselves (which drift at only ~3 cm/s!); the signal propagates at a substantial fraction of the speed of light. The energy flow is described by the Poynting vector: it travels as a wave in the fields around the wire, not with the electrons.
> So in theory anything done with electricity could be replicated using other waves
I sort of get this in a discrete digital logic scenario, but out of curiosity, as someone not big on photonics: what would be the light 'equivalent' of an electrical AC signal? I'm kind of struggling to visualize that.
> It employs two-dimensional quasiparticles called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer. The advantage of a quantum computer based on quantum braids over using trapped quantum particles is that the former is much more stable.
This reminds me of the method of calculating Fourier transform by refracting light through a prism and reading off the different frequencies. You get the "calculation" for free.
Say you take a standard slide rule with two log scales, and want to do a division problem, x/y. There's more than one way to do it. I can think of at least 3. One of them won't just compute x/y for your particular x, but will compute x/y for ANY x.
Accuracy is always the issue with analog stuff, but they sure are neat.
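The slide-rule trick is just log(x) - log(y) = log(x/y): division becomes subtracting two lengths. A minimal sketch of that mechanism in Python (the function name is mine, just for illustration):

```python
import math

def slide_rule_divide(x, y):
    """Divide the way a slide rule does: subtract lengths on log scales."""
    length_x = math.log10(x)            # position of x on the fixed scale
    length_y = math.log10(y)            # position of y on the sliding scale
    return 10 ** (length_x - length_y)  # read off where the scales align

print(slide_rule_divide(6, 3))  # ≈ 2.0
```

The "for ANY x" variant corresponds to fixing length_y once - aligning y on the sliding scale with the index - after which every x on the fixed scale reads off x/y directly, no further motion needed.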
Another fun one to contemplate is spaghetti sort. With an analog computer of sufficient resolution, you can sort n elements in O(n). You represent the numbers being sorted by lengths of spaghetti. Then you put them on the table straight up and bring a flat object down until it hits the first and largest piece of spaghetti. You set that down and repeat the process, selecting the largest element of the set every time.
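A digital simulation of the procedure above makes the cost gap visible: the analog plate finds each maximum in O(1) by touch, while a serial simulation has to scan, which turns it into plain selection sort:

```python
def spaghetti_sort(values):
    """Simulate spaghetti sort: lower a flat plate until it touches the
    tallest remaining strand, set that strand aside, repeat.
    The analog version finds each max in O(1) by touch, giving O(n) total;
    simulated serially, each max is an O(n) scan, i.e. O(n^2) overall."""
    strands = list(values)
    picked = []
    while strands:
        tallest = max(strands)     # the plate touches this strand first
        strands.remove(tallest)    # set it aside
        picked.append(tallest)
    return picked                  # largest first, in the order picked

print(spaghetti_sort([3, 1, 4, 1, 5]))  # [5, 4, 3, 1, 1]
```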
I've always liked the idea of hybrid systems. I envision one where you feed the analog part of your problem with a DAC, then get a really close answer up to the limit of your precision from the analog component, then pass that back out to an ADC and you have a very very close guess to feed into a digital algorithm to clean up the precision a bit. I bet you could absolutely fly through matrix multiplication that way. You could also take the analog output and adjust the scale so it's where it needs to be on the ambiguous parts, then feed it back into your analog computer again to refine your results.
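One way to sketch that digital clean-up loop is classic iterative refinement, with float16 rounding standing in for the analog component's limited precision (the setup below is illustrative, not any particular hardware):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 8 * np.eye(8)  # well-conditioned system
b = rng.standard_normal(8)

def analog_solve(A, b):
    """Stand-in for the analog stage: an exact solve quantized to float16,
    i.e. roughly 3 decimal digits coming back out of the 'ADC'."""
    x = np.linalg.solve(A, b)
    return x.astype(np.float16).astype(np.float64)

x = analog_solve(A, b)              # rough first answer
for _ in range(3):                  # digital refinement passes
    r = b - A @ x                   # residual, computed in full precision
    x = x + analog_solve(A, r)      # re-use the analog stage on the residual

print(np.max(np.abs(A @ x - b)))    # residual shrinks far below float16 precision
```

This is the rescale-and-feed-back idea: each pass sends only the (ever smaller) residual through the low-precision stage, so its limited resolution applies to a shrinking quantity.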
Where does a doctor’s stethoscope fit in? Other examples: Mechanic’s stethoscope for diagnosing an engine, airplane vibrations to foretell maintenance, bump oscillations to grade quality of a roadway.
Isn’t this how very old sorting machines with punch cards worked? I’m thinking of the kinds used by the census or voting machines in the late 1800s or early 1900s.
An even better one - holding an image at the focal point of a lens produces its Fourier transform at the focal point on the other side of the lens[0]. It is used for "analog" pattern matching[1]. There is an interesting video explaining this on the Huygen Optics Youtube channel[2].
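The math the lens arrangement performs can be sketched digitally: cross-correlation via Fourier transforms, with the brightest spot marking the match. A numpy sketch of the same computation (not the optical setup; the scene and patch are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.standard_normal((64, 64))
template = scene[20:36, 30:46]      # 16x16 patch we want to locate

# Correlator math: cross-correlation is an inverse FFT of
# F(scene) * conj(F(template)) -- what the lenses do with light
F_scene = np.fft.fft2(scene)
F_temp = np.fft.fft2(template, s=scene.shape)  # zero-pad to scene size
corr = np.fft.ifft2(F_scene * np.conj(F_temp)).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # row, col where the patch was cut from
```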
oh god, I can see it coming: an elaborate analogue music player for a special price, using nothing but light. The fuzzy output will be its feature, sought after by misdirected audiophiles...
Not much difference here. Calculation (or, more generally, computation) is the manipulation of abstract symbols according to pure rules that may or may not represent concrete entities, e.g. the simplification of polynomials according to the rule of adding like powers.
Simulation is when we manipulate things (concrete or abstract) according to the rules that govern other concrete things, e.g. pushing around balls in circles to (highly inaccurately) represent the orbit of planets around a star.
Not all calculation is simulation, and not all simulation is calculation, but there exists an intersection of both.
The key trick you can do with that last category is that when the physical system you're simulating is controllable enough, you can use the correspondence in the other direction: use the concrete things to simulate the abstract things. It's simulation, because you're manipulating concrete entities according to the rules that govern other entities (which happen to be abstract), but what you're doing also amounts to doing a calculation with those abstract entities.
This perspective fits nicely with the simulation theory.
If we accept it, for argument's sake, then what's happening is essentially delegating the computation to the ultra-meta-computer that runs the simulation.
Whether the universe is a simulation is unknowable, but the universe could consist of thought. If so, this research is dangerous; like the Trinity nuclear test, the conflagration could alter our neighborhood of the universe.
I had a pretty convincing revelation last night that the simulation was run by insects. I could only get back to sleep by ridiculing myself for such a derivative thought. Or is there a reason it's universal?
I believe one of the earliest applications incorporating this line of thought was MONIAC, the Monetary National Income Analogue Computer, which used water levels to model the economy [0]. There's a short youtube documentary on its history and operation. [1]
Analog computers date back to the 19th century; they were used to decompose signals via the Fourier transform, since it's easy(ish) to get a bunch of oscillators at different frequencies. They were used for tide prediction and differential equations. https://en.m.wikipedia.org/wiki/Analog_computer
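Kelvin's tide predictor is a concrete example: it summed harmonic constituents mechanically with wheels and pulleys. The same synthesis in numpy (the constituent periods are real tidal constituents; the amplitudes and phases are made up for illustration):

```python
import numpy as np

# Harmonic constituents: (amplitude, period in hours, phase) --
# amplitudes/phases here are invented, periods are the real constituents
constituents = [(1.00, 12.42, 0.3),   # M2, principal lunar semidiurnal
                (0.46, 12.00, 1.1),   # S2, principal solar semidiurnal
                (0.20, 25.82, 2.0)]   # O1, lunar diurnal

t = np.linspace(0, 48, 1000)          # two days, in hours

def tide_height(t):
    """Sum of oscillators, as the machine's pulleys summed wheel motions."""
    return sum(a * np.cos(2 * np.pi * t / period + phi)
               for a, period, phi in constituents)

h = tide_height(t)
print(h.max(), h.min())               # bounded by the summed amplitudes
```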
A better title would have been: how to make the universe do (a whole lot of) math for us [1]. What so-called neural networks do should not be confused with thinking, at least not yet.
And the fact that we can get the universe to do math for us should not be surprising: we can model the universe with math, so of course that mapping works in the other direction as well. And this is not news. There were analog computers long before there were digital ones.
---
[1] ... using surprisingly small amounts of hardware relative to what a digital computer would require for certain kinds of computations that turn out to be kind of interesting and useful in specific domains. But that's not nearly as catchy as the original.
> we can model the universe with math, so of course that mapping works in the other direction as well.
This is not so obvious as you make it appear. For instance, we can model the weather for the next couple of days using math. But letting the weather of the next couple of days calculate math for us doesn't work very well. The reason is that we can't set the inputs for the weather.
This problem comes up in various forms and shapes in other 'nature computers' as well. Quantum computers are another example where the model works brilliantly but setting the pre- and side conditions in the real world is a major headache.
> What so-called neural networks do should not be confused with thinking, at least not yet.
I disagree:
I think neural networks are learning an internal language in which they reason about decisions, based on the data they’ve seen.
I think tensor DAGs correspond to an implicit model for some language, and we just lack the tools to extract that. We can translate reasoning in a type theory into a tensor DAG, so I’m not sure why people object to that mapping working the other direction as well.
This internal language, if I'm not mistaken, is exactly what the encoder and decoder parts of neural networks work with.
> in which they reason about decisions
I'm in awe of what the latest neural networks can produce, but I'm wary to call it “reasoning” or “deciding”. NNs are just very complex math equations and calling this intelligence is, in my opinion, muddying the waters of how far away we are from actual AI.
We can model the universe with math because math is what we have to model the universe with. The fact that it can talk back to us in math is amazing because to me it means that math is not a dead end cosmically, which means we might be able to use it to communicate with other intelligences after all.
assertion: thinking is synonymous with computation (composed operations on symbolic systems).
computation is boolean algebra.
-> therefore, doing math is to think.
I'm not trying to be pedantic, I just don't think using intuitive associations with words helps clarify things. If your definition for thought diverges here, please try to specify how exactly: what is thought, then? Semi-autonomous "pondering"? Because the closer I look at it, that, too, becomes boolean algebra, calling eval() on some semantic construct, which boils down to symbolic logic.
What you may mean is that "neural" networks are performing statistics instead of algebra, but that's not what the article is about, is it?
> I don't think using intuitive associations with words helps clarify things
Sincere question: do you think that "think using intuitive associations with words" can be safely translated to "compute using intuitive associations with words"?
I don't think so. Therefore, even if thinking is also computing, reducing thinking to boolean algebra is a form of reductionism that ignores a number of emergent properties of (human) thinking.
> If your definition for thought diverges here, please try to specify how exactly: what is thought, then?
This is a burden-shifting reply of "so prove me wrong!" to anyone who feels that your assertion lacks sufficient justification for it to be taken as an axiom.
This instantly reminded me of the paper "Pattern Recognition in a Bucket"[0], which I saw referenced a lot when I first started reading about AI in general. I only have surface-level knowledge of the field, but how exactly does what's described in the article differ from reservoir computing? (The article doesn't mention that term, so I assume there must be a difference.)
In this PNN approach you are solving for what additional stimuli, when applied to the system alongside the inputs, produce the desired result for a given input. In reservoir computing (RC) you don’t bother to provide any additional stimuli, and find the linear combination of reservoir outputs that gives the desired result. Training the former is more demanding and analogous to a NN (thus the name), but directly produces your answer from the system. The latter is very easy to train (one regression) but requires post processing for inference.
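A minimal echo-state-style sketch of the reservoir-computing side makes the "one regression" point concrete - only the readout W_out is ever fitted (sizes, scalings, and the delay-recall task below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                  # reservoir size (arbitrary)
W_in = rng.uniform(-0.5, 0.5, N)         # fixed random input weights
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(u):
    """Drive the fixed random network and record its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

u = rng.uniform(-1, 1, 500)              # random input signal
X = run_reservoir(u)[10:]                # reservoir states, transient dropped
y = u[8:498]                             # target: the input two steps back
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)  # the single regression
print(np.mean((X @ W_out - y) ** 2))     # small training error
```

Nothing inside the reservoir is trained; the regression just finds the linear combination of its outputs that reproduces the target, which is why training is so cheap compared to the PNN approach.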
When I first came across machine learning it reminded me of control theory. And sure enough if you search around you get to articles like this [1] saying that neural networks were very much inspired by control theory. The bit of control theory that I was taught way back was about analog systems. I have no idea if the electronic circuit mentioned at the end is even like a classical control system but it does feel a bit like something coming around full circle.
[1] https://en.wikipedia.org/wiki/Gravity_wave
[2] https://en.wikipedia.org/wiki/Gravitational_wave
https://en.wikipedia.org/wiki/Topological_quantum_computer
Replace "data" with "computation", and "formula" with physical, less expensive processes.
[0] - https://en.wikipedia.org/wiki/Fourier_optics
[1] - https://en.wikipedia.org/wiki/Optical_correlator
[2] - https://www.youtube.com/watch?v=Y9FZ4igNxNA
[0] https://en.wikipedia.org/wiki/MONIAC
[1] https://www.youtube.com/watch?v=rAZavOcEnLg&ab_channel=Reser...
https://youtu.be/rAZavOcEnLg?t=101 (shows operation of MONIAC)
https://en.wikipedia.org/wiki/Reservoir_computing
- what is the question?
- what is the answer?
[0] https://www.researchgate.net/publication/221531443_Pattern_R...
[1] https://scriptedonachip.com/ml-control