Boeing deserves a 9-figure fine though, and its shareholders should lose massively to make sure this doesn't happen again.
1: Physicalism is true. Nothing exists that is not part of the physical world.
2: The physical world obeys mathematical laws, and those laws can be learned in time.
2.1: The physical contents of the human body can eventually be learned with arbitrary/sufficient fidelity.
3: Any mathematical rule can be computed by a sufficiently advanced computer. (Edit: or maybe a better assumption: the mathematical laws that underlie the universe are all computable.)
4: Computational power will continue to increase.
Subject to these assumptions, we will eventually gain the ability to simulate full physical human beings within computers. Perhaps with some amount of slowdown, but in the end, these simulated humans would be able to converse with entities outside the computer. In all likelihood, computers will pass the Turing test long before this. But even if they don't, simulated humans seem possible, even probable, under these assumptions, and therefore the result of this paper is likely incorrect.
The problem then becomes finding an approach to general AI that avoids hitting incompleteness/undecidability[3] issues. My feeling is that this would be difficult. One way to try to avoid these issues is to avoid notions of self-reference, since self-reference spawns a lot of undecidable statements (e.g., "this statement is false" is neither true nor false). It seems to me, though, that the notions of the self and self-awareness are central to human consciousness, and so unavoidable when developing a complete simulation of human consciousness. The self is probably not computable.
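The classic illustration of self-reference spawning undecidability is Turing's diagonal argument against a universal halting checker. Here is a rough Python sketch (the function names are illustrative, and the "oracle" shown is deliberately fake, since a real one cannot exist):

```python
# Sketch of Turing's diagonal argument. Suppose halts(f, x) correctly
# returned True iff f(x) terminates. Then we could build a program that
# asks the oracle about itself and does the opposite:
def make_diagonal(halts):
    def diagonal(f):
        if halts(f, f):        # if the oracle says f(f) halts...
            while True:        # ...loop forever
                pass
        return None            # otherwise, halt
    return diagonal

# Feeding diagonal to itself defeats any oracle: if halts(diagonal, diagonal)
# is True, diagonal(diagonal) loops; if False, it halts. Either way the
# oracle is wrong, so no total halts() can exist.

# Demonstration with a trivially wrong "oracle" that always answers False:
always_false = lambda f, x: False
diag = make_diagonal(always_false)
print(diag(diag) is None)  # the oracle said "doesn't halt", yet it halted -> True
```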
Obviously there could be approaches that avoid these pitfalls, but every year that goes by without much progress towards general AI makes me feel more confident in this intuition. I do think there will be lots of useful progress in specialized AIs, but I see this as analogous to developing algorithms to decide the halting problem for special classes of algorithms. General AI is a whole different beast.
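To make the analogy concrete: while the halting problem is undecidable in general, it is decidable for restricted classes, such as programs over a finite state space, where you simulate and watch for a repeated state. A minimal sketch (the `step`-function encoding of a program is an assumption for illustration):

```python
# Halting is decidable for programs with finitely many states: simulate,
# and if any state repeats, the program must loop forever.
def halts_finite_state(step, state):
    """step(state) returns the next state, or None when the program halts."""
    seen = set()
    while state is not None:
        if state in seen:
            return False       # revisited a state: infinite loop
        seen.add(state)
        state = step(state)
    return True

# A counter that halts when it reaches 0:
print(halts_finite_state(lambda s: None if s == 0 else s - 1, 7))  # True
# A counter stuck cycling mod 8:
print(halts_finite_state(lambda s: (s + 1) % 8, 0))                # False
```

Specialized AIs succeeding on narrow domains is, in this analogy, like `halts_finite_state` succeeding on finite-state programs: real progress that says nothing about the general case.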
But if general AI is physically impossible, how does the human brain "compute" general intelligence at all? It could be that your assumption #1 ("Physicalism is true. Nothing exists that is not part of the physical world.") is not correct. Maybe reality has "layers" and our world is some kind of simulation in another layer. Or maybe there is only one consciousness like many spiritual people and Boltzmann[4] suggest. Or maybe the human experience could be a process of trying to solve an undecidable problem and failing...
1. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
2. https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
Or another alternative would be to fund insurance through carbon tax dollars and return the rest as a refund.
Prior to that, I had considered Wikileaks a brave experiment in radical transparency. Since then, I've considered Wikileaks a somewhat biased source. The truth is the truth, yes, but every truth is partial, and context matters.
"Asset" is perhaps too strong a word, but "useful idiot" may apply, or "the enemy of my enemy".
Homomorphic encryption promises a hidden and verifiable online voting system that does not rely on trusting a third party.
https://www.chaum.com/publications/AccessibleVoterVerifiabil...
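The core idea can be sketched with an additively homomorphic scheme such as Paillier: each voter encrypts a 0 or 1, anyone can multiply the ciphertexts to add the underlying votes, and only the final tally is ever decrypted. A toy Python sketch (tiny primes, purely illustrative, not secure; real systems also need zero-knowledge proofs that each ballot encrypts 0 or 1):

```python
import random
from math import gcd

# Toy Paillier keypair (insecure demo primes).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # requires Python 3.8+

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Each voter encrypts 1 (yes) or 0 (no). The tallier multiplies the
# ciphertexts, which adds the plaintexts -- without seeing any single vote.
votes = [1, 0, 1, 1, 0, 1]
ballots = [encrypt(v) for v in votes]
tally_ct = 1
for b in ballots:
    tally_ct = (tally_ct * b) % n2
print(decrypt(tally_ct))  # -> 4
```

Note that this addresses secrecy and verifiability of the tally, not the coercion problem discussed below, which no cryptography alone can solve.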
The major problem with online voting is that people can be coerced into voting against their wishes outside the watchful eye of election authorities. This may be worth the increase in voting ease, but it's where the real debate is.