I'd assumed until recently that early robots and other forms of artificial being were friendly and regarded as a good thing in early science fiction. Now it seems they were symbols of arrogance and guilt right from the start!
The cancelled Catholic intellectual E. Michael Jones points out that Mary Wollstonecraft Shelley may have been afflicted by guilt over the suicide of Harriet, Percy Shelley's estranged wife. Frankenstein's monster became an object of that guilt. Similarly, he suggests, the monsters in Aliens represent guilt over growing sexual licence and abortion.
If true this sort of thing may explain why the monsters are typically extremely powerful, can break through steel doors, etc. One cannot escape from them just as one cannot escape from a guilty conscience.
So our ideas about robots may say more about us than about robots. It would be a shame if psychological baggage were to hamper the development of artificial general intelligence (AGI). It would mean that science fiction has primed us to reject the sub-creation of life merely because we project easily onto anything that resembles ourselves, even before it has come into existence.
There is an old story of a Prague rabbi named Loew, who made a golem of clay, an artificial servant of sorts. Needless to say, the golem, through an error of the rabbi, escaped his control and started behaving violently. Fortunately the rabbi was able to incapacitate him (it?).
This story is, I think, some 400 years old. It seems humans have been distrustful of artificial beings for a long time, since well before they could actually have met one.
> I'd assumed until recently that early robots and other forms of artificial being were friendly and regarded as a good thing in early science fiction. Now it seems they were symbols of arrogance and guilt right from the start!
I believe Asimov has written that when he started writing his robot tales, one of his goals was to have robot stories that were not just Frankenstein stories, which he said were prevalent at the time.
Another favourite is of course Lieutenant Commander Data. His goal, rather than to destroy humanity, was to become more human himself. This is finally realised when he is presented with an 'emotion chip'.
Yet if the neuropsychoanalyst Mark Solms is correct, emotions are the foundation of all minds, i.e. Data could not have been emotionless to begin with.
> I'd assumed until recently that early robots and other forms of artificial being were friendly and regarded as a good thing
See also Metropolis and many other works of art and fiction.
But I don't see one speculative theory about one author of one story (from centuries ago) as significant to a serious contemporary issue. It provides no evidence of the risks and the level of risk, nor an argument about them. Dismissing Mary Shelley, even if accepted, tells us nothing about whether AI can safely drive a car, make judicial decisions, decide whether you get a loan, or drop bombs.
The risk is perceived far beyond that story or older sci-fi; it is widely anticipated by leading scientists and technologists of our day. What about their evidence and arguments? Nor is it hard to imagine such risks with only a little thought.
> cancelled
Do we really need to inject reactionary politics into a discussion of AI? Does that further the conversation?
Also, “howitzer” is the anglicised pronunciation of a Czech invention, the houfnice, from the Hussite wars. The Hussites were the first to use firearms on a mass scale, arming peasants with pistols and long guns and dragging artillery of their own design in wagon carts. It was a real shocker to the medieval knights they were fighting.
I actually read the play; it's very good. A "robot" is a simplified human, assembled in a factory, with no reproductive capability. There was an implied aspect of sex slavery as well. By the end, a pseudo Adam and Eve have emerged.
In Czech, 'robota' is an archaic word for 'work'. I remember my great-grandaunt using it when I was a kid. I think Karel Capek credited his brother Josef, a poet and painter, with coining the term 'robot'.
To me it has negative connotations. I'd be way more likely to use the word "robota" (as opposed to just "praca") when I'm not too happy about what is involved.
I have fond memories of his short stories and his wonderful, kind fairy tales. I used to read them as a child; some I remember almost word for word, even now at 50.
The excellent book Ariel Like A Harpy, an analysis of the Shelleys' Frankenstein and Prometheus Unbound, includes a chapter on R.U.R., and draws explicit lines of influence linking them.
Believe it or not I was briefly a Czech major in college. Fantastic literature. I never quite understood why the whole connection with the word robot was always so strongly emphasized though, even then.
I think for small nations, external validation is important, and robot took off. So Czechs themselves will be quick to point that out when listing their achievements. There is much better Czech literature out there.
Apparently Gene Roddenberry was inspired to create Star Trek by the Czech sci-fi film Ikarie XB-1. The resemblance is there. There is also a Czech sci-fi film with the first known selfie stick.
- The word "robot" (with its modern meaning) comes from a Czech play
- Tradition of Czech animation
- Tradition of puppetry
All these are about "animating" in the classic sense of the word, to breathe life into something.
Just kidding, but the Golem might as well be one of the early robot prototypes.
https://en.wiktionary.org/wiki/dollar#Etymology
"Kaj ideš?" - "Do roboty." (standard Czech would be "Kam jdeš?" - "Do práce.")
Robota is more common in Slovak than in Czech.