I can't tell if I'm just getting old, but the last two major tech cycles (cryptocurrency and AI) have both seemed like net negatives for society. I wonder if this is how my parents felt about the internet back in the 90s.
Interestingly, both technologies also supercharge scams - one by providing a way to cash out with minimal risk, the other by making convincing human interaction easier to fake.
This parallel is something that I've been mulling over for the better part of this year.
Are we simply getting old and bitter?
Personally, I would add a previous cycle to this: social media, although people were quick to point at the companies that were sparked and empowered by having unprecedented distribution.
Are we really better or worse off than a few decades ago?
No, we are getting wiser. It's not bitterness to look at a technology with a critical eye and see the bad effects as well as the good. It's not foolish to judge that the negative effects outweigh the positive. It's a mark of maturity. "But strong meat belongeth to them that are of full age, even those who by reason of use have their senses exercised to discern both good and evil."
For crypto, no. It's basically only useful for illegal actions, so if you live in a society where illegal is well correlated with "bad", you won't see any benefit from it.
The case for LLMs is more complicated: there are positives and negatives. And the case for social networks is even more complicated, because they are objectively not what they used to be anymore.
It depends. Maybe 20 years ago, a couple of years after the dot-com bubble, we thought we were not gonna repeat the same mistakes as we did before, and I do believe we blindly drank the Kool-Aid thinking we were gonna solve all problems with tech.
Now, we're another year older and another year wiser, times 20. I don't think having one's eyes open is synonymous with bitterness... but it is what we do with the information and knowledge we have acquired that defines that trait: do we sit and grumble and shake our fists at the cloud (providers?), or do we seek out others to try to keep problems from escalating?
> Are we really better or worse off than a few decades ago?
While technological progress in several fields has been amazing, it would be naïve of us not to recognize the areas where we have regressed.
Looking back, I think we should have normalized caution, not moving fast and breaking things; normalized interoperability, and not walled gardens; and we should have been more wary about the dangers of not having solved business models instead of normalizing tracking and targeted advertising, which enabled personalized propaganda...
... we should also have paid more attention to the unchecked power of monopolies and media conglomerates, and done more to foster a healthier economy as well as improve people's quality of life and rights protections, including access to education and the strengthening of institutions.
So, to finally answer your question, I think we are in general a bit worse off. Why? Well, I look back to 20 years ago when our outlook on the future was that the sky was the limit if you worked and studied hard; and now the outlook on the future 20 years from now... seems uncertain.
I think the progression of sentiment is basically the same. There were lots of folks pushing the agenda that connecting us all would somehow bring about the evolution of the human race by putting information at our fingertips; that optimism was eventually followed by concern about kids getting obsessed and porn-saturated.
The same cycle happened (and is happening) with crypto and AI, just on more compressed timeframes. In both cases, an initial period of optimism transitioned into growing concern about the negative effects on our societies.
The optimistic view would be that the cycle shortens so much that the negatives of a new technology are widely understood before that tech becomes widespread. Realistically, we'll just see the amorality and cynicism on display and still sweep it under the rug.
A large part of it is that we maxed out a lot of how communication tech can impact daily life, at least in terms of communication, but economically and culturally we got in the habit of looking for new and exciting improvements to daily life.
The 19th and 20th centuries saw a huge shift in communication. We went from snail mail to telegrams to radio to phones to television to internet on desktops to internet on every person wherever they are. Every 20-30 years some new tech made it easier, cheaper, and faster to get your message to an intended recipient. Each of these was a huge social shift in terms of interpersonal relationships, commerce, and diminishing cycle times, and we've grown to expect these booms and pivots.
But there isn't anywhere much to go past "can immediately send a message to anyone anywhere." It's effectively an end state. We can no longer take existing communication services and innovate on them by merely offering the same service on revolutionary new tech. But tech sectors are still trying to recreate the past economic booms by pushing technologies that aren't as revolutionary or as promising, and hyping them up to get people thinking they're the next stage of the communication technology cycle.
> A large part of it is that we maxed out a lot of how communication tech can impact daily life, at least in terms of communication,
Perhaps for uneducated casual communication, lacking in critical analysis. The majority of what passes for "communication" is misunderstood, misstated, omits key critical aspects, and comes from an uninformed and unexamined position... the human race may "communicate", but it does so very poorly, to the degree that much of the human activity in our society is placeholder and "good enough" while being in fact terrible and damaging.
I'm not generally anti-capitalist, but what capitalism has become at this point in history means that technology is no longer for helping people or helping society.
Imagine the DVR being invented today. A commercial device that helps you skip ads. It would never be allowed to happen.
I am generally anti-capitalist, and a big reason is that I don't think capitalism, inherently and fundamentally, can become anything other than what it is now. The benefit it's provided is rarely accurately weighed against the harms, and for people who disproportionately benefit, like most here on HN, it's even harder to see the harms.
Anti-capitalist sentiment was incredibly widespread in the US during the 19th century through the 1930s, because far more people were personally impacted, and most needed look no further than their own lives to see it.
If nothing else, capitalism has become more sophisticated at disguising its harms, and at acclimating people to them to such an extent that many become entirely incapable of seeing any harm at all, or even of imagining any other way for a society to be structured, despite humanity having existed for 100,000+ years.
> Imagine the DVR being invented today. A commercial device that helps you skip ads. It would never be allowed to happen.
That's arguably what AI is - it compressed the internet so that you can extract StackOverflow answers without clicking through all the fucking ads that await you on the journey from search bar to the answer you were looking for.
You can of course expect it, over the next decade or so, to interpose ads between you and your goal in the same way that Google and StackOverflow did from 2010-now.
But for the moment I think it's the exact opposite of your thesis. The AI companies are in cut-throat capture-market-share mode so they're purposely skipping opportunities to cram ads down your throat.
Yes, at some point mainstream technology turned on the users. So much modern tech seems to be about exerting control or "monetizing" instead of empowering.
They are then asked whether they agree or disagree with a (presumably hypothetical?) company's proposal to reduce employees' welfare, such as replacing a meal with a shake. The two groups showed different preferences.
This makes me think about that old question of whether you thank the LLM or not. Thanking is treating LLMs more like humans, so if what this paper found holds, maybe that'd nudge our brains subtly toward dehumanizing other real humans!? That's so counterintuitive...
Do you understand how they chose the two groups? And why show one group one video, and the other group the other video? Shouldn’t both groups be shown the same video, then check whether the group division method had any impact on the results? E.g. if group one was dance lovers and group two were dance haters, you wouldn’t get any data on the haters since they were shown the parkour video instead of the dance video.
Also, interesting bit: "Participants in the high (vs. low) socio-emotional capability condition showed more negative treatment intentions toward employees"
Apparently you do not understand how they chose the two groups. Group identity was not based on a survey or any attribute of the participating individuals.
Low and high socio-emotional groups refer to whether the group was shown the low or high socio-emotional video. The pre-test and exclusion based on lack of attention and instruction following was performed before group selection for each individual, which was presumably random.
To the point of the paper, it has been a somewhat disturbing experience to see otherwise affable superiors in the workplace "prompt" their employees in ways that are obviously downstream of their (very frequent) LLM usage.
I started noticing this behavior a few months ago and whew. Easy to fix if the individual cares to, but very hard to ignore from the outside.
Unsolicited advice for all: make an effort to hold onto your manners even with the robots or you'll quickly end up struggling to collaborate with anyone else.
One very new behavior is the dismissal of someone's writing as the work of AI.
It's sadly become quite common on internet forums to suppose that some post or comment was written by AI. It's probably true in some cases, but people should ask themselves how the cost/benefit to calling it out looks.
Unfortunately it's the correct thing to do. Just like in the past where you shouldn't have believed any stories told on the internet, it's now reasonable to assume any image/text you come across wasn't created by a human, or in the case of images is simply an event that never happened.
The easiest way to protect myself these days is to assume the worst about all content. Why am I replying to a comment in that case? Consider it a case of yelling into the void.
1. A bot-generated argument is still an argument. I can't make claims about its truth or falsity based on the enunciator; that's simply ad hominem.
2. A bot-generated image is not a record of photon-emissions in the physical world. When I look at photos, they need to be records of the physical world, or they're a creative work.
I think you can't rationally apply the same standard to these two things.
My partner has become tiresome about this - even if I were to tell them that I responded to your comment on HN, they'd go "You probably just responded to a bot".
Are bots really infiltrating HN and making constructive non-inflammatory comments? I don't find it at all plausible but "that's just what they want you to think".
I've seen chatgpt output here as comments for sure. In some cases obvious, in other cases borderline. I wouldn't guess that it's a major fraction of comments, but it's there.
So far (as of 15 or so minutes after your comment) we have only one top-level comment that really indicates that the poster has started trying to read the paper seriously, Kohsuke’s post.
They actually described the methodology at least (note: I also haven’t fully read the paper yet, but I wanted to post in support of you not having a “take” yet, haha).
On the opposite side (i.e. the side of what Bender called meatbags), there are a lot of jobs where judgment and empathy are not allowed. E.g. TSA agents examining babies for bombs in case they're terrorists -- they were told "You must do this to every passenger, no questions asked", and exercising judgment means deviating from their job description and risking losing it.
Maybe, but it has nothing to do with change itself.
Change can be either positive or negative. Often it is objectively negative and can stay that way for decades.
Similarly for the internet: back in the 90s, Nigerian princes were provided a means to reach exponentially more people, faster.
No, it has regressed now. We are probably back to the level of the 1950s, before telephones became common.
People don't answer unknown numbers and are not listed in the telephone book.
When I was a kid in the 90s I could call almost anyone in my town by looking them up in the phone book.
Crypto was a way for people who think they're brilliant to engage in gambling.
AI is a way for "smart" people to create language that makes their opinions sound "smarter".
For example, in one study, they divide participants into two groups and have one group watch https://www.youtube.com/watch?v=fn3KWM1kuAw (which highlights the high socio-emotional capabilities of a robot), while the other watches https://www.youtube.com/watch?v=tF4DML7FIWk (which highlights the low socio-emotional capabilities of a robot).
https://news.ycombinator.com/item?id=44912783