I think my bigger concern with AI in the short term isn't the immediate displacement of all jobs, but something more personal.
In much of the world today, we observe the fraying of human connection and relationships due to a variety of factors [0], and this could be further accelerated by AI that can seem very human-like [1].
I expect AI will become the leading societal concern when it comes to unintended consequences. The pace of advancement is far ahead of our ability to reason about the potential effects of what we are building.
We will have completed many iterations before we have even a moment to review the feedback loop. Thus, we will likely compound many mistakes before we realize it.
I have no idea how it will slow down. Someone just figured out how to reduce the cost of building a multi-million-dollar model to around $600. That was supposed to take another decade.
I've spent a lot of time thinking about what the picture looks like in this advancement. I think we are going to trip over many landmines in this new tech gold rush. I've written my own perspectives on that here - https://dakara.substack.com/p/ai-and-the-end-to-all-things
The problem is this kind of discourse is coming almost entirely from people who haven't made a good-faith effort to understand the technology. You're engaging with Hollywood illusions that don't resemble reality. There are genuine concerns related to AI developments, but they have nothing to do with "alignment"; it's all the boring stuff like shifts in power dynamics and biases in decision making. Really important, but not exciting enough for a gripping pop narrative. Less basilisk, more... failing to assemble adequately debiased training datasets for your mortgage approval app, or realizing someone can actually identify certain individuals in an anonymized medical dataset.
If you find yourself feeling anxious about AI, spend time learning how modern transformer-based predictive and generative models actually work, from resources rooted in math, statistics, and computer science rather than doomer bloggers.
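To make that less abstract: the "generative" part is just next-token prediction run in a loop. Here's a toy numpy sketch of a single causal self-attention step feeding an autoregressive loop - the weights are random, so it illustrates the mechanism only, not any real model's code:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab, d = 50, 16                      # tiny vocabulary and embedding size
    E = rng.normal(size=(vocab, d))        # token embedding table
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    Wout = rng.normal(size=(d, vocab))     # maps hidden state back to vocab logits

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def next_token(token_ids):
        x = E[token_ids]                   # (seq, d) embeddings of the prompt so far
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(d)      # how strongly each position attends to the others
        mask = np.triu(np.full(scores.shape, -np.inf), k=1)
        attn = softmax(scores + mask)      # causal mask: no peeking at future tokens
        h = attn @ v                       # context-mixed representations
        logits = h[-1] @ Wout              # scores over the next token
        return int(np.argmax(logits))

    sequence = [3, 17, 42]                 # pretend prompt
    for _ in range(5):                     # autoregressive loop: feed predictions back in
        sequence.append(next_token(sequence))
    print(sequence)

Everything a deployed model does is a scaled-up, trained version of that loop, which is a much better basis for worrying (or not) than a movie plot.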
I've always wondered whether, in my lifetime, the exponential growth of economic and technological development would hit a velocity where one or both expands faster than I can even process the changes.
At this point all I have to offer is “strap the fuck in”
EDIT: I'm not even talking about some world-ending thing - even GPT-4 in its current form has me questioning how many of the comments I read are real - the thing can pass the Turing test in many cases!
> I have no idea how it will slow down. Someone just figured out how to reduce the cost of building a multi-million-dollar model to around $600. That was supposed to take another decade.
I don't think this is accurate. The Stanford team used LLaMA as the base model and fine-tuned it on instruction data generated with one of OpenAI's models - that fine-tuning step is what cost around $600. Nobody trained a GPT-like model from scratch for $600; the experiment piggybacked on the millions of dollars already spent training the larger base models.
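For context, the recipe was roughly: take a pretrained base model, generate instruction/response pairs by querying a stronger model, and fine-tune on them. A rough sketch of that recipe (the model id, file name, and field names below are placeholders, not the actual Stanford code):

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    base = "some-base-llm-7b"                       # hypothetical base model id
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.pad_token or tok.eos_token  # many base tokenizers lack a pad token
    model = AutoModelForCausalLM.from_pretrained(base)

    # instruction/response pairs generated by querying a stronger model (hypothetical file)
    data = load_dataset("json", data_files="generated_instructions.json")["train"]

    def to_tokens(ex):
        return tok(f"Instruction: {ex['instruction']}\nResponse: {ex['output']}",
                   truncation=True, max_length=512)

    data = data.map(to_tokens, remove_columns=data.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=3,
                               per_device_train_batch_size=4),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()  # this step is the cheap part; the base model underneath cost millions

The headline number only measures the last step, which is exactly why it's misleading as a claim about the cost of building these models.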
Am I the only one who thinks the AI doomsayers are even more ridiculous than the AI cheerleaders?
ChatGPT may be useful and so on, but dear god. It's still just a fucking chat bot.
We need to regulate AI and get better at building safe AI systems. Especially for things like facial recognition, self-driving cars, hyper-targeted advertising / engagement hacking, and the like. But "slow down" is such a ridiculous framing. The issues with these systems are obvious to everyone, and addressing those issues is a deeply technical and deeply difficult problem. We need to be throwing money at computer scientists to help build safety engineering tools and best practices. Hand-wringing over obvious faults by philosophers is not helpful, nor is burying our heads in the sand and going full Amish but with 2016 as our "just the right amount of technology" baseline.
I hope in five years we remember that the non-technical journalist class lost their god damned minds over what ultimately amounted to a mildly useful parlor trick.
> The issues with these systems are obvious to everyone, and addressing those issues is a deeply technical and deeply difficult problem. We need to be throwing money at computer scientists to help build safety engineering tools and best practices.
Doesn't this go hand-in-hand with slowing AI research down and focusing more on safety engineering and best practices?
I think everyone in the AI safety community is very, very much on board with "We need to be throwing money at computer scientists to help build safety engineering tools and best practices," but the current challenge is how to build the societal incentive structures to make that happen. Right now the overwhelming majority of prestige and money is in making AI more capable, not in safety engineering and best practices.
> Doesn't this go hand-in-hand with slowing AI research down and focusing more on safety engineering and best practices?
I don't see how this is a given. It could very well be the opposite.
Consider self driving, for example.
Better object detection makes the system safer, full stop. It's on the AI safety folks to keep up with building good analyses for SoTA. Suspending all future improvements to object detection until we can better understand fault models of existing systems could very well make everyone less safe.
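To be concrete about what "good analyses" might look like in practice, here's a made-up example: take logged ground-truth pedestrians and whether the detector found them, and report the miss rate by distance bucket. Field names and numbers are invented for illustration only:

    import pandas as pd

    # hypothetical detection log: one row per ground-truth pedestrian
    log = pd.DataFrame({
        "distance_m": [5, 12, 18, 25, 33, 41, 48, 55, 62, 70],
        "detected":   [1,  1,  1,  1,  1,  0,  1,  0,  0,  0],
    })

    log["bucket"] = pd.cut(log["distance_m"], bins=[0, 20, 40, 60, 80])
    miss_rate = 1 - log.groupby("bucket", observed=True)["detected"].mean()
    print(miss_rate)   # makes degradation with distance visible at a glance

That kind of boring measurement work scales with the state of the art; pausing the state of the art doesn't produce it.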
> but the current challenge is how to build the societal incentive structures to make that happen. Right now the overwhelming majority of prestige and money is in making AI more capable, not in safety engineering and best practices.
Slowing down doesn't address that problem at all. The answer here is regulation and accountability, which may or may not have the side-effect of slowing down deployments. But slowing down for the sake of slowing down is a non-solution unless you're a Philosophy major who watches too many movies.
You are underestimating the scale of the tectonic shift.
A computer program that can pass the god damned Turing test is not "a mildly useful parlor trick"; it is the single most impressive computer program ever made. It exhibits reasoning, it can think with analogies. You can give it complicated requests in natural language. Given suitable prodding, it's creative. Everyone just woke up to the fact that we're a sneeze away from AGI.
>The issues with these systems are obvious to everyone
They are not. Already people routinely ask ChatGPT for factual info, and it doesn't bother them that it will simply make things up. It walks and quacks like a duck, so people assume it's a duck.
>addressing those issues is a deeply technical and deeply difficult problem
Alignment is a deeply difficult problem for philosophical reasons - we don't know how to reliably "align" humans either - but getting LLMs to output "roughly" what we want is fun and easy (they're like virtual humans!) and they're going to be deployed everywhere, problems notwithstanding.
Things are about to change, big time. In the short term we're only talking about something like the smartphone revolution, where the parameters of social interaction are fundamentally reshaped. In the long term it could get real weird...
I think it's pointless arguing about "slowing down", the cat is out of the bag. I would just like to see some transparency rules about weights and prompts - we're rapidly reaching a stage where a company hosting a highly used language model could do extreme violence to culture and business. Language models aren't paperclip maximizers - corporations are paperclip maximizers, and language models are the HypnoDrones.
People used to anthropomorphize Markov chains like this. I've been through this rodeo before, at least a few times.
I did a little lab on GPT with middle schoolers as part of a science enrichment activity. Without any prompting from me, the entire class was making it say nonsense inside the one-hour session.
Try using a GPT model to do something humans actually do, other than random bullshitting on the internet. Field customer service requests. Even carefully SFT'd models struggle to beat decision trees - and are sometimes worse - at actually solving the customer's problem. They sound more human / less robotic, but who cares if the customer's problem isn't solved and the dialog quickly diverges into insanity?
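For anyone who hasn't worked on these systems, "decision tree" here just means a scripted flow that always terminates in a concrete resolution step - something like this toy (intents and actions are made up):

    TREE = {
        "question": "Is the issue about billing or the product?",
        "billing": {
            "question": "Refund request or wrong charge?",
            "refund": "Open a refund ticket and confirm the amount.",
            "wrong charge": "Escalate to billing support with the invoice id.",
        },
        "product": {
            "question": "Is the product damaged or just not working as expected?",
            "damaged": "Start a replacement order.",
            "not working": "Send the troubleshooting checklist, then escalate.",
        },
    }

    def handle_request(node=TREE):
        # walk the tree until we reach a leaf, i.e. a concrete action
        while isinstance(node, dict):
            answer = input(node["question"] + " ").strip().lower()
            node = node.get(answer, node)   # unrecognized answer: ask again
        print("Action:", node)

    handle_request()

Dumb as it is, every path ends in an action, which is the bar a generative model has to clear before sounding human matters at all.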
Just because you want to see God on a piece of toast doesn't mean that the average human has completely lost their capacity for critical thought.
I'd like to think this is just a fad (like blockchain, for instance)... but there's such tangible application for LLMs that I think, for once, the hype is justified.
The blockchain hype was about pitching a solution to every problem in an effort to get rich quick. The technology still has applications for verification and ultra-micro transactions. I expect it will be foundational before the end of the decade.
This is more a statement about how bad technology hype has gotten of late. Having "tangible applications" is a very low bar. If we paused for every technology that has "tangible applications" we'd probably be entering the bronze age any moment now.
Thank you, you said it better than I did replying to another comment, but I feel about the same. It's disheartening how many outlets we have on social media to engage with big exciting narratives that are just detached from reality, and now here's one more.
The arguments against AI echo those against nuclear technology. While nuclear tech has indeed been misused, it has also revolutionized medicine, agriculture, archeology, transportation, space exploration, and energy infrastructure—demonstrating the immense potential of technological progress.
Similar to nuclear tech, the issue lies not with AI itself, but with its malicious application. For example, AI-driven medical breakthroughs will save lives, while AI-based disinformation and spam will harm society. The root of the problem isn't AI, but rather, human intent.
Ultimately, it's our responsibility to harness AI's transformative power for good and prevent its misuse, just as we've learned to do with nuclear technology. Or in other words, it's not an AI problem; it's a human problem.
I think the comparison with nuclear tech is missing one crucial aspect: ease of access. Nuclear tech has been misused, and a sufficiently funded and motivated malicious actor could get their hands on it in some way to cause harm, but it is, for the most part, out of reach.
AI, on the other hand, is already being used by every hustler looking to make a quick buck, by students who can't be bothered to write a paper, by teachers who can't be bothered to read and grade papers, by every company that can get it to avoid paying actual people to do certain jobs... Personally, my problem is not with AI tech in itself, it's with how easy it is to get your hands on it and make literally anything you fancy with it. This is what a lot of the "AI for everything" crowd can't seem to grasp.
"Personally, my problem is not with AI tech in itself, it's with how easy it is to get your hands on it and make literally anything you fancy with it. This is what a lot of the "AI for everything" crowd can't seem to grasp."
It's easy to look at the negatives of a technology and ignore its positives, especially one like AI.
Great point. Though the issue still lies in human intent, not technology.
Shaking up traditional education methods, like paper writing and grading, can lead to more efficient learning and more free time, as demonstrated by MOOCs and online universities. Exponentially growing online spam and disinformation might become more obvious to people and push us, as a society, back toward more credible information sources. We might need to adjust tax laws for companies that employ AI, but it could have positive effects. I think it's too early to catastrophize, even if I am sure the technology will be used with malicious intent by some.
> In much of the world today, we observe the fraying of human connection and relationships due to a variety of factors [0], and this could be further accelerated by AI that can seem very human-like [1].
[0] https://thehill.com/blogs/blog-briefing-room/3868557-most-yo...
[1] https://www.thecut.com/article/ai-artificial-intelligence-ch...
Whether we should even pursue this direction of creating artificial companions to replace human connection is a whole other can of worms.
> If you find yourself feeling anxious about AI, spend time learning how modern transformer-based predictive and generative models actually work.
My reference has very little to do with alignment. It is mostly about all of the societal implications, biases and how AI will actually be utilized.
Nonetheless, the doomer viewpoint derives from AI researchers themselves.
> even GPT-4 in its current form has me questioning how many of the comments I read are real - the thing can pass the Turing test in many cases!
That will never go away now. The people who compared pre-2023 internet content to low-background steel were on the money.
We are about to enter a very disturbing era of unverifiable truth and reality. It is an untenable situation for societal order and stability.