> most artists love their work and do not want to outsource their passion to a machine that does this for them.
This was the money quote.
I love what I do (which tends to be programming, but I approach it as a craft), and I often do things by hand that others automate, because I love my work, and also the people that use it.
The mantra is "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes" - Joanna Maciejewska (https://x.com/AuthorJMac/status/1773679197631701238).
I think that this quotation captures the essence of the problem.
It is nice to have the boring things automated - be it proofreading, generating mundane pieces of code, and the like. The sad part is when the automation starts reaching the things we actually enjoy.
In programming (at least, for now), I feel that it is a net positive for my happiness - less time spent hunting simple omissions, checking StackOverflow, or checking the (not always so complete) documentation of changes between version 0.4 and 0.5 of some library.
Yet, in other fields, it may be different. I am not a translator, but I did something for fun a few times (e.g. https://p.migdal.pl/blog/2019/10/the-statues-by-jacek-kaczma...). Now, with proper prompting, one can get very far (e.g. https://p.migdal.pl/blog/2023/05/genesis-az-by-gpt). For the first time, it gets me some existential chills, as it is taking away the joy of a linguistic puzzle of matching content, mood, atmosphere, and rhymes. It used to be an intellectual challenge only a real human being could do.
At some point, even sparks of genius in breakthrough scientific discoveries will feel more like solving a crossword than something WE can add to the culture.
Laundry and dishes don't pay well. Art does, but only once in a great while, and AI gives you a ton of attempts for little cost. Since rapid iteration is the biggest strength of advanced computing, people just invented cultural pollution machines in the hope of striking it rich.
Sure, but many people who are bad at art and writing would like AI to make art and writing that's tailored to their tastes and are fine doing laundry and dishes since they're good at that and we have already automated 99% of it with dishwashers and dryers.
I think the missing piece is the generative AI being tied to a physical embodiment. My point is, we need the physical machine but we also need to work on this generative AI aspect to manipulate the machine. Unfortunately, making pictures and writing text seems to be the lowest hanging fruit of the Transformer AI era since we have significant datasets already available. Several companies are working on creating datasets to train transformers to apply to other tasks and I suspect we'll start seeing the results in a couple to a few years. Not sure how long before we have a consumer bot doing the laundry and dishes though.
AI is ages away from being able to emulate the passion and intuition behind music, writing, and art that humans have cultivated for centuries. Even now it can only imitate that passion, so it relies on theft from human intellect and reasoning as its very basis. This is why, in the long run, it will become apparent that by undermining the source of creativity - by ruining the ability of artists and writers to succeed at innovating new things - AI will eventually become bereft of new ideas and easily turn into a source of garbage: cheap, copycat, undesired output.
There is also the looming risk that current creators may well turn towards corrupting their work specifically to hinder the corrupt idea theft currently occurring with AI profiteers.
Well I don't think OP persuaded the AI start-up to pivot to laundry services. The start-up is going after the low hanging fruit, things easy to do, and no one likes to be thought of as easily replaceable least of all artists.
I wonder to what degree generative AI came before, e.g., an AI that does your taxes (as in the article), because it's hard to be wrong, creatively. If I ask an AI for a painting of a dog commanding a fleet of starships, I'd be hard pressed to define, specifically, an incorrect result. There's wiggle room. Some results are more wrong or right than others, but I don't know that I'd ever hand back a grade of 0. And even if I did, so what? The stakes are low. Try again.
On the other hand, doing your taxes wrong is a thing. You can enumerate the possible mistakes. They're codified in law[0]. And these mistakes have big stakes.
Generative AI is "safer" than an AI to do a specific task that has consequences.
[0] Yes, yes, I know, there's wiggle room here too and that's why we have courts
I think the problem is that AI is mostly created by engineers who have no clue about 'art'; all they know is how to automate and optimise for the sole purpose of efficiency, which is why AI is eating creative fields - including, ironically, programming, as there is an art to coding as well.
Most artists, like most programmers are working in a commercial capacity. I see no reason why "their passion" should overrule profit-seeking.
Do art for the sake of art on your own time. But expecting companies to not embrace AI to do their jobs is foolish just like it's foolish to think corporate programmers will be a thing at some point in the future.
I suspect most artists (and I used to be one, at one time[0 - 1]) would take advantage of AI tools. I suspect that the posting was really just exhibitionism on the part of the site owners (I really wouldn't bother with that kind of thing, myself).
But I think that telling people to do a good job "on their own time" kinda sums up the zeitgeist of today's tech industry.
[0] https://littlegreenviper.com/art/Cavalier.png
[1] https://littlegreenviper.com/art/Sentinels.png
> I see no reason why "their passion" should overrule profit-seeking.
Companies can and will do whatever they want.
However, on the level of the individual, I see no reason why profit-seeking should overrule my passion. I don't code to make money, I code because I love doing it. I make money to allow myself to engage in an activity I find rewarding, I don't engage in that activity to make money.
If that activity is no longer rewarding, there is no amount of money that can compensate for that.
Perhaps you're right, and that the future has no place for people like me. We'll see. Regardless, though, I'm not going to participate (either as a dev or as a customer) with processes that dehumanize people to such a degree.
> Most artists, like most programmers are working in a commercial capacity. I see no reason why "their passion" should overrule profit-seeking.
The obsession that everything must go in a financial box or be relegated to the increasingly diminished "free time" available to oneself is a cancer that is destroying our society.
Things that are profitable =/= things that have value to people. They often coincide but it is not a direct correlation. Tons of things that make absolutely unethical amounts of profit have zero value to society. Conversely some of the most valuable things in human experience have zero monetization, in practice or theory.
> Most artists, like most programmers are working in a commercial capacity. I see no reason why "their passion" should overrule profit-seeking.
Ding ding ding.
I think that we're already experiencing all the downsides of AI-generated content. Look at high-budget movies - they're completely bland, tell nothing, and are optimized only for getting money out of the average uneducated customer. Ubisoft is known for making all its games according to the same template that works and makes money. Mobile gaming lasted three entire months before it was overtaken by the profit-driven.
Some time ago I went to an expensive concert, and I realized... those guys got loaded by playing literally the same songs they've been playing for twenty years. They found the golden formula and they keep on riding it.
Turning to AI only cuts out the middleman.
I turned towards independent creators because those guys have a passion and want to share it. So not all hope is lost. It's just that true artists are going to be the minority, just like they always were, while the rest will keep slurping up the commercial slop, just like they always have.
Created work, even in a commercial capacity, is protected IP. Having AI train on those works for a publicly consumed LLM may not even be in the best interests of profit.
The law hasn't evolved yet either. What if, in the future, people bring special tradecraft to companies that they don't want leaking elsewhere? Who knows - the door is still open.
> just like it's foolish to think corporate programmers will be a thing at some point in the future.
I don't think corporate programmers are obsolete. How will a sophisticated AI programmer in the future be any different than a human programmer in an outsourcing firm?
Curious what you think of intellectual property law in general. Trade dress, trade marks, written words, lyrics, distinctive visual styles, processes, melodies etc are all protected under various IP laws.
The laws did not anticipate AI models hoovering up all known IP without any permission from or compensation to its creators, but that does not mean that we cannot correct that and it does not mean those creators cannot or should not be able to assert rights and be compensated for violations.
I think your comment is emblematic of the very divide between Silicon Valley and the rest of the world that leads to frustration like what the artists here are expressing.
No one becomes an artist except because of passion for the work. It sure as hell isn’t for the money. That’s not so universally true in Silicon Valley, and I’m guessing you’re one of those people who views their job mostly as a means to fund the rest of their lifestyle.
The counterargument is that AI lets those without the skills take part too.
AI can help someone bring their musical ideas to fruition, draw pictures, or write stories.
There is still creative direction from a human - and now, rather than requiring a highly skilled human, it can come from almost anyone. In a way, AI brings creativity to the masses who aren't lucky enough to have the time to hone skills that would take thousands of hours to learn any other way.
More people would be "lucky enough to have the time to hone skills" if investment in AI were directed towards solving actual problems.
As a former professional musician, let me tell you: the skills are not the problem here. People don't not make art because they don't have the skills. They don't make art because they have nothing to say, nothing to express.
The skills are a byproduct of having a worldview, a perspective. And unfortunately (or perhaps fortunately) AI can't help anyone with that.
Alternative viewpoint - some creative people don't enjoy the actual physical act of making a creation a reality. For me, I don't particularly enjoy programming, but I love building things. I had hoped that AI would allow me to build things in a way that skips past or trivializes the stuff I don't like (it hasn't, because I'm still a much more competent programmer than "AI" is).
With art, though - I have some disabilities I don't feel the need to mention here, and am not particularly gifted at drawing or making art. I love making comics but I can't draw. AI allows me to make things like that now. It's been fantastic for someone like me. Although I completely understand the arguments presented here by people who create for a living, I do think this is inevitably a losing battle, because creating visual stuff is something this current iteration of generative AI seems to be pretty decent at.
I'd also like to see the results of hand-tracing of variable values done separately by the developer for each copy. Maybe have several developers do it as an extra check.
I think that the way people are mostly interacting with generative AI is not as a producer but as a _consumer_. They describe what they want, the AI makes it, and they get some enjoyment out of it, then discard it. Maybe they share it with a handful of people, but I think only a tiny percentage of people interacting with Chatgpt or MidJourney are doing anything creative or think of themselves as artists and _that's fine_.
I don't think AI has made me enjoy programming less. If anything it has made me love it even more. The biggest benefit for me is that it has helped me with procrastination. Instead of "I'll do it tomorrow" I will just ask the AI when I need to do something I don't feel like doing.
This feels like back in the day when photo editing and publishing software started getting big. "True artists" would shun anyone using those tools as amateurs. And for the most part they were right. It was mostly used by amateurs to create stuff that didn't look very good.
That is where we are with AI art right now. Most of it is garbage created by people who don't know what good art is.
But just like back in the day, a few professionals decided to adopt the new tools instead of complaining about them. And all of a sudden they were creating good art much faster than their competitors. And then those tools simply became the tools of the trade.
This is where AI art is going. It will be a tool in the artist's toolbox, just like Photoshop is. A great artist will use AI to do most of the work, and then add their professional taste and talent to make it great.
The smart artist is learning how to integrate AI into their workflows.
* And I'm including software engineers here as well. The smart engineer is incorporating copilots into their software development workflows.
The difference is that everyone thinks they're an artist now, skipping pesky things like talent or effort. GenAI is the ultimate tool for people who have the creativity of a potato.
Truth is that no one has the luxury to make that choice because everything is seen through the lens of productivity. It's the productivity that makes making art faster "good".
It sucks so hard that the choice is: either get on board or become irrelevant.
People have every right to complain about art, for many reasons. It's not comparable to Photoshop.
> Storytelling is personal. It is a connection between the writer and the reader. Without this personal connection, storytelling loses its purpose.
I read a book many years ago called If this is your land, where are your stories? (https://www.amazon.com/Where-Stories-Finding-Common-Ground/d...). One of the sections described how a group of native people from western Canada successfully reclaimed some of their land by telling stories in court. They lost their case in a lower court, because the court ruled that they were just telling stories. A higher court ruled that their people's stories, while not always factually correct, do lay claim to the land they lived on.
Everyone has a story. It's a confusing world where machine-generated stories can be indistinguishable from actual human stories.
Trust in authenticity is going to be a valuable asset going forward; I appreciate people sharing responses like this. It's easy to be cynical and say this response won't change anything, but if the people at Muse and companies like it keep getting these responses, some of them will realize the emptiness of what they're doing and move on to more meaningful work.
The suggestion you can do "storytelling without the need for personal content creation." says a lot about how that company views content. Content is a means to make money, it's not about telling a meaningful story or conveying information, it's about eyeballs and revenue.
It would be tricky to stop AI looking at content and making something inspired by it, as humans have been doing that as a basic occupation for centuries. You'd have to differentiate between a human looking at stuff and then producing art and a machine doing it - but what if a machine and a human work together? I'm not sure how you'd differentiate legally.
Always worth remembering as we watch these cases go through the system that "legal" and "moral" (or even "benefits society") are not the same thing. They are usually intended to be the same thing, but the rule of law will never fully capture the intent, especially as time passes.
> Artists’ work has been harvested in order to train large language models
The idea of publishing something these days has become quite unappealing. For me, there's something dehumanizing about being used as LLM training data. Not that my thoughts are unique or interesting or special. But, man...
Years ago I went to a BBQ cooking class run by the owner of a small, one-location restaurant in Chicago. He said that Sysco and US Foods reps called him every single day to talk about outsourcing his pulled pork and brisket to them.
This feels oddly similar, except there's only a couple Sysco's and US Foods' and probably thousands of AI Content Farm startups at this point.
I'm curious how developers' sentiment towards code generation tools like Copilot compares to artists' sentiment towards art generation tools like MJ/SD/DallE.
As a developer myself, I want to see coding as a creative act and feel uneasy about the idea of relying on a tool that makes the code one tab away. But I also recognize most of the coding we do is in a commercial context where artistic expression is less relevant.
My main concern about Copilot use is that, unless you're in a race to quickly produce a minimum viable product, it doesn't optimize the hard part. The initial coding is a very small piece of the work.
Code has to be designed, maintained, and extended, and for supportable code we want to have someone on the team that understands it well. If we want the code to be flexible, we want design decisions to appear in as few places as possible, ideally one. Copilot as an auto-complete, where it just helps fill in function names and the like, is fine and helpful (but IDEs do most of that already). But if it's used to write larger code chunks we quickly have the problem that there are code sequences no one on the team understands (it passed the tests though, yay!?) and we wind up with bloat from a lot of copy-paste like code.
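A minimal sketch of that failure mode, with invented names purely for illustration: the same design decision (a length limit) pasted into several autocomplete-style functions versus stated in exactly one place.

```python
# Copy-paste style: the limit 80 is restated in every generated function,
# so changing the design decision means hunting down each copy.
def validate_title(title: str) -> bool:
    return len(title) <= 80

def validate_heading(heading: str) -> bool:
    return len(heading) <= 80

# Factored style: the design decision appears exactly once.
MAX_SHORT_TEXT = 80

def validate_short_text(text: str) -> bool:
    return len(text) <= MAX_SHORT_TEXT
```

Both versions pass the same tests today, which is exactly the trap: the bloat only hurts later, when the limit changes and someone misses a copy.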
This is universally true of Generative AI's impact on all art and craft.
It's a Dunning-Kruger thing. It makes the stuff that a novice thinks is hard apparently easier, but does not ever tackle the difficult stuff. So that novice is actively disadvantaged compared to how they would be if they worked with an actual artist to make what they wanted.
I'm both an artist and a programmer, and indeed my opinions about them are very different. As a disclaimer, I work as a sysadmin, so both activities are detached from my earnings. The artist opinion is a lot more abstract and feeling-based, while the programmer opinion is more pragmatic.
Also note that I condemn anyone using AI for "evil". Anyone using it to deceive or harm is despicable and just makes things difficult for everyone.
---
As an artist I see stuff like Stable Diffusion as very interesting - a fun toy, or a potential tool to unclog your brain when you need ideas, and much faster than something like Pinterest, especially with niche topics. Its ability to randomly combine concepts can be a starting point for inspiration as well.
I usually like to draw robots, which aren't a common thing in popular media anymore (at least not in the aesthetic I favor, think Armored Core) so having the AI generate a bunch of random robot slop is a good way to get my brain "in the zone", maybe stuff like "oh that pose is cool" or "I can try something with this type of joint". Or even "wow this is a hot mess but the way it placed that odd shape in the arm gives me an idea for a weapon" and let my brain juices flow and do the rest. If nothing else it can help you discard things that don't work.
However, I wouldn't let it replace my work, because the process of designing and drawing is what's fun to me. My favorite subject also requires a degree of consistency the poor thing just can't pull off even with assistance, especially when animating said robots with all the moving parts and stuff that splits and folds, so I'm on my own anyway.
Double however, the idealist in me also believes that anyone that can produce a fine piece, by whatever means, by putting effort and finesse with a tool IS an artist. I have seen some people intentionally working with the tools to do stuff and the results can be impressive. This however disqualifies most AI art you might be familiar with, because most stuff spammed in the internet is stuff that anyone (on the know) can tell is just default settings slop, done in batches with no soul, or even intent to deceive.
The process I'm talking about involves multiple hand-made edits, knowing the quirks of the system and knowing what to do. I've seen it used, for example, by TTRPG nerds (not an insult) to generate images tailored for a scene in campaigns. Those people have imagination and a sense of aesthetics but no art skills, and the way they put together a full scene by editing, making multiple passes, repainting areas until it looks right, training the characters involved... is something I can respect because it's coming from a place of passion and is sincere.
I guess intent matters. Some people are just trying to do fun stuff; others are trying to scam you, putting out basic slop while asking you to subscribe to their Patreons as if they were real artists, or making deepfakes. I'll always side with the former out of principle, even if they are a minority overshadowed by the shenanigans of the latter.
---
Now, as a programmer I see Copilot with utter disdain. I know for a fact that LLMs are extremely prone to mistakes, inconsistent, and unreliable. Code can easily have hidden gotchas that only an expert will notice. The consequences of people who have no idea what they are doing asking for code from a thing that has no idea what it's doing can have far-reaching effects - in the worst-case scenarios, severe time, data, or money loss. Or even death. I'd even question its use by seasoned programmers, because everyone can have a bad day and overlook a mistake.
I got hit up over several emails by a startup doing AI recruiting. I never replied back because it was hard to tell if it was actually the CEO reaching out to me because of my experience, or his AI system reaching out to me.
I thought, if I’m having this sort of issue figuring out if the contact is genuine, how will I feel working on this product? Even if it is genuine, is it ethical to have been recruited by a real person to work on a product that fools people into thinking they’ve been contacted by a real person?
A couple of years back I got a message on LinkedIn from Andrew Ng and I was blown away, then I quickly realized that this account was probably managed by a team of recruiters. I responded positively but mentioned some ML 101 concepts that could be applied to the problem, and the inability to engage confirmed my suspicion. So to some degree, deception has always been part of the game.
> But expecting companies to not embrace AI to do their jobs is foolish just like it's foolish to think corporate programmers will be a thing at some point in the future.
This viewpoint is what fucked up the world.
Money is not everything.
How do we know this?
Leaving us more time to focus on what matters: work.
Ha ha.
Artists are petty.
It's all rather terrible.
https://www.theregister.com/AMP/2024/07/08/github_copilot_dm...
I do sympathise with the creatives on an emotional level but it’s looking like the laws won’t
You can't afford amateur hour in these cases: things like not requesting emails in the right way, or offering arguments the judge easily swats away.
The thrust of their case is important, but if they're fumbling the ball they're going to get destroyed on procedure.
The idea of publishing something these days has become quite unappealing. For me, there's something dehumanizing about being used as LLM training data. Not that my thoughts are unique or interesting or special. But, man...
This feels oddly similar, except there are only a couple of Syscos and US Foods, and probably thousands of AI content farm startups at this point.
As a developer myself, I want to see coding as a creative act, and I feel uneasy about the idea of relying on a tool that puts the code one tab away. But I also recognize that most of the coding we do is in a commercial context where artistic expression is less relevant.
Code has to be designed, maintained, and extended, and for supportable code we want someone on the team who understands it well. If we want the code to be flexible, we want design decisions to appear in as few places as possible, ideally one. Copilot as an auto-complete, where it just helps fill in function names and the like, is fine and helpful (but IDEs do most of that already). But if it's used to write larger code chunks, we quickly run into the problem that there are code sequences no one on the team understands (it passed the tests, though, yay!?), and we wind up with bloat from a lot of copy-paste-like code.
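A toy sketch of the "design decisions in one place" point, with made-up names and a made-up discount rule. When a decision like a pricing factor is pasted everywhere, changing it means hunting down every copy; kept in one function, it changes in one place:

```python
# Copy-paste style: the 0.9 discount factor is duplicated, so changing
# the rule means finding and editing every occurrence.
def invoice_total_bloated(prices):
    return sum(p * 0.9 for p in prices)

def quote_total_bloated(prices):
    return sum(p * 0.9 for p in prices)

# Single point of decision: the rule lives in exactly one place.
DISCOUNT = 0.9

def apply_discount(price):
    return price * DISCOUNT

def invoice_total(prices):
    return sum(apply_discount(p) for p in prices)

def quote_total(prices):
    return sum(apply_discount(p) for p in prices)

print(invoice_total([100, 200]))  # 270.0
```

Auto-completed snippets tend to look like the first pair: locally plausible, but quietly multiplying copies of the same decision across the codebase.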
This is universally true of Generative AI's impact on all art and craft.
It's a Dunning-Kruger thing. It makes the stuff that a novice thinks is hard seem easier, but it never tackles the genuinely difficult stuff. So that novice is actively disadvantaged compared to how they would be if they worked with an actual artist to make what they wanted.
It is dumb as rocks, so it never gets anything complicated. But, like Excel's autofill, it is good at filling in a bunch of stuff given a pattern.
---
As an artist, I find stuff like Stable Diffusion very interesting: a fun toy, or a potential tool to unclog your brain when you need ideas, and much faster than something like Pinterest, especially with niche topics. Its ability to randomly combine concepts can be a starting point for inspiration as well.
I usually like to draw robots, which aren't a common thing in popular media anymore (at least not in the aesthetic I favor; think Armored Core), so having the AI generate a bunch of random robot slop is a good way to get my brain "in the zone": maybe "oh, that pose is cool" or "I can try something with this type of joint". Or even "wow, this is a hot mess, but the way it placed that odd shape in the arm gives me an idea for a weapon", and then I let my brain juices flow and do the rest. If nothing else, it can help you discard things that don't work.
However, I wouldn't let it replace my work, because the process of designing and drawing is what's fun to me. My favorite subject also requires a degree of consistency the poor thing just can't pull off even with assistance, especially when animating said robots with all the moving parts and stuff that splits and folds, so I'm on my own anyway.
Double however, the idealist in me also believes that anyone who can produce a fine piece, by whatever means, by putting effort and finesse into a tool IS an artist. I have seen some people intentionally working with the tools to do stuff, and the results can be impressive. This, however, disqualifies most AI art you might be familiar with, because most stuff spammed on the internet is stuff that anyone (in the know) can tell is just default-settings slop, done in batches with no soul, or even intent to deceive. The process I'm talking about involves multiple hand-made edits, knowing the quirks of the system, and knowing what to do. I've seen it used, for example, by TTRPG nerds (not an insult) to generate images tailored for a scene in their campaigns. Those people have imagination and a sense of aesthetics but no art skills, and the way they put together a full scene by editing, making multiple passes, repainting areas until it looks right, training the model on the characters involved... is something I can respect, because it's coming from a place of passion and is sincere.
I guess intent matters. Some people are just trying to do fun stuff; others are trying to scam you, putting out basic slop while asking you to subscribe to their Patreons as if they were real artists, or making deepfakes. I'll always side with the former out of principle, even if they are a minority overshadowed by the shenanigans of the latter.
---
Now, as a programmer, I view Copilot with utter disdain. I know for a fact that LLMs are extremely prone to mistakes, inconsistent, and unreliable. Code can easily contain hidden gotchas that only an expert will notice. People who have no idea what they are doing, asking for code from a thing that has no idea what it's doing, can have far-reaching consequences: in the worst-case scenarios, severe loss of time, data, or money, or even death. I'd even question its use by seasoned programmers, because everyone can have a bad day and overlook a mistake.
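For a concrete flavor of "hidden gotcha that only an expert notices", here's a classic Python pitfall of the sort that plausible-looking generated code can contain: a mutable default argument. The function names are made up for illustration. It looks correct and passes a single quick test, but state leaks between calls:

```python
# Buggy version: the default list is created ONCE, at function
# definition time, and shared across every call.
def append_item_buggy(item, items=[]):
    items.append(item)
    return items

# Correct version: create a fresh list inside the call when none is given.
def append_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item("a"))  # ['a']
print(append_item("b"))  # ['b']
# The buggy version accumulates state across calls instead.
```

A reviewer who knows the language spots this in seconds; someone leaning on a tool to write code they can't read won't, and the bug only surfaces on the second call.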
I thought, if I’m having this sort of issue figuring out if the contact is genuine, how will I feel working on this product? Even if it is genuine, is it ethical to have been recruited by a real person to work on a product that fools people into thinking they’ve been contacted by a real person?