While browsing YouTube, an AI-generated video appeared and I reflexively told my wife, “That’s AI—skip it.”
Yet I’m using AI-created illustrations for my graphic novel, fully aware of copyright and legal debates.
Both Copilot and art generators are trained on vast datasets—so why do we cheer one and vilify the other?
We lean on ChatGPT to rewrite blog posts and celebrate Copilot for “boosting productivity,” but AI art still raises eyebrows.
Is this a matter of domain familiarity, perceived craftsmanship, or simple cultural gatekeeping?
Personally, I have no time for gen-AI in pretty much any context, at least given the current landscape.
And plenty of people seem to accept, if not love, gen-AI art. I don't get it, but it's true.
> While browsing YouTube, an AI-generated video appeared and I reflexively told my wife, “That’s AI—skip it.”
My reflex whenever I encounter gen-AI output in any form: text, code, image, music, video, what have you. I find all of it mid in the best of cases, and usually think it's quite terrible. I regularly see posts of the form "you'll never believe this amazing AI-generated picture/video/paper/program," and when I check it out I feel like I'm taking crazy pills because I just don't see the magic.
Just my $.02, not inflation adjusted. You (and many others) may well feel differently.
I get a kick out of generating photos with family and friends in different styles like Play-Doh, Simpsons, Ghibli, etc. All of them like it, too. Maybe that's what people like, a very relatable use of the technology.
Like I’m the weird one because I take exception to someone just uploading a photo of me to some random AI service without my permission.
In general, when friends and family say they like something you made, they might not actually like it; they may say so because they enjoy interacting with you, are being polite, want to encourage you, or don't fully understand what you do but appreciate being included. Unless someone in that group is genuinely candid, you should be skeptical of their feedback on a piece of tech you're building (or an AI-generated image).
> I reflexively told my wife, “That’s AI—skip it.”
> Yet I’m using AI-created illustrations for my graphic novel
Aren't you worried people will skip your graphic novel?
I've used AI art a lot for tabletop RPGs. The level of actual creative control isn't great, even for what should be an easy case: one character in profile against a blank background. Even if you know how to use it well, you're wrestling the systems involved to try to produce consistent output or anything unusual. And that's fine for Orc #3 or Elf Lord Soandso, who are only going to be featured for fifteen minutes at a time and in contexts where you can crop out bad details or use low-effort color grading to get a unified tone.
But for a graphic novel? What? I can't imagine giving up that level of creative control, even as someone who sucks at actual drawing. You'll never be able to get the kind of framing, poses, and structuring you want, doubly so the second you want to include anything remotely original. It's about the absolute worst case for actually using these generation tools.
It's very obvious that they're AI generated, and the authors are typically upfront about it. I still feel a bit of an ick when I see them, and Patreon discussions for the creators I follow also have similar sentiments. Not sure if it's truly a tolerable use-case for AI, but thought I'd throw it out there.
Code assistants are used by programmers wanting to be more productive. Things that claim to replace programmers entirely get dissed. (But it's more "that won't work" rather than "that's not allowed", because, well, it doesn't work. Yet at least.)
AI-generated content is probably cheap spam, even though in theory it could be made by someone knowledgeable using the AI as a tool.
Things generated by an AI are lower quality than things made by someone competent... but depending on what you're doing, that might not matter.
Ultimately, it's me who's responsible for the end product and I accept that and review the code. But it's definitely been handy.
However, AI art generators in their current form may render all artistic jobs unlivable within 20 years. Learning to draw is one of the most time-intensive skills to master. A master's degree in CS is sufficient to secure a good job, but five years of experience in art makes you a "novice". AI art is just good enough to devalue art as a whole, making it an infeasible profession to pursue, as it's already near the minimum wage on average.
In 20 years, there may not be any new professional digital artists. All art will become AI art. Do we like that world? Cheap, corporate, lazy, with no sign of effort or dedication.
I want LLMs to go away as well, but at the very least, there will always be a market for real text, and always be people able to produce it.
Developers say LLMs are not good enough to replace programmers, but good enough to replace artists.
An artist can say the same: LLMs are not good enough to replace artists but they seem good enough to replace programmers.
However, AI image generation is immensely helpful when I want to do a painting. Before, I would find photos I liked and stitch them together, or try to imagine things. Now I can get an image much closer to the reference I want.
With code, my feeling is that we currently have to write far too much of it to express what we want. I can write a small bit of text to the LLM and it will fill out 75%+ of the code over multiple files, which I then just need to shape. So much of it is structure that needs repeating in variations. I don't have an answer, but it seems like something is missing from our tools, and LLMs are providing a bad imitation of what it should be.