This was a weird list. I couldn’t make it through the whole thing.
Specifically, the bullet about Spotify has nothing to do with AI. Spotify can optimize for longer songs now just as it could years ago. Data analysis and algorithms that alter human behavior are nothing new.
The other bullet, about brand attachment being replaced with a personal AI agent that knows everything about you, seems weird as well. I've not seen any LLM understand personal preferences beyond the exact words that were written, and I don't see a world where transformer-based LLMs make consistent and intuitive judgment calls that gain consumer trust over time. It's very AI-maximalist, which is not a mindset I subscribe to (at least not with the current generation of statistics-based AI).
There are a lot of predictions in this article I'd happily call dystopian, but this quote sticks out to me:
> As we gain trust in the guidance of agent-assisted experiences, will the impact of brand, referral, and relationships in purchase decisions be diminished?
I can't really put my finger on why, but thinking about the mental frame of reference you'd need to write a question like that makes me very wary.
Because... I'm not honestly sure which option sounds more dystopian to me.
Following our AI overlords sounds dystopian, but having a personalizable automated agent I can train and control that can eliminate guesswork and undesired influences sounds good.
Having trusted friends provide advice sounds great, but the reality of brand and advertising impact, clueless people offering unexamined advice, and the fake relationship between an incentivized salesperson and a consumer also sound (and are!) awful.
I know my brain is weird sometimes :), and I generally have no internal taboos against asking any kind of question, so I wonder what you find troubling about the mental frame of reference that wrote the question, as it doesn't trigger anything for me. Do you feel they're overly optimistic about AI's ability to provide helpful assistance, or about humans coming to rely on it, or overly optimistic about current methods of making purchasing decisions? Or something completely different? :)
> having a personalizable automated agent I can train and control that can eliminate guesswork and undesired influences sounds good
How do you ensure it eliminates undesired influences, though? Say, for example, you need to find a new brand of dish soap because the one you were buying has become unacceptable for whatever reason: how is your agent going to compare the alternatives? Unless you have carefully programmed it with pre-checked dish soap data, it's going to need to get that data from all the manufacturers, and they are going to try extremely hard to work out ways to game the system. Maybe brand X describes their soap as "sumptuous and delicious" and the AI goes "hey, that sounds great! Those are the kind of words my human loves!"
Until LLMs have evolved to the point that they're not susceptible to the grandma exploit, trusting them to remain sane and safe in the presence of outside input is a really bad idea, in my view.
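To make the gaming scenario above concrete, here's a minimal sketch of that failure mode, with every brand name, description, and the scoring heuristic invented for illustration: an agent that ranks products by scoring the manufacturers' own marketing copy against its owner's stated preferences.

```python
# Hypothetical sketch: a naive shopping agent that ranks dish soaps by how
# well each manufacturer's marketing copy matches the owner's preferences.
# All names and copy are invented; the point is that whoever writes the
# copy controls the only signal the agent sees.

PREFERENCES = {"gentle", "unscented", "eco-friendly", "sumptuous", "delicious"}

# In practice the agent would scrape these descriptions from manufacturers,
# who are free to stuff them with whatever words score well.
SCRAPED_COPY = {
    "Brand X": "A sumptuous and delicious eco-friendly experience in every drop.",
    "Brand Y": "Unscented dish soap. Cleans dishes.",
}

def score(description: str) -> int:
    # Count preference words in the copy. Trivially gameable, since the
    # party being judged chooses the words being counted.
    words = {w.strip(".,!").lower() for w in description.split()}
    return len(words & PREFERENCES)

def recommend() -> str:
    # Brand X wins on keyword-stuffed copy, not on soap quality.
    return max(SCRAPED_COPY, key=lambda brand: score(SCRAPED_COPY[brand]))

if __name__ == "__main__":
    print(recommend())  # -> Brand X
```

An LLM-based agent is subtler than a keyword counter, but the incentive structure is the same: the text it judges is authored by the party being judged, which is exactly the opening that prompt injection (and the grandma exploit) relies on.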
Can you elaborate? What is the mental frame of reference you refer to? I'm not saying I'd disagree with you; I'm just curious to understand your meaning.
The proliferation of component content systems (CCS), i.e. the practice of reusing modules of prefabricated content among technical writers, will finally, hopefully, get murdered by large-scale adoption of LLMs in the technical publication space.[0]
Unfortunately, training LLMs on CCS modules goes really, really badly, because they're not natural language. They're a milkshake of formal language - with keys described in external and secret places - and natural language, often with some fun SGML character artifacts[1] thrown in to scramble the brains of my text digesters.
The real crap of it is, for these reasons and more, the LLMs will find the CCS content more or less useless. At the end of the day, we'll just train 'em on the outputs - the actual usable documents. Which makes aaaaalllllll that CCS effort not just wasted, but an active impediment.
It's a bit funny, in that I get most of my money from implementing these CCS systems, but at heart I feel like, 95% of the time, they're a very bad, terrible idea. Of course, the way the winds are blowing from the USN CIO and other bodies, it's entirely possible that ALL technologies classified as "AI" (by someone?) could be banned not just from USN metal, but from being installed on prem by any contractor. Now... if THAT is the final policy decision, well, I dunno. I literally have no idea what they're thinking in that case. It would be pretty dumb, even for the defense business. It's also kinda useless; people have been using backpropagating neural networks for a while already in fault trees, image analysis, and a million other things... how they're gonna roll that back is a mystery.
[0] Weirdly, I don't think tech writers will. Not all of them. They're deciphering new systems a lot of the time, and I don't see the models as being more than 70% accurate in a lot of those cases, because often no one really knows what the system even is when the writer starts writing. But yeah, a lot of writers will get canned quite easily. That... that might be my fault, a little bit.
[1] Or worse. "Oh, good, null characters everywhere! I am so glad PTC loved these so much they put them literally everywhere". That's been a fun little excursion.
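As a rough illustration of why raw CCS modules make such poor training text, here's a hedged sketch assuming a DITA-style system; the tag names, key map, and module content are all invented, not any particular vendor's format. Part of the meaning lives in keys resolved from an external file the model never sees, plus the stray null characters mentioned in [1]:

```python
import re

# Illustrative only: a DITA-ish module where part of the meaning lives in
# keys resolved from an external key map, with stray null characters left
# behind by the authoring tool's export.
RAW_MODULE = (
    '<task id="t-replace-filter">\x00'
    '<title><ph keyref="prodname"/> filter replacement</title>'
    '<step>Power down the <ph keyref="prodname"/> before opening '
    'the <ph keyref="panel"/>.</step>\x00'
    '</task>'
)

# The key map normally lives in a separate file the model never sees.
KEY_MAP = {"prodname": "Acme X200", "panel": "rear access panel"}

def resolve(module: str, keys: dict) -> str:
    # Strip nulls, resolve keyrefs, and drop the remaining markup:
    # roughly what a publishing pipeline does to produce the output docs.
    text = module.replace("\x00", "")
    text = re.sub(r'<ph keyref="([^"]+)"/>',
                  lambda m: keys.get(m.group(1), "???"), text)
    return " ".join(re.sub(r"<[^>]+>", " ", text).split())

print(resolve(RAW_MODULE, KEY_MAP))
# -> Acme X200 filter replacement Power down the Acme X200 before opening
#    the rear access panel.
```

Train on the raw module and the model learns tag soup and unresolved placeholders; train on the resolved text and you've just recreated the rendered output, which is the point above: the usable training data is the published documents, not the modules.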
I copied the article into ChatGPT and told it to give me all of the insights in a concise format.
Ironically, my use fits into one of the insightful points from the article, as per ChatGPT:
"Reduced Brand Influence: AI will reduce the impact of brand and marketing as AI agents provide more objective purchase advice, challenging traditional marketing strategies."
ChatGPT took away the potential brand influence Strange Ways might have earned had I read the entire article.
Although, if there had been something specifically insightful about Strange Ways AI in the article, I think GPT would have picked it up and included it, but it didn't.
Now I kind of want a startup called strangeways.ai