The primary audience here isn't average people, or even engineers at the company. The primary audience is investors - the idea is to trick idiots into dumping their money into this overvalued company rather than the infinite other ones that also claim to solve groundbreaking problems with AI.
In the past, it was hiring (or at least the appearance of hiring). But that's an expensive signal and not sustainable for long in a post-ZIRP period. On the other hand, bullshitting about AI (and before that, blockchain) is much cheaper in comparison, and seems to work just as well without actually needing to hire anyone or even pretend to.
Whether AI is actually used, or helps the bottom line, is irrelevant (it's not possible to conclusively say whether a piece of code was authored by AI or not, so the narrative will be tweaked as necessary to fit the market conditions at the time).
Meanwhile all of your technical employees discover you are an absolute clown of an executive and the entire middle management corps is feckless in the face of the slightest amount of pressure.
Yup, I realized Tobi is a massive dum dum when he had Tobi.eth in his Twitter profile. Even Garry Tan had it, and as an aside, that is why I'm scared for YC's future. All of those folks were on an easy blacklist for me on Twitter. They lack original thought and are sheep-like in their thinking.
Their actual AI is trash; it's just there to pad their resume for investors.
A few days ago I tried their AI assistant in the help center and admin, and it hallucinated an answer to my question (pointed me to a non-existent setting). When I told it the setting doesn't exist, it just returned a slightly reworded answer, still pointing me to the non-existent settings section.
> We will add AI usage questions to our performance and peer review questionnaire
Not kidding, but I'm actually afraid people will check AI usage and start nagging us that:
> "You are slow because you don't use enough AI. Look at Tom (single 28 yo guy working every weekend on Adderall), he uses AI and he is fast. Gonna add a note to check up on your AI usage metrics in a month, hopefully it will improve".
Our company has Cursor, which I sometimes use, but 1. for lots of tasks, the most precise language is the programming language, and 2. I don't love it; I prefer other editors, and I go for in-browser search + AI.
If this letter were published by my CEO, I would either 1. ignore it, as CEOs are often out of touch when it comes to actual day-to-day work, or they need to jump on the AI train to show how visionary they are even if they are aware of the limitations, or 2. start looking for a job, because honestly, today it's a letter like this; in 3 months, you get a negative performance review because you didn't send enough queries to Cursor.
> for lots of tasks, the most precise language is the programming language
This is my problem with AI, or "vibe coding" or whatever you want to call it.
We already have many languages for expressing what we want the computer to do, and I find that natural language is the most difficult one for accomplishing that goal.
Sure, LLMs might be able to whip up a fancy landing page and some basic API really quick, but when it comes to solving harder problems I find it incredibly frustrating and difficult to try and prompt in English, when the programming language I'm working in is already well suited to that task and is much more explicit in what I want.
Maybe I'm using it wrong, but it's way less cognitive overhead for me to just type for x, y := range something {} than it is to try and prompt "Iterate over this list of values...etc."
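For concreteness, here is the kind of range loop the commenter is typing out, wrapped in a small helper with a made-up slice for illustration; it really is shorter than any English description of it:

```go
package main

import "fmt"

// visit records the index/value pairs a range loop touches:
// the programmatic version of "iterate over this list of values".
func visit(something []string) []string {
	var out []string
	for x, y := range something {
		out = append(out, fmt.Sprintf("%d=%s", x, y))
	}
	return out
}

func main() {
	fmt.Println(visit([]string{"a", "b", "c"})) // prints [0=a 1=b 2=c]
}
```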
I've found that the only prompts that actually work for generating code reliably are the ones where you already know exactly what code it will output -- where nearly every part of the "how" is precisely clear, it just needs to be compiled from human language into code. Writing this perfect prompt often turns out to be a waste of time. It's a classic specification problem. Human languages will let you under-specify a problem, but the result will not be pleasant.
> Maybe I'm using it wrong, but it's way less cognitive overhead for me to just type for x, y := range something {} than it is to try and prompt "Iterate over this list of values...etc."
I'd say that prompting "Iterate over this list of values...etc." is definitely using it very wrong (autocomplete should more or less handle that sort of thing anyway). Prompts should be more in line with "write a C++ function that can parse XML files that look like this (upload a few sample files) and return the data in this struct (copy and paste struct from your header file)" followed by "write a set of unit tests for this function". You then check that the unit tests look reasonable, add any other things you feel you should test for, make sure the generated code passes the unit tests, and then check it in.
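The shape of that workflow can be sketched as follows, in Go rather than C++ for brevity; parseAge is a stand-in for whatever function the model generated, and the cases map is the part you author and eyeball yourself before trusting anything:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseAge is a stand-in for model-generated code: it extracts the
// number from a trivial <age>NN</age> fragment.
func parseAge(fragment string) (int, error) {
	s := strings.TrimSuffix(strings.TrimPrefix(fragment, "<age>"), "</age>")
	return strconv.Atoi(s)
}

func main() {
	// Human-reviewed test cases: check these look reasonable, add edge
	// cases the model skipped, and only then check the code in.
	cases := map[string]int{
		"<age>42</age>": 42,
		"<age>7</age>":  7,
	}
	for in, want := range cases {
		got, err := parseAge(in)
		fmt.Println(in, "->", got, err == nil && got == want)
	}
}
```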
Use AI to generate AI prompts while you continue on with your normal work. How will they know if your code was not written by AI?
The "look at Tom" thing isn't new. There have always been sloppy engineers who cranked out bad code faster than their peers and got praised for it while slowing everyone else down, because their peers had to actually debug and integrate the garbage.
I stopped using AI completely but it also feels like 80% of startups now are building AI focused products. It's a huge red flag for me now that the company is just riding a trend instead of building a well thought out product. Maybe that's how startups always were but it feels so nakedly cynical now.
I'm also not a fan of how productivity expectations seem to be getting worse because people on the business side read that this makes programmers 150% more productive. They probably do write more code, but if the shelf life of that code is worse, and there's less knowledge in the organization about that code because the engineer was leaning on a stochastic tool, how much more are you going to spend maintaining and rewriting that software? It just seems like we're all super excited to make crappier software faster.
I briefly worked for an “AI” company that was literally just a wrapper over Anthropic’s API. They also claimed to be HIPAA compliant despite sending 100% of their user data to these 3rd parties…
> Gonna add a note to check up on your AI usage metrics in a month
Oh, oh and I almost forgot, I'm also going to need you to go ahead and come in on Monday/Friday too. Okay. We lost some people this week so we need to play catch up. Thanks!
> all of us who did have been in absolute awe of the new capabilities and tools that AI can deliver to augment our skills, crafts, and fill in our gaps.
Am I fundamentally missing something about the experience upper management has with AI versus the experience of a mid/senior level developer in the trenches? I also have access to Cursor and a few other LLMs.
They're handy for bouncing ideas around and for some simple tasks, but I've hardly felt like it was a force multiplier.
When it comes to coding, the number of times it has hallucinated a function that didn't exist or used a deprecated implementation has made it feel like a net neutral at best.
> We will add AI usage questions to our performance and peer review questionnaire.
> AI must be part of your GSD Prototype phase.
I can understand asking your devs and other employees to try out AI in their workflows in an attempt to get some productivity gains. Sounds like a responsible CEO trying to milk some value out of his existing headcount. But it sounds absolutely dystopian to tie performance metrics to AI usage or to make the project planning process AI-centric.
> Am I fundamentally missing something about the experience upper management has with AI
Read the post again. Does this look like something a LLM would struggle to write?
A lot of people in upper management spend an awful lot of time writing meaningless word salad. They take a single sentence like "20% is getting fired" and turn it into a four-page essay about "strategic realignment", "streamlining growth", and "enabling the future of the $corp family". It doesn't really mean anything, so there's no way for a LLM to get it wrong.
If you haven't used AI to do complicated tasks where the details actually matter, I'm not surprised you'd get the idea that it is the greatest thing since sliced bread.
You've hit the nail on the head imo. The reason business leadership is stoked about this stuff is because they don't actually know anything about how their product gets built.
LLMs are really good at this kind of business-centric writing. The style contains little actual content, so anything that can produce a lot of words with limited content can do it. Also, meaning in business communication is almost an afterthought.
> Am I fundamentally missing something about the experience upper management has with AI versus the experience of a mid/senior level developer in the trenches?
Yes, the latter group have tried using it to get work done.
> But, it sounds absolutely dystopian to tie performance metrics or changing the project planning process to be AI-centric.
I don't think you can justify the use of "AI-centric" based on his letter. Maybe it's just a projection of fears? But nothing you quoted suggests 50% is more likely than 5%.
I don't want to be rude, but it feels like this was written by ramblingwordsaladGPT.
This message should be 10 to 20x shorter, to the point, and clearly actionable. Instead it feels like we got the output of prompting "can you turn these few bullet points into a 3-season telenovela?"
or, as he would say: Clearly the CEO is demonstrating advanced implementation of his own AI-first philosophy by leveraging transformer-based language models to exponentially increase word count while simultaneously decreasing information density—a masterclass in modern corporate communication techniques that validates his thesis about AI integration being not merely optional but fundamentally requisite in today's rapidly evolving digital landscape. The medium is indeed the message.
> AI is now a fundamental expectation at Shopify: Effective use of AI is required for all employees, and it will be integrated into performance and peer reviews. AI is essential for accelerating learning, improving productivity, and advancing business outcomes.
> AI must be integrated into the GSD prototype phase: AI should dominate the prototype phase of projects, enabling faster creation and collaboration. Teams must demonstrate how they can use AI before requesting additional resources.
> Commitment to continuous learning and sharing: Shopify emphasizes self-directed learning of AI tools, sharing knowledge and experiences with the team, and fostering a culture of experimentation to stay ahead in the fast-evolving AI landscape.
There's nothing explicit in what he said; he's just ordering people around like a big proud boss: "Use AI now, for reasons." Everyone there must be looking at each other rolling their eyes, trying to find ways to tick the box without slowing down their real job too much...
We joke about AI at work a lot but man if our CEO told me I had to start using it and that my performance would be judged on that - yeah I'm out. Why don't you let your developers decide what and how much AI they want or need to use.
Was talking to my buddy who works there, and they have a section in the performance review for how eagerly you're using AI. It honestly sounds like that organization is run by cargo-cult clowns, though.
The mandate has been set, anyone who isn't pasting LLM word salad into their performance review is wasting time and falling behind, as is anyone who directly reads the performance reviews which were submitted.
I strongly suspect this is now the case at most companies, and it's just a question of how much people admit it. I know multiple people who say they do it and a lot more who make plausibly deniable "jokes".
In a sane organization, if someone is producing, there is no reason to care whether they used LLMs constantly, not at all, somewhere in the middle, or only when it's a full moon on a Tuesday.
Wow, Shopify has fallen so far; now it's just a trashy, dumb company, no longer a tech company doing elite things. I wish companies didn't become mid so soon. Tobi really needs to step down and resign. It's a horrible look and tremendously bad for morale. And he seems to be feeling the pressure: when you have to expect people will spread your memo in bad faith, you really have some skeletons in your closet.
I have to wonder when some startup will attract talent with a tagline like "We use AI like we use text editors: Only where it makes sense."
> We already have many languages for expressing what we want the computer to do, and I find that natural language is the most difficult one for accomplishing that goal.
When I'm programming, I'm not thinking in English
is that better or worse than ketamine for weekend productivity
Or because you spend most of your time unravelling the crap code that AI produces. Creating apps with AI is one thing. Maintaining them is another.
> We will add AI usage questions to our performance and peer review questionnaire.
> AI must be part of your GSD Prototype phase.
I doubt every project fits that paradigm.
whatever happened to "never go full retard"?
Talk about being out of touch.
I'm down to try and use AI to enhance processes and encourage people to do that, but making it part of performance reviews is just silly.
> This message should be 10 to 20x shorter, to the point and clearly actionable.
I have to question the literacy levels of people these days if a page of text is too much to handle.
This may have been one page, but one full of waffle and slop.