Ok so clearly a satire. However I kinda want this. They make some really good points about how an AI would be better than many CEOs. Honestly, some of the companies I've worked for would be better with Gemini in charge. Yes, humanity is doomed, but at least I would understand the motivations and we'd have fewer CEO ADHD moments. (CEO ADHD -> "Some other CEO told me about X, why aren't we doing X")
I feel like if I mention technology X in my system context for Gemini, there is a 100% chance that when I ask for hiking recommendations Gemini will say "As a user of technology X, you would appreciate the beauty and elegance of the Cuyamaca National Forest"
I worked as a consultant for a company where, one day, the CEO just started using AI chat for everything. Every question you asked them, they just forwarded to it. Same thing for company strategy, major decisions, presentation content, and so on.
Initially, I was really annoyed. After I took a deep breath and read through the wall of text they sent (to figure out how to respond), I eventually realized it was slightly better than their previous work. Not night-and-day better, but slightly better.
Since then, I've been playing with the idea of 'hiring' an AI to manage my freelance and personal work. I would not be required to do what it says, but I could take it under consideration and see if I work better that way. Sort of like the ultimate expression of "servant leadership".
> Since then, I've been playing with the idea of 'hiring' an AI to manage my freelance and personal work.
Shit, I think you are now personally responsible for 3-4 neglected home projects of mine actually getting some attention. I too am much more productive and, oddly enough, find the work more interesting when it's someone else asking and waiting for me to deliver it.
I haven't tried the Gemini CLI yet, and creating an agent that acts like a customer I have to answer to about project progress sounds like a perfect project idea for this weekend.
Question is, will I actually see this one through, or will it too wind up in homelab project purgatory!
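In case it helps kick that off, here's a minimal sketch of such a "customer" agent in Python. ask_llm() is a hypothetical placeholder for whatever you actually wire it to (the Gemini CLI, an SDK, anything else), and the project names are just examples; the whole persona lives in the system prompt:

    # Sketch of a "customer" agent that pesters me about project progress.
    # ask_llm() is hypothetical -- swap in a real model call or CLI invocation.
    CUSTOMER_PROMPT = """You are a polite but relentless customer. You have paid for
    these homelab projects and you expect regular status updates:
    {projects}
    Ask pointed questions about progress, deadlines, and blockers. Never do the
    work yourself; only ask for it and acknowledge what has been delivered."""

    def ask_llm(system_prompt: str, user_message: str) -> str:
        raise NotImplementedError  # replace with your actual client

    def weekly_checkin(projects: list[str], status_update: str) -> str:
        bullets = "\n".join(f"- {p}" for p in projects)
        system = CUSTOMER_PROMPT.format(projects=bullets)
        return ask_llm(system, f"Here is this week's update:\n{status_update}")

    print(weekly_checkin(
        ["NAS rebuild", "Home Assistant migration"],
        "No progress this week, got distracted by a new keyboard.",
    ))

Run it from cron or a chat webhook and you've got a customer who never forgets to follow up.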
It's cute, and fun, but I disagree. It could make a mistake, but it could also go a long time giving us confidence and a reasonable return of intelligent results. I think that can lull us into dependency. Do we really want to give up decision-making to AI? I don't think so.
With that said, if it's used purely as a tool by a CEO, and over time has been tuned with the optimal parameters for the company, its culture and everything (think Apple), then the AI can help make decisions for the company.
In this scenario the person who wants to be paid owns the output of the agent. So it’s closer to a contractor and subcontractor arrangement than employment.
A question is on which side agents will achieve human-level skill first. It wouldn’t surprise me if doing the work itself end-to-end (to a market-ready standard) remains in the uncanny valley for quite some time, while “fuzzier” roles like management can be more readily replaced.
It’s like how we all once thought blue collar work would be first, but it turned out that knowledge work is much easier. Right now everyone imagines managers replacing their employees with AI, but we might have the order reversed.
The AI agents don’t appear to know how & where to be economically productive. That still appears to be a uniquely human domain of expertise.
So the human is there to decide which job is economically productive to take on. The AI is there to execute the day-to-day tasks involved in the job.
It’s symbiotic. The human doesn’t labour unnecessarily. The AI has some avenue of productive output & revenue generating opportunity for OpenAI/Anthropic/whoever.
> Just let me subscribe to an agent to do my work while I keep getting a paycheck.
I've already done this. It's just a Teams bot that responds to messages with:
"Yeah that looks okay, but it should probably be a database rather than an Excel spreadsheet. Have you run it past the dev team? If you need anything else just raise a ticket and get Helpdesk to tag me in it"
"I'm pretty sure you'll be fine with that, but check with {{ senior_manager }} first, and if you need further support just raise a ticket and Helpdesk will pass it over"
"Yes, quite so, and indeed if you refer to my previous email from about six months ago you'll see I mentioned that at the time"
"Okay, you should be good to go. Just remember, we have Change Management Process for a reason so the next time try to raise a CR so one of us can review it, before anyone touches anything"
and then
"If you've any further questions please stick them in an email and I'll look at it as a priority.
Mòran taing,
EB."
(notice that I don't say how high a priority?)
No AI needed. Just good old-fashioned scripting, and organic stupidity.
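For anyone who wants to replicate this, the heart of it really is just a lookup table. A rough sketch in Python, with the replies paraphrased from the above and all the actual Teams plumbing left out:

    import random

    # Canned replies, paraphrased from the above. No model anywhere in sight.
    REPLIES = [
        "Yeah that looks okay, but it should probably be a database rather than an "
        "Excel spreadsheet. Have you run it past the dev team?",
        "I'm pretty sure you'll be fine with that, but check with {senior_manager} "
        "first, and if you need further support just raise a ticket.",
        "Yes, quite so, and if you refer to my previous email from about six months "
        "ago you'll see I mentioned that at the time.",
        "If you've any further questions please stick them in an email and I'll "
        "look at it as a priority.",
    ]

    def reply(message: str, senior_manager: str = "your line manager") -> str:
        # The incoming message is, of course, ignored entirely.
        return random.choice(REPLIES).format(senior_manager=senior_manager)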
Reminded me of an episode of the IT Crowd where they put a recording of "Have you tried turning it off and on again?" as the answering machine message for an IT department.
What would you actually do if you got that? I like watching movies and playing games, but that lifestyle quickly leads to depression. I like travelling too, but imagine if everyone could do it all the time. There's only so many good places.
I would use the AI to build a robot that could build copies of itself and then once there are a sufficient number of robots I'd use them to build more good places to go to.
Isn't this kind of the same as an AI copilot, just with higher autonomy?
I think the limiting factor is that the AI still isn't good enough to be fully autonomous, so it needs your input. That's why it's still in copilot form
Really this is the only 10x part of GenAI that I see: increasing the number of reports exponentially by removing managers/directors, and using GenAI (search/summarization, e.g. "how is X progressing" etc) to understand what's going on underneath you. Get rid of the political game of telephone and get leaders closer to the ground floor (and the real problems/blockers).
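Mechanically, the summarization half of that is just a roll-up prompt over whatever raw updates you can collect. A sketch, with summarize() as a hypothetical stand-in for a real model call:

    # Roll up raw status updates into a "how is X progressing" answer.
    # summarize() is a hypothetical stand-in for whatever model you'd actually use.
    def summarize(prompt: str) -> str:
        raise NotImplementedError

    def progress_report(project: str, updates: list[str]) -> str:
        joined = "\n".join(f"- {u}" for u in updates)
        return summarize(
            f"Summarize how {project} is progressing based on these raw updates. "
            f"Name the blockers explicitly and skip the status-meeting varnish:\n{joined}"
        )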
If your entire job, as a VP or director/manager, is getting progress reports, you’re probably a wildly shitty manager and ought to be replaced anyways.
Seems more like the kind of thing a “smartest guy in the building” dev believes to be true, than actual reality at a real company.
Having VPs “clear blockers” is absolutely asinine.
From what I hear, this will not happen. AI keeps absolutely making up laws and cases that don’t exist no matter what you feed it. Basically anything legal written or partially written by AI is a liability. IANAL but have been reading a tiny bit about it.
Our CEO did not write the customary Thanksgiving email. There was nothing from the other C-level leadership either. I’ve been around long enough to notice this erosion of company culture and custom. What is happening? Perhaps an AI CEO would keep up these subtleties.
Though I think the CEO role is realistically one of the hardest to automate, I’d say middle management is a very juicy target.
To the extent a manager is just organizing and coordinating rather than setting strategic direction, I think that role is well within current capabilities. It’s much easier to automate this than the work itself, assuming you have a high bar for quality.
The UI looks good!
Is there a reason this is being shared here? Feels like a collection of tired, trite oneliners that I’d expect to see on Twitter rather than here.
Agreed; it’s only superficially funny. There’s a ton left on the table that could have made it actually good, and it doesn’t adequately parody CEOs or AI in a way that indicates any insight.
Give me a raise so I can buy her medicine, or my grandma dies...
It seems to me that a great many people cannot wait to give up decision-making to AI.
If you have a problem with it, I've got some bad news for you.
HFT firms have been doing this long before ChatGPT hit the scene, and making millions off of it.
Just let me subscribe to an agent to do my work while I keep getting a paycheck.
I believe once AI scales my theory will be proven universal.
My wife believes there will eventually also be a third job created to do the job.
Lots of us are not cut out for blue collar work.
https://bossasaservice.com/