Readit News
ryandrake · 6 months ago
My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too. Come annual review time, we need to write down all the ways AI made our work better. That positive outcome is pre-supposed: there doesn't seem to be any affordance for the case where AI actually makes your work worse or slower. I guess we're supposed to ignore those cases and only mention the times it worked.

It's kind of a mirror image of the global AI marketing hype-factory: Always pump/promote the ways it works well, and ignore/downplay when it works poorly.

noosphr · 6 months ago
Just ask an AI to write how it made you more productive in daily work. It's really good at that. You can pad it out to 1M words by asking it to expand on each section with subsections.
bdangubic · 6 months ago
if one works at a place like ryandrake's, for sure so much this :) also ask it to ultrathink and be super comprehensive, you’ll be promoted in no time
Gud · 6 months ago
Brilliant!
belval · 6 months ago
I was in a lovely meeting where a senior "leader" was looking at effort estimates and said "Do these factor in AI-tools? Seems like it should be at least 30% lower if it did."

Like I use AI tools, I even like using them, but saying "this tool is so good it will cut our dev time by 30%" should be coming from the developers themselves or their direct manager. Otherwise they are just making figures up and forcing them onto their teams.

scrumper · 6 months ago
I was that manager. I dunno about your senior leader but with me it was coming from a healthy place. After a few months of ra-ra from the C suite about how we were now an AI-first company (we're a tech consultancy building one-off stuff for customers) and should be using it in all our customer projects, I asked the question, quite reasonably I thought, "so am I going to offer lower prices to my clients, or am I going to see much higher achieved margins on projects I sell?"

And, crickets. In practice I haven't seen any efficiencies despite my teams using AI in their work. I am not seeing delivery coming in under estimates, work costs what it always cost, we're not doing more stuff or better stuff, and my margins are the same. The only difference I can see is that I've had to negotiate a crapton of contractual amendments to allow my teams to use AI in their work.

I still think it's only good for demos and getting a prototype up and running which is like 5% of any project. Most technical work in enterprise isn't going from zero to something, it's maintaining something, or extending a big, old thing. AI stinks at that (today). You startup people with clean slates may have a different perspective.

Yoric · 6 months ago
« AI has made me productive by writing most of the answer to this question. You may ignore everything after this sentence, it is auto-generated purely from the question, without any intersection with reality. »
pjc50 · 6 months ago
It's amazing how US business culture has reinvented Soviet Stakhanovism.
cowpig · 6 months ago
What do you mean by this? My understanding is that Stakhanovism is kind of the opposite of US work culture in that it lionizes the worker and social contributions
mallowdram · 6 months ago
This is absolutely dead-on.
Mistletoe · 6 months ago
The President is a Soviet planted saboteur, it's not that surprising, it's coming from the top down. I assume this is the US manufacturing revolution he has in mind.

https://en.wikipedia.org/wiki/Stakhanovite_movement

>In 1988, the Soviet newspaper Komsomolskaya Pravda stated that the widely propagandized personal achievements of Stakhanov actually were puffery. The paper insisted that Stakhanov had used a number of helpers on support work, while the output was tallied for him alone.

rsynnott · 6 months ago
> My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too. Come annual review time, we need to write down all the ways AI made our work better.

Bloody hell. That feels like getting into borderline religious territory.

yifanl · 6 months ago
Ways AI has made me more productive: spellcheck has reduced the number of typos I've made in Slack threads by between 4 and 10%.
mattgreenrocks · 6 months ago
Fascinating example of corporate double-speak here!

> My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too.

Now they're on record as pro-AI while the zeitgeist is all about it, but simultaneously also having plausible deniability if the whole AI thing crumbles to ashes: "we only said to use it if it helped productivity!"

Do you see? They cannot be wrong.

gdulli · 6 months ago
> but also went the extra step to mandate that it make us more productive, too.

Before you make any decision, ask yourself: "Is this good for the company?"

everdrive · 6 months ago
This must be how conspiracy theorists feel. How could a whole class of people (the professional managerial class) all decide at once that AI was a wonderful tool we all must adopt now, and that it's going to make all of us more productive, and we're 100% certain about it? It boggles the mind. I'm sure it's just social contagion, hype, and profit motive, but it definitely feels like a conspiracy sometimes.
rsynnott · 6 months ago
It's social contagion. "Management", as a class, is actually fairly vulnerable to this; this is only the latest of a long, long line of magical things which will make everything more productive. Remember Six Sigma (as a white-collar cult, rather than as a manufacturing methodology)?
moomoo11 · 6 months ago
There’s no conspiracy.

People making the decisions are 5%, they delegate to managers who delegate to their teams and all the way down.

Decision makers (not the guy who thinks corner radius should be 12 instead of 16, obviously) want higher ROI and they see AI working for them for high level stuff.

At low level things are never sane.

Before AI it was offshore. Now it’s offshore with AI.

Prepare for chaos, the machine priests have thrown open the warp gate. May the Emperor have mercy on us.

nicbou · 6 months ago
Collective hysteria does not need to be planned. Sometimes things just fall into place, just like the conditions for a hurricane.

It seems to me like too many yearly bonuses are tied to AI implementation, due to FOMO amongst C-levels. The hype trickles down to developers afraid that they won't get hired in the new AI economy.

I don't think there's a conspiracy, just a storm front of perverse incentives.

pjc50 · 6 months ago
Spending billions of dollars on marketing works.
thatfrenchguy · 6 months ago
It’s kind of a good way to make your business collapse though, because figuring out the kinds of problems where LLMs are useful and where they’ll destroy your productivity is extremely important

duxup · 6 months ago
I wonder how much this has to do with the LinkedIn world where everyone is making "I made us 100% more efficient last week with AI!" type stuff.

I'm not normally on LinkedIn but recently was and with the AI stuff the "look at me" spam around AI seems like an order of magnitude more absurd than usual.

didibus · 6 months ago
Does your company have a stake in AI?

I suspect a lot of companies that go that route are pushing a marketing effort since they themselves have a stake in AI.

But I'd love to hear from truly customer-only businesses, where AI is pure cost with no upside unless it genuinely pays for itself in business impact. Are they, too, stuck in a loop of justifying their added cost to make their decision seem like a good one no matter what, or are they being more careful?

pkaye · 6 months ago
> Come annual review time, we need to write down all the ways AI made our work better.

That is where the AI comes into full use.

cjbgkagh · 6 months ago
Just make shit up, or even better have the AI make shit up for you
Macha · 6 months ago
The problem is, the shit that's made up will be used to justify the decision as a success and ensure the methodology continues.
vrighter · 6 months ago
Yep, they have to justify the spend. Where I work they've literally removed our ability to disable it via group policy. Statistical manipulation.
obezyian · 6 months ago
I went through this shit a year ago. The reports had to be weekly, though.

Everything sounded very mandatory, but a couple of months later nobody was asking about reports anymore.

meindnoch · 6 months ago
Just give them an AI generated response.
cyanydeez · 6 months ago
Sounds like a case of Republicanism.

contingencies · 6 months ago
News just in: Nvidia dumped $100B into OpenAI to pump the failing bubble.
nilkn · 6 months ago
Look, for most corporate jobs, there's honestly no way that you truly cannot find any kind or level of usage of AI tools to make you at least a bit more productive -- even if it's as simple as helping draft emails, cleaning up a couple lines of code here and there, writing a SQL query faster because you're rusty with it, learning a new framework or library faster than you would have otherwise, learning a new concept to work with a cross-functional peer, etc. It does not pass the smell test that you could find absolutely nothing for most corporate jobs. I'd hazard a guess that this attitude, which borders on outright refusal to engage in a good-faith manner, is what they're trying to combat or make unacceptable.
Zagreus2142 · 6 months ago
If the corporate directive was to share "if AI has helped and how," I would agree. But my company started that way. When I tested the new SQL query analysis tool, I reported (nicely and politely, with positive feedback too) that it was making up whole tables to join to: it assumed we had a simple "users" table with email/id columns, which we did not have, since we're a large company with purposefully segmented databases. The users data was only ever exposed via API calls, never direct DB access.

My report was entirely unacknowledged along with other reports that had negative findings. The team in charge published a self-report about the success rate and claimed over 90% perfect results.

About a year later, upper management changed to this style of hard-requiring LLM usage, to the point of associating LLM API calls from your IntelliJ instance with the git branch you were on and requiring 50% LLM usage on a per-PR basis, otherwise you would be PIP-ed.

This is abusive behavior aimed at generating a positive response the c suite can give to the board.

dukeyukey · 6 months ago
It's not a good-faith question to say "here's a new technology, write about how it made you more productive" and expect the answer to have a relationship with the truth. You're pre-ordaining the answer!
romaniv · 6 months ago
> We call these workers “pilots,” as opposed to “passengers.” Pilots use gen AI 75% more often at work than passengers, and 95% more often outside of work.

Identify a real issue with the technology, then shift the blame to a made-up group of people who (supposedly) aren't trying hard enough to embrace the technology.

> Embody a pilot mindset, with high agency and optimism

Thanks for the career advice.

lkey · 6 months ago
Ridiculous, I have it on good authority that embracing the 'hacker ethos' by becoming a 'coding ninja' with a 'wizard' mindset will propel you to next-level synergisms within transformative paradigms like AI and blockchain.
karakot · 6 months ago
To leverage that hacker ethos for maximum synergy, you'll need to empower a holistic and agile mindset. This allows you to pivot toward a disruptive paradigm and monetize your scalable core competencies across the entire ecosystem.
diegof79 · 6 months ago
Yeah, the article was good until I reached that point, where it became an ad for BetterUp consultancy to transform passengers into pilots.
anal_reactor · 6 months ago
This isn't wrong though. There's obviously two types of people using AI: one is "explain to me how X works", and the other is "do X for me". Same pattern with every technology.
jjk166 · 6 months ago
> Embody a pilot mindset, with high agency and optimism

Fly away from here at high speed

vkou · 6 months ago
A pilot has ultimate authority of how a plane is flown, because it's their ass in the fire if the plane can't land.

If you're a low-level office drone, you are not a pilot.

Deleted Comment

jennyholzer · 6 months ago
Embody a slave mindset
nphardon · 6 months ago
The AI use mandates are odd. My guess is that the C-level execs have very little practical technical skill at this point, and probably haven't written a line of code in 20 years. And they believe ALL the AI hype. They think LLMs can do anything, so any employees not using them are clearly wasting time.
arwhatever · 6 months ago
The AI usage mandates are odd because why do the execs doubt that the workers try on their own to get the maximum utility out of the AI tools?
2THFairy · 6 months ago
AI criticism and pushback.

When you say "AI cannot do my job, [insert whatever reason you find compelling]," execs only hear "I am trying to protect my job from automation".

The executives have convinced themselves that the AI productivity benefits are real, and generally refuse to listen to any argument to the contrary. Especially from their own employees.

This impedes their ability to evaluate productivity data: if a worker fails to show productivity gains, it can't be that AI is bad, because that'd mean the executives are wrong about something. It must be that the employee is sabotaging our AI efforts.

bluefirebrand · 6 months ago
Execs more or less always assume that workers are some combination of stupid and lazy

After all if they weren't stupid and lazy they would be important execs, not unimportant workers

nilkn · 6 months ago
Front-line workers have a conflict of interest (AI making their jobs easier may lead to layoffs); they're incentivized to be productive, but not so productive that they or a peer they like ends up without a job. That conflict of interest becomes extremely strong when most companies around them are already conducting layoffs, they already know people personally who've been laid off, and hiring remains at a low level compared to the 2010s and early 2020s.

Executives don't care about any of that and just want to make the organization more efficient. They don't care at all if the net effect is reducing headcount. In fact, they want that -- smaller teams are easier to manage and cheaper to operate in nearly every way. From an executive's standpoint, they have nothing to lose: the absolute worst-case scenario is it ends up over-hyped and in the process of rolling it out they learned who's willing to attempt change and who's not. They'll then get rid of the latter people, as they won't want them on the team because of that personality trait, and if AI tooling is broadly useful they won't even bother backfilling.

tbrownaw · 6 months ago
Well, AI advocates keep insisting that the only reason for someone to not benefit is that they're resistant to change and too lazy to learn.
Galxeagle · 6 months ago
I've come to appreciate that using AI tools is a skill of its own. Anything beyond auto code completion takes quite a bit of conscious effort to experiment with, and then to learn how to delegate to in a workflow. The tools often end up being valuable, but it did take some work to get out of my productivity 'local maximum', work that maybe not everyone would naturally take on.
foolserrandboy · 6 months ago
I think if LLMs improved, or our usage of them improved, to the point that we became full-time design/code reviewers, many of us would leave to do something less boring. So in some ways there is a negative incentive to investigate different AI-driven workflows.
nphardon · 6 months ago
That adds to the dissonance for sure.
gosub100 · 6 months ago
Not odd under the theory that they are being done to buy wiggle room for a reduction in force later on. They announce the firing and layoff of those who haven't made their forecasted numbers.
diegof79 · 6 months ago
Wow, this article resonates with me.

Today I argued with a product manager who insists on attaching AI-generated prototypes to PRDs without any design sessions for exploration or refinement (I’m a UX designer). These prototypes contain many design issues that I must review and address each time.

Worse still, they look polished and create the illusion that the work is nearly complete. So instead of moving faster, we end up with more back and forth about the AI's misinterpretations.

ramoz · 6 months ago
My CEO sent out an AI-generated blog post today. I've never felt more frustrated reading something in my life. "x happened, here's what it means", "groundbreaking", "game-changer", "significant", "forefront of a technological shift"
everdrive · 6 months ago
I hope you learned an important lesson about reading the next email from the CEO.
duxup · 6 months ago
At one company I worked at, the executives sent out constant emails to everyone. It was part of the culture. After some layoffs, HR leadership sent out a THREE PART email about how they were working on a very important project, and how it took many many hours and meetings and so on.

The project was renaming the HR department ...

After that I sent all executive emails to a folder and did not read them, my mood improved drastically by not reading those emails.

ManlyBread · 6 months ago
I refuse to read anything that seems to be obviously AI generated. If they can't be bothered to write down what they think then I don't have any reason to bother with reading what they've posted either.
meindnoch · 6 months ago
Why are you reading your CEO's blog?

This question applies whether it's written by an AI or not.

baobabKoodaa · 6 months ago
You misread. It's not the CEO's blog.

zmmmmm · 6 months ago
I've never yet accepted an AI-written answer when responding to my emails, although I try it routinely. Mostly it just doesn't capture my style. But even when it does, there's some kind of essential spark missing.

I think a lot about the concept that the AI output is still 99% regression to a mean of some kind. In that sense, the part it can generate for you is all the boring stuff - what doesn't add value. And to be sure, if you're writing an email etc, a huge amount of that is boring filler, most of the time. But the part it specifically cannot do is the only part that matters - the original, creative part.

The filler was never important anyway. Physically typing text was never the barrier. It's finding time and space to have the creative thought necessary to put into the communication that is the challenge. And the AI really doesn't help at all with that.

donatj · 6 months ago
My friend's job of late has basically become reviewing AI-generated slop his non-technical boss is generating, slop that mostly seems to work, and proving why it's not production-ready.

Last week he was telling me about a PR he'd received. It should have been a simple additional CRUD endpoint, but instead it was a 2,000+ LOC rat's nest adding hooks that manually manipulated their cache system to make it appear to be working without actually working.

He spent most of his day explaining why this shouldn't be merged.

More and more I think Brandolini's law applies directly to AI-generated code

> The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it.

givemeethekeys · 6 months ago
The nephew has no programming knowledge.

He wants to build a website that will turn him into a bazillionaire.

He asks AI how to solve problem X.

AI provides direction, but he doesn't quite know how to ask the right questions.

Still, the AI manages to give him a 70% solution.

He will go to his grave before he learns enough programming to do the remaining 30% himself, or to understand the first 70%.

Delegating to AI isn't the same as delegating to a human. If you mistrust the human, you can find another one. If you mistrust the AI, there aren't many others to turn to, and each comes with an uncomfortable learning curve.

zarmin · 6 months ago
In the early aughts, I was so adept at navigating my town because I delivered pizza. I could draw a map from memory. My directional skills were A+.

Once GPS became ubiquitous, I started relying on it, and over about a decade, my navigational skills degraded to the point of embarrassment. I've lived in the same major city now for 5 years and I still need a GPS to go everywhere.

This is happening to many people now, where LLMs are replacing our thinking. My dad thinks he is writing his own memoirs. Yeah pop, weird how you and everyone else just started using the "X isn't Y, it's Z" trope liberally in your writing out of nowhere.

It's definitely scary. And it's definitely sinister. I maintain that this is intentional, and the system is working the way they want it to.

lazide · 6 months ago
More precisely, each ‘AI’ is just a statistical grouping of a large subset of other (generally randomly) selected humans.

You don’t even get the same ‘human’ with the same AI, as you can see with various prompting.

It’s like doing a lossy compression of an image, and then wondering why the color of a specific pixel isn’t quite right!

averageRoyalty · 6 months ago
Who is "the nephew"?
doublerabbit · 6 months ago
> understand the first 70%

With the 70%, you then pitch "I have this" and some corp/VC will buy out the remaining 30%.

They in return hire engineers who are willing to lap up the 70% slop and fix the rest with more AI slop.

Your nephew dies happy, achieving his dream of being a bazillionaire by doing nothing more than typing a few sentences into a search bar.

matheusmoreira · 6 months ago
> He spent most of his day explaining why this shouldn't be merged.

"Explain to me in detail exactly how and why this works, or I'm not merging."

This should suffice as a response to any code the developer did not actively think about before submitting, AI generated or not.

latexr · 6 months ago
I think you might’ve missed this part from the post:

> AI-generated slop his non-technical boss is generating

It’s his boss. The type of boss who happily generates AI slop is likely to be the type of person who wants things done their way. The employee doesn’t have the power to block the merge if the boss wants it, thus the conversation on why it shouldn’t be merged needs to be considerably longer (or they need to quit).

mholm · 6 months ago
"You're absolutely right— This code works by [...]"
fzeroracer · 6 months ago
Sadly, I've seen multiple well-known developers here on HN argue that reading code in fact isn't hard and that it's easy to review AI-generated code. I think what AI-generated code is fundamentally doing is exposing the cracks in many, many engineers across the board who either don't care about code quality or are completely unable to step back and evaluate their own process to see whether what they're doing is good. If it works, it works, and there's no need to understand why or how.
bwfan123 · 6 months ago
> The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it

I see this in code reviews, where AI tools like CodeRabbit and Greptile are producing workslop in enormous quantities. It sucks up an enormous amount of human energy just reading the nicely formatted BS put out by these tools, all for the occasional nugget that turns out to be useful.

RobinL · 6 months ago
I largely agree. As a counterpoint, today I delivered a significant PR that was accepted easily by the lead dev with the following approach:

1. Create a branch and vibe code a solution until it works (I'm using codex cli)

2. Open new PR and slowly write the real PR myself using the vibe code as a reference, but cross referencing against existing code.

This involved a fair few concepts that were new to me, but had precedent in the existing code. Overall I think my solution was delivered faster and of at least the same quality as if I'd written it all by hand.

I think it's disrespectful to PR a solution you don't understand yourself. But this process feels similar to my previous non-AI-assisted approach, where I would often code spaghetti until the feature worked, and then start again and do it 'properly' once I knew the rough shape of the solution.

cruffle_duffle · 6 months ago
The best way I’ve found to use LLMs for writing anything that matters is, after feeding one the right context, to take its output and then retype it in your own words. The LLM has then helped capture your brain dump and organize it, but by forcing yourself to write it rather than copy and paste, you get to make it your own. This technique has worked quite well in domains I’m not the best at yet, like marketing copy. I want my shit to have my own voice, but I’m not sure what to cover, so I let the LLM help me with what to cover and then I can rewrite its work.
jjgreen · 6 months ago
Ship it!
rickydroll · 6 months ago
Workslop production is how we determine who should get a ticket for Ark B.