btbuildem · 3 months ago
This is because they're trying to reduce the wrong headcount. The largest inefficiencies in corpo orgs lie in the ways they organize their knowledge and information stores, and in how they manage decision making.

The rank and file generally have a really good grasp on their subset of the domain -- they have expertise and experience, as well as local context. Small teams, their managers -- those are the ones who actually perform, and deliver value.

As you move up the hierarchy, access to information does not scale. People in the middle are generally mediocre performers, buried in process, ritual and politics. In addition to these burdens, the information systems do their best to obscure knowledge, with the usual excuses of Safe and Secure (tm) -- things are siloed, search does not work, archives are sunsetted, etc.

In some orgs tribalism also plays an outsized role, with teams acting competitive, which largely results in wasted resources and seven versions of the same failed attempt at New Shiny Thing.

Then as we look higher yet in the hierarchy, the so-called decision makers don't really do anything that cannot be described as "maximize profit" or "cut costs", all while fighting not to get pulled down by the Lord of the Flies shenanigans of their underlings. They are the most replaceable.

A successful "AI Transformation" would come in top-down, going after the most expensive headcount first. Only truly valuable contributors would remain at that level. Organizational knowledge bases would make it possible to search, analyze and reason about the institutional knowledge accrued in corporate archives over the years, enabling much more effective decision making. Meanwhile, the ICs would benefit from the AI boost, outsourcing some menial tasks to the machine, with the dual benefit of levelling up their roles and feeding the machine more context about the lower-level work done across the org.

fluidcruft · 3 months ago
I think another barrier is that end users don't trust IT not to pull the rug out from under us. It's quite a bit of effort to learn and figure out workflows for actually getting work done, and IT doesn't tend to give a shit about that. In particular, enterprise IT's attitude about trials can kiss my ass. Enterprise IT has their timeline, and I have my deadlines. I'll get to it when I have time.

But in particular we're always dealing with IT security "experts" running dumb checklists, taking things away and breaking everything, and never bothering to figure out how we're supposed to use computers to actually get any work done ("hmmm, we didn't think about that... we'll get back to you" is a common response from these certified goons). Apparently the security gods have decided we can't have department file servers anymore, because backups are too difficult to protect against ransomware or something, so we're all distracted by that pronouncement from the mountain, trying to figure out how to get anything done at the moment.

_DeadFred_ · 3 months ago
What the above person is arguing for is this exact thing, but instead of "security you don't need" it's now people you don't need, because AI can scan an information repository and be the expert.
lenerdenator · 3 months ago
> This is because they're trying to reduce the wrong headcount.

> A successful "AI Transformation" would come in top-down, going after the most expensive headcount first.

This isn't a mistake. McKinsey consultants and the executives at their client companies are part of the same clique. You don't get into either without going to the right schools, being in the right fraternities, and knowing the right people. "Maximize profit" and "cut costs" are to be read as "keep the most money for ourselves in the form of earnings per share and dividends" and "pay fewer people". And since you can convert shares to money by gutting companies, there's no real incentive to remain competitive in the greater marketplace.

duxup · 3 months ago
I shared this recently with regard to teams cross training.

I worked in tech support at a big company long ago. Tech support, sales, and engineering used to have a week (for each employee) where we would leave our team and follow the other team around.

It provided incredible efficiency. I now knew what sales was talking about when they called me, they understood how I worked, engineering and I got along so well they used to invite me to their team when they had lunch catered.

Who didn't we need anymore? The middle managers between the groups who brokered what info each group could see, and how we communicated among groups. We solved problems before they started all on our own.

The middle managers won in the end, ending the cross training, too costly they said, but I think they realized that we just didn't need them / weren't engaging them anymore...

nostrademons · 3 months ago
I've wondered sometimes what the root of this dynamic is, and why corporations are as inefficient as they are. I've come to the conclusion that it's deliberate.

When I look at top-level decision-makers at my Mag-7 employer, they are smart people. Many of them were go-getters in their earlier career, responsible for driving some very successful initiatives, and that's why they're at the top of the company. And they're very intentional about team structure: being close enough to senior directors and VPs to see some of their thinking, I can tell that they understand exactly who the competent people are, who gets things done, who likes to work on what, and then they put those people at the bottom of the hierarchy with incompetent risk-averse people above them. Then they'll pull them out and have them report directly to a senior person when there's a strategic initiative that needs doing, complete it, and then re-org them back under a middle-manager that ensures nothing gets done.

I think the reason for this is that if you have a wildly successful company, the last thing you want to do is screw it up. You're on top of the world, money is rolling in from your monopoly - and you're in zugzwang. Your best move is not to play, because any substantive shift in your product or marketplace risks moving you to a position where you aren't so advantaged. So CEOs of successful companies have a job to do, and that job is to ensure that nothing happens. But people's natural inclination is to do things, and if they aren't doing things inside your company they will probably be doing things outside your company that risk toppling it. So you put one section of the company to work digging holes, and put the other section to work filling them in, and now everybody is happy and productive and yet there's no net external change to your company's position.

Why even have employees then? Why not just milk your monopoly, keep the team lean, and let everybody involved have a big share of the profits? Some companies do actually function like this, e.g. Nintendo and Valve famously run with fairly small employee counts and just milk their profits, and some HFT shops like RenTech just give huge employee dividends and milk their position.

But the problem is largely politics. For one, owning a monopoly invites scrutiny; there are a lot of things that are illegal, and if you're not very careful, you can end up on the wrong side of them. Two, owning an incredibly lucrative business makes you a target for competition, and for rule-changes or political action that affect your incredibly lucrative business. Perhaps that's why examples of highly-profitable businesses that stay small often involve staying secret (eg. HFT) or being in an industry that everybody else dismisses as inconsequential (eg. gaming or dating).

By having the huge org that does nothing, the CEO can say "Look, I provide jobs. We're not a monopoly because we have an unfair advantage, we compete fairly and just have a lot of people working very hard." And they can devote a bunch of people to that legal compliance and PR to make sure they stay on the right side of the government, and it also gives them the optionality to pull all those talented people out and unmuzzle them when there actually is a competitive threat.

potatolicious · 3 months ago
> "Why even have employees then? Why not just milk your monopoly, keep the team lean, and let everybody involved have a big share of the profits?"

So we're seeing this play out. There are two factors that exist in tension here:

- The valuation of many of these companies depends on the perception that they are The Future. Part of that is heavy R&D spending and the reputation that they hire The Best. Even if the company mostly just wants to sit and milk its market position, keeping the stock price afloat requires looking like they're also innovative and forging the future.

- Some companies are embracing the milk-it-for-all-its-worth life stage of their company. You see this in some of the Mag-7 where compensation targets are scaling down, explicit and implicit layoffs, etc. This gear-shifting takes time but IMO is in fact happening.

The tightrope they're all trying to walk is how to do the latter without risking their reputation as the former, because the mythos that they are the engines of future growth is what keeps the stock price ticking.

apwell23 · 3 months ago
> So CEOs of successful companies have a job to do, and that job is to ensure that nothing happens. But people's natural inclination is to do things, and if they aren't doing things inside your company they will probably be doing things outside your company that risk toppling it. So you put one section of the company to work digging holes, and put the other section to work filling them in, and now everybody is happy and productive and yet there's no net external change to your company's position.

I work at a large music streamer and this perfectly describes my workplace. When I was on the outside, I never understood why the company needed thousands and thousands of people to run what looks like a stagnant product that hasn't changed much in years.

arethuza · 3 months ago
Have you seen the "sociopath, clueless, loser" model?

https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...

tempodox · 3 months ago
Logically you’re right, but power does not follow logic. And so it’s the lowest levels that get replaced by “AI”.
re-thc · 3 months ago
> A successful "AI Transformation" would come in top-down, going after the most expensive headcount first.

Do you still need an "AI Transformation" then? Sounds like you could just axe the CEO or cut their enormous salary = profit?

josefritzishere · 3 months ago
Management will not volunteer to replace themselves. So if that's all that AI is truly good at... the product is unsellable.
apwell23 · 3 months ago
do you mean a VP laying off a middle manager would send a signal that he can be similarly replaced?
_DeadFred_ · 3 months ago
You can't 90% utilize everyone. There are lots of roles where people are needed maybe 2-5% of the time for edge cases, but those cases are break-the-company ones. So they do mediocre, seemingly unimportant roles because they are willing/able to meet those 2-5% requirements. They get kissed up to because they literally keep the company around.

I said this before, but in a pre-2000 world, when I worked at a public company, I got lectured when my employees' timecard utilization was too high, because you can't actually utilize people at those levels full time continuously and management wouldn't sign off on incorrect timecards. But the modern world pretends it's fine and is trying to optimize to it, and it just won't work. You can't optimize out the key knowledge, and you can't keep the key knowledge and be 80% utilized doing something else.

This is all based on some ideal world that doesn't exist in reality.

StableAlkyne · 3 months ago
> while quoting an HR executive at a Fortune 100 company griping: "All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet."

I'm surprised McKinsey convinced someone to say the quiet part out loud

_fat_santa · 3 months ago
I find it all quite strange:

- AI companies, of course, will try to sell you on the idea that you can reduce headcount with AI.

- CEOs will parrot this talking point without ever taking a closer look.

- Everyone lower down the org chart, minus the engineers, is wondering why the change hasn't started yet.

- Meanwhile, engineers are ripping their hair out because they know that AI in its current state will likely not replace any workers.

Pretty soon we will have articles like "That time CEOs thought AI could replace workers".

itake · 3 months ago
The incentive structure for managers (and literally everyone up the chain) is to maximize headcount. The more people you manage, the more power you have within the organization.

No one wants to say on their resume, "I manage 5 people, but trust me, with AI, it's like managing 20 people!"

Managers also don't pay people's salaries. The Tech Tools budget is a different budget than People salaries.

Also keep in mind, for any problem space, there is an unlimited number of things to do. 20 people working 20% more efficiently won't reach infinity any faster than 10 people.

btbuildem · 3 months ago
I am still of the conviction that "reducing employee head count" with AI should start at the top of the org chart. The current iterations of AI already talk like the C-suites, and deliver approximately the same value. It would provide additional benefits, in that AIs refuse to do unethical things and generally reason acceptably well. The cost cutting would be immense!

I am not kidding. In any large corp, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki. The current AIs are perfect for this.

newsclues · 3 months ago
AI is most capable of replacing the humans who have the power to decide or influence the choice to replace humans with AI.

But managers will not obsolete themselves.

So right now AI should be used to monitor and analyze the workforce and find the efficiency that can be achieved with AI.

xp84 · 3 months ago
> AI in its current state will likely not replace any workers.

This is a puzzling assertion to me. Hasn’t even the cheapest Copilot subscription arguably replaced most of the headcount that we used to have of junior new-grad developers? And the Zendesks of the world have been selling AI products for years now that reduce L1 support headcount, and quite effectively too since the main job of L1 support is/was shooting people links to FAQs or KB articles or asking them to try restarting their computer.

rsynnott · 3 months ago
> Pretty soon we will have articles like "That time that CEO's thought that AI could replace workers".

Yup, it's just the latest management fad. Remember Six Sigma? Or Agile (in its full-blown cultish form; some aspects can be mildly useful)? Or matrix management? Business leaders, as a class, seem almost uniquely susceptible to fads. There is always _some_ magic which is going to radically increase productivity, if everyone just believes hard enough.

duxup · 3 months ago
I was working with a team on a pretty simple AI solution we were adding to our larger product. Every time we talk to someone we're telling them "still need a human to validate this..."
parliament32 · 3 months ago
> ripping their hair out

I mean, nah, we've seen enough of these cycles to know exactly how this will end: with a sigh and a whimper and the Next Big Thing taking the spotlight. After all, where are all the articles about "that time CEOs thought blockchain could replace databases", etc.?

jordanb · 3 months ago
Also strange that this executive is worried about how the business continues to function after the people are gone. That's not the McKinsey Way!
treis · 3 months ago
I think they can. IME LLMs have me working somewhat less and doing somewhat more. It's not a tidal wave, but I'm stuck a little bit less on bugs, and with some things like regex or SQL I'm much faster. It's something like 5-10% more productive. That level of slack is easy to take up by doing more, but theoretically it means being able to lose 1 out of every 10-20 devs.
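
The arithmetic behind that last sentence can be sketched out quickly (the 5-10% figure is the commenter's own estimate, not measured data):

```python
# Back-of-the-envelope math for "5-10% more productive" vs. headcount.
# Illustrative only; the productivity figures are the comment's estimates.

def freed_capacity(team_size: int, productivity_gain: float) -> float:
    """Extra output from a uniform productivity boost, in whole-person equivalents."""
    return team_size * productivity_gain

def equivalent_team(team_size: int, productivity_gain: float) -> float:
    """How many boosted devs it takes to match the old team's output."""
    return team_size / (1 + productivity_gain)

# A 20-person team at +5%, or a 10-person team at +10%, frees up roughly
# one person's worth of work -- hence "1 out of every 10-20 devs".
print(freed_capacity(20, 0.05))   # ~1 person-equivalent
print(equivalent_team(10, 0.10))  # ~9.1 devs to match the old 10
```

As the comment notes, in practice that slack tends to be absorbed by doing more work rather than by cutting heads.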
iamleppert · 3 months ago
How does it make sense to trade one group of labor (human) who are generally loosely connected, having little collective power for another (AI)? What you're really doing isn't making work more "efficient", you're just outsourcing work to another party -- one who you have very little control over. A party that is very well capitalized, who is probably interested in taking more and more of your margin once they figure out how your business works (and that's going to be really easy because you help them train AI models to do your business).
newsclues · 3 months ago
It’s the same as robots in a factory.
dahcryn · 3 months ago
both make a lot of sense, but the biggest mistake they make is to see people as capacity, or as a counter.

Each human can be a bit more productive; I fully believe 10-15% is possible with today's tools if we do it right. But each human has a unique set of experience and knowledge. If we are a team of 10 and we all do our jobs 10% faster, that doesn't mean you can let one of us go. It just means we all do our jobs 10% faster, which we probably waste by drinking more coffee or taking longer lunch breaks.

loudmax · 3 months ago
Organizations that successfully adapt are those that use new technology to empower their existing workers to become more productive. Organizations looking to replace humans with robots are run by idiots and they will fail.
jf22 · 3 months ago
This part was never quiet...

The quiet part out loud phrase is overused.

alberth · 3 months ago
McKinsey has pitched my company on projects where their compensation is entirely outcome-based — for example, if a project generates $20 million in incremental revenue, they would earn 10% of that amount.

I have to admit, the results they demonstrated — which we validated using our own data — were impressive.

The challenge, however, is that outcome-based contracts are hard for companies to manage, since they still need to plan and budget for potential costs upfront.

So even when you have measurable benefits - it's still not so easy either.

EDIT:

To clarify the issue — companies are used to budgeting for initiatives with fixed costs. But in an outcome-based contract, the cost is variable.

As a result, finance teams struggle to plan or allocate budgets because the final amount could range widely — for example, $200K, $2M, or even $20M — depending on the results achieved.

Additionally, you then almost need a partial FTE just to manage these contracts, to ensure you don't overpay because the results were measured wrongly, etc.

None of these challenges are insurmountable, but it's also not easy for companies either.
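
The budgeting problem described above can be sketched in a few lines. The 10% rate and the outcome range are the comment's illustrative figures, not real contract terms:

```python
# Why an outcome-based fee is hard to budget: the cost is a fixed
# percentage of a highly uncertain outcome. Numbers are illustrative,
# taken from the comment's examples.

FEE_RATE = 0.10  # consultant takes 10% of incremental revenue

def consultant_fee(incremental_revenue: float) -> float:
    """Variable cost owed to the consultant for a given outcome."""
    return incremental_revenue * FEE_RATE

# Plausible incremental-revenue outcomes span two orders of magnitude,
# so the fee finance has to reserve for does too:
for revenue in (2_000_000, 20_000_000, 200_000_000):
    print(f"revenue +${revenue:>11,} -> fee ${consultant_fee(revenue):>10,.0f}")
```

A fixed-cost engagement is a single line in the budget; here the reserve has to cover the whole range, which is exactly the planning problem described.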

caminante · 3 months ago
What's upfront about a backloaded earnout?

You model it as a fixed %, variable cost and run revenue sensitivities. It either meets your investment criteria or doesn't.

flumpcakes · 3 months ago
Perhaps their advice needs expenditure up front - for example if they suggested using blue photocopiers and you only have pink ones. You would have to spend the money on blue photocopiers before you see the return, and before they see their services fee paid?
mschild · 3 months ago
I'd imagine the opportunity cost and man power. Even though McKinsey should do the work they will need access to people and information to accomplish it.

stronglikedan · 3 months ago
> they still need to plan and budget for potential costs upfront

Same reason they ask for "estimates" which they later try to hold you to as "quotes" when it suits them. Same reason I 3x my initial estimates.

halper · 3 months ago
How is that hard? They put 90% of their estimated revenue as net revenue (post-McK tax) in the budget? Seems about as hard as the underlying problem, which is guessing ("forecasting") the revenue.
dahcryn · 3 months ago
that's usually in areas where they are very certain.

I'd be surprised if they'd do that for GenAI projects, maybe only for really good clients that pay them 50mln+ a year anyway

athrowaway3z · 3 months ago
Well, that's fucking scary. I'd be digging deep if I were on the board.

One of three things is going on:

- the execs are leaving a laughably easy $20M on the table that McKinsey knew they'd make (how did they know, and why didn't we?)

- they're dealing in insider information, which is especially dangerous if McKinsey is changing dependencies around

- they're doing some creative accounting

jmuguy · 3 months ago
I hadn't ever tried Notion before but I sort of vaguely understood it was a nice way to make some documentation and wiki type content. I had a need for something like a table that I could filter that I would normally just do in Google Sheets. So I go check out Notion and their entire site is focused on AI. Look at what this agent can do, or that. I signed up and the entire signup flow is also focused on AI. Finally I was able to locate what I thought was their core offering - the wikis etc. And ended up pretty impressed with the features they have for all of that.

Now maybe Notion customers love all these AI features but it was super weird to see that stuff so prominently given my understanding of what the company was all about.

tveita · 3 months ago
It's for investors AFAICT. When Masayoshi Son opens your home page, it had better say 'AI' in big bold letters.

Is your product a search engine? It's AI now. [1][2]

Is it a cache? Actually, it's AI. [3]

A load balancer? Believe it or not, AI. [4]

[1] https://www.elastic.co/

[2] https://vespa.ai/

[3] https://redis.io/

[4] https://www.f5.com/

ludicrousdispla · 3 months ago
but do they use 'native AI' ?

>> https://about.gitlab.com/

rkachowski · 3 months ago
whoa I'm out of the loop, what the fuck happened to redis?
walkabout · 3 months ago
Approximately 95% of my experience using "AI" so far is as something I accidentally activate then waste a few seconds figuring out how to make it stop. What little I've seen of other people's experiences with it on e.g. screen sharing calls mirrors my own. I saw someone the other day wrestling with Microsoft's AI stuff while editing a document and it was comically similar to Clippy trying to help but just fucking things up, except kinda worse because it was a lot less polite about it.

(And I develop "AI" tools at my day job right now...)

an0malous · 3 months ago
The startup I work at is doing the same strategy pivot; we're integrating AI into every feature of the platform. Every textbox or input field has the option to generate its value from AI. Features that no one used when they were a simple form with a button can now be done through our chatbot. We have two key product metrics for the entire company, and one of them is how many AI tokens our users are generating.
liveoneggs · 3 months ago
My job is talking like this too, but I don't understand why we need to keep any of the textboxes at all if the bot is populating everything.
Aperocky · 3 months ago
AI tokens that you pay for?
wintermutestwin · 3 months ago
Notion customer here and their AI crap keeps interrupting my workflow. Pretty stupid move on their part because they have motivated me to ditch the subscription.
geerlingguy · 3 months ago
They used to be like a really easy to use collaborative wiki. And I used it for a couple distributed projects and loved that aspect.

But I'm guessing their growth was linear, and hard fought, after initial success over tools like Atlassian's which are annoying and expensive.

So to get back to hypergrowth, they had to stuff AI in every nook and cranny.

pydry · 3 months ago
the sad part is that it wasn't entirely nonsensical to use AI to improve Notion's use as a knowledge base, but the way they actually used it was the most hamfisted possible.
throwaway0123_5 · 3 months ago
I'm a heavy Notion user and haven't once used the AI features. I use AI on a near-daily basis outside Notion, but it just isn't something I need from Notion. On the other hand at least it isn't that intrusive in Notion unlike in some other apps.
mountainriver · 3 months ago
You kind of have to be, or a competitor will come out being AI-first and may get a bunch of funding.
dwb · 3 months ago
Just tried Notion AI to build me a mermaid diagram and it was totally useless. So probably not bothering with that again. I can write good enough English without it and I don’t want to sound like generic slop anyway.
ozgrakkurt · 3 months ago
Would strongly recommend avoiding notion. They have super scummy practices for billing, removing users from company account etc.
inquirerGeneral · 3 months ago
Just read a single interview with the CEO; they are all-in on AI.
_fizz_buzz_ · 3 months ago
I recently talked to someone who works at a company that builds fairly complicated machinery (induction heating for a certain kind of material processing). He works in management, and they did a week-long workshop with a bunch of the managers to figure out where AI would make their company more efficient. What they came up with was that they could feed a spec from a customer into an AI, and the AI would create the CAD drawings, wiring diagrams, software etc. by itself. And they wrote a report on it. And I just had to break it to him: the thing AI is actually best at is replacing these week-long workshops where managers are bs-ing around to write reports. Also, it shouldn't be the managers deciding top-down where to deploy AI. Get the engineers, technicians, programmers etc. together and have them run the workshop to plan where to use AI, because they are probably already experimenting with it and understand where it works well and where it doesn't quite cut it yet.
jstummbillig · 3 months ago
> Many software firms trumpet potential use cases for AI, but only 30 percent have published quantifiable return on investment from real customer deployments.

"Only" 30%. Interesting framing.

walkabout · 3 months ago
This kind of "data driven" corporate stuff is, IME, so bullshitty and hand-wavy that if only 30% are even able to claim quantifiable ROI (most of them with laughably bad methodology that a slightly clever 10th grader who half paid attention in science and/or stats class could spot), I'd assume only 5% or fewer actually found ROI.
rsynnott · 3 months ago
This means that only 30% are even _claiming_ to have shown anything quantifiable. Given that such claims tend to be essentially puffery, the _real_ rate is presumably far lower.
fellowniusmonk · 3 months ago
I was an IC consultant for a Big 4 group at one point.

Very successful in my domain on a very successful project.

I wrote an insane amount of code, but more importantly I wrote libraries across multiple languages that prevented an even larger amount of code from being written.

We would have literally 1k people during quarterly planning, and did distributed agile and all this org stuff.

(It was interesting anthropologically to me because I operated outside the game, I was just waiting for a ~non-compete I had signed for a profitable technical co-founder exit to end to jump back into starting a new company in the same space.)

And the whole thing worked. I was very high profile on the project, probably the highest-paid IC, and the company hired me away from the agency; I worked there until starting my company.

There are 3 layers, the deal makers, the coordinators and the implementers.

You cannot easily automate out the deal makers, because they are the loci of trust, legal/contracting, and power (they allocate the firm's resources: people, money, etc.). Someone has to hang if stuff goes wrong, and someone has to deal with executive petulance and fragile egos.

Now lets look at the middle layer and implementers.

Let's assume for a minute we are looking at a big project where the existing company has hamstrung itself with silos, infighting and low-productivity teams. This is just framing to understand the next part; it can cut either way.

The middle layer in consulting is big because client companies have big middle layers as well, and basically what is happening is tribal warfare. You need bodies and voices and change management teams to propagate what is happening, otherwise the existing group will slow-play the leadership and the project never gets done. If the middle layer maps one-to-one, every native sees they can be replaced. Many Big 4 firms allow poaching for this very reason: the threat of non-compliance, plus an easy, congenial out for people ready to exit the consultant lifestyle.

The implementation layer is then able to get its work done, and it's done mostly by juniors, because juniors don't have to be politically savvy; they can work to task.

Just a small slice of the things I realized while consulting.