I'm struggling with the CEO being increasingly focused on investing heavily in AI. I'm not opposed to using this tech at all – it's amazing, and we incorporate a variety of different ML models across our stack where they are useful. But this strategy has evolved to the point where we are limiting resources on key teams aligned with the core business in order to invest in an AI team.
The argument seems to be that they've realized the only way to achieve the next round of funding is to be "AI-first". There is no product roadmap for what this looks like, or what features might be involved, or why we'd want to do it from a product point of view. Instead the reason is that this is the only way to attract a big series C round.
I'm not well-informed enough to know if this is the correct approach to scaling. Instead of working on useful, in-demand product features, it feels like we're spending a lot of time looking at a distant future that we'll struggle to reach if we take our eye off the ball. Is this normal? Are other organizations going through the same struggle? For the first time in five years I feel completely out of my depth.
The question is whether the key value of your product can benefit from the strengths of AI. If not, don't go there. If so, you then need to determine whether your team can actually deliver an AI-driven vision that enhances the existing value prop. Again, if not, don't go there. If so, do it.
But from your description, your CEO is not asking those questions - they are asking, "How do we get more funding?" That tells me your CEO doesn't give a crap about building a product; they are just trying to make money and get some nice bullet points on their resume about the size of the company they led.
That puts you in the position of choosing whether you want to go on a VC-driven startup ride just to have the experience, or whether you want a product-driven role. People have their reasons for both directions, but if you want a product-driven role, you are out of alignment with your CEO and probably shouldn't work for them.
This causes too many company executives to think 'if we go all in on [latest trend] then the same thing will happen to us'. Very little thought is given to whether the latest trend is a good fit for what they are building.
When it is a good fit, things can really work to your advantage. When it is not, it can just be a monkey on your back.
Folks work in mission-driven organizations. In organizations that are optimizing for dollars, you see quality fall by the wayside.
Mission-driven organizations that understand the levers of capital have been the most enjoyable workplaces of my career.
You want a CEO who is customer obsessed, not a CEO who is maximizing for dollars in the bank at all costs.
It has to be mission first for a startup to be great, really for any critical endeavor.
Succeed based on what criteria? I struggle to think of a single product with "AI" that I'd call successful (excluding the already existing and established niches of recommendation algorithms and computer vision which have been rebranded AI and maybe are what you're referring to).
Can you give any hints as far as industry or product?
I've seen folks wanting big changes, and when we sit down and draw things up, surprisingly it looks like the old roadmap. Everyone is happy, I don't say anything about the similarities, and most importantly we're on the same page ;)
Sometimes when leadership asks for big changes (even when they say big changes), you might find, once you get into the details, that it's not THAT big a change.
In the meantime, don't let your mind run too wild assuming and worrying about what they might be asking for. I do that too; it's a developer thing, I think: we start considering possibilities, or even misused terminology, and we get into a loop ;)
> I've seen folks wanting big changes, and when we sit down and draw things up, surprisingly it looks like the old roadmap.
I've found myself doing this as well. It also helps to reason through new nuances in the same evidence, because we're wiser looking at the same data in the future, or in this case, in the present (looking back at what was drawn up in the past).
Include as much non-AI work as you can in the AI teams' projects, pitch an "AI efficiency initiative" that minimizes new spend on AI with the justification that other teams will pick up the slack, talk up whatever ML you're already doing, etc.
(Also, huh - I wonder if that’s actually directly doable; “assembled by AI out of programmed outputs”?)
Don't forget to name it after a dog, too.
E.g.: if you don't work on AI now and AI models keep improving, how likely is it that a competitor who integrates AI well will eat your lunch? If it's >50%, it seems worth it to shift some focus to AI regardless of the series C round.
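As a minimal sketch of that rule of thumb (the probability here is an illustrative assumption, not an estimate from this thread), the decision reduces to a single threshold check:

    # Illustrative sketch of the ">50% -> shift some focus" rule of thumb above.
    # The estimate is an assumed placeholder, not a real figure.

    p_displaced_without_ai = 0.6   # your guess: an AI-integrating competitor eats your lunch
    threshold = 0.5                # the rough cut-off suggested above

    if p_displaced_without_ai > threshold:
        print("Shift some focus to AI, independent of the series C story.")
    else:
        print("Keep focus on the existing roadmap; revisit the estimate as models improve.")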
This post from a few days ago has some great tips on how to integrate AI _well_: https://koomen.dev/essays/horseless-carriages/
I like the post and what it goes over. I agree with huge swaths of it, especially the misuse of AI tools right now. I don't agree that anyone doing it that way (like the Gmail team called out over and over here) is either stupid or naive. I don't agree every consumer tool should become an agent-building playground. I don't agree that building said playgrounds is easy (I think it's much, much harder to design a good agent-building version of a product than a good product). I don't agree consumers want everything they use to work this way. I also think it ignores the very real problem of AI bullshitting and the handcuffs that puts on people using it for mission-critical things like paying the mortgage.
Ironically, I think this post falls into its own trap of not thinking about the next step. Yeah, a really good email agent product as demoed sounds great in a world where nothing works this way yet. However, a world where every product I use has to be re-engineered from scratch, with various unknown and non-customizable (and enshittifying) LLMs under it, with various training and fine-tuning, unknown access to data, and no interop, is the wood-frame horseless carriage the author is mocking. That would be a terrible situation, worse than the current one.
Rethinking for a world of AI agents, it would be better if products empowered consumer agents instead of trying to supplant them. In THAT world, products stay simple and just expose a bunch of tools and patterns that each consumer's custom agents can use effectively. Making that work well for your own product is actually viable AND helpful AND doesn't force your users to change their behavior if they don't want to. Anything I, the consumer, do to handle or prompt my own agent pays off across my entire ecosystem, and investment can be focused on the right things by the right people (i.e. Gmail should make email tools, not agent tools).
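A minimal sketch of that "expose tools, let consumers bring their own agent" idea; every name here (Tool, search_mail, send_mail) is hypothetical and only illustrates the shape, not any real product's API:

    # Hypothetical sketch: the product publishes plain, well-described operations;
    # whatever agent the user runs can discover and call them. Nothing here is
    # tied to one vendor's agent framework.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Tool:
        name: str
        description: str
        handler: Callable[..., str]

    def search_mail(query: str) -> str:
        # Stand-in for the product's real search backend.
        return f"results for {query!r}"

    def send_mail(to: str, subject: str, body: str) -> str:
        # Stand-in for the product's real send endpoint.
        return f"sent {subject!r} to {to}"

    # The product's job ends here: describe the tools clearly and keep them stable.
    EXPOSED_TOOLS = [
        Tool("search_mail", "Search the user's mailbox by free-text query.", search_mail),
        Tool("send_mail", "Send an email on the user's behalf.", send_mail),
    ]

    # The consumer's own agent picks a tool and calls it.
    tool = next(t for t in EXPOSED_TOOLS if t.name == "search_mail")
    print(tool.handler("mortgage statement"))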
If investors only want AI, then you will either be forced to do that (whatever it means), do something else but make it look to them like AI, convince them they're wrong, or quit.
Things are a little more nuanced if your existing investors side with you but all new investors are AI-driven. Then you don't have to quit, but you do have to run the business without new outside capital. This may or may not be possible. If not, you still have to quit.
Well, then you're bound to the amount of money customers can give you.
It's usually less than a VC partner with a god complex can give you.
Note that starting a business with your own capital is not a way to escape it either, since you still have to answer to customers.
Couldn't be more excited for this kind of short-sighted, AI-instead-of-people thinking to become shameful. How many people need to aim the gun at their foot and pull the trigger before they stop bragging as they do it?
It's not going away. It's an elevator of low ability.
Those with low skills or ability can make great use of it to elevate their skills and abilities.
Those with high skills and abilities will not use it for much, but will notice that the skill gap between themselves and those with lower skills gets smaller, faster, and with little effort. Those with lower skills are unlikely to fully match them, but it will level the playing field very quickly and with little effort exerted.
I don't understand what you mean by this. Do you think LLMs are going away? Or is that you think AI-related technologies won't continue to improve?
What is driving the market to assign such a huge value to these models is that they can be sold as the solution to any and every problem, even when they aren't.
I am sorry I have to burst your bubble, but that is not going to happen, ever.
This is the new normal now. We have to accept that reality and adjust accordingly, maybe with new hiring filters or similar. Hopefully these filters won't be just an LLM hallucinating random things, but realistically speaking... yeah, it's gonna be that, isn't it?
This is exactly the sort of thing people start saying before a bubble pops.
As the engineer you need to support all three, while being the voice of realism (internally). But don't get confused when they conflict -- they always conflict to some degree.
Your product is not really the product. So the only thing that really matters is how your company is perceived by those who could potentially buy it and give your C-suite an exit.