ryandrake · 2 years ago
Companies keep going at it the wrong way. Instead of saying "We have AI, let's find products we can make out of AI!" they should be saying, "What products do people want, let's use whatever tools we have (including maybe AI) to make them."

The idea that a company is an AI company should be as ridiculous as a company being a Python company. "We are Python-first, have Python experts, and all of our products are made with Python. Our customers want their apps to have Python in them. We just have to 'productize Python' and find the right killer app for Python and we'll be successful!" Going at it from the wrong direction. Replace Python in that quote with AI, and you probably have something a real company has said in 2024.

cooperx · 2 years ago
It's the same as all the "we are a blockchain company" startups that popped up looking for a problem to solve with their tech, rather than the right way round.

However, a lot of those got a bunch of investment or made some decent money in the short term. Very few are still around. We will see the same pattern here.

javiramos · 2 years ago
I had already forgotten about the blockchain

tux1968 · 2 years ago
It was likely the same back when the steam engine was invented. Everyone who could start a steam engine company started a steam engine company, because learning how to be a steam engine company was difficult, new, and unique. It would be a while before anyone found all the products incorporating that new tech that could be sold to people.
KineticLensman · 2 years ago
Tech was very different then. The first commercial steam engine appeared in 1712: a big static thing that could pump water out of mines. It took about 100 years, until 1804, to get to a steam engine small but powerful enough to pull a train. Mines and factories were pretty much the only users for decades, and there were very few people who had a steam engine just for the sake of it.
necroforest · 2 years ago
I don't entirely disagree with you, but "what products do people want" is overly conservative. Pre-ChatGPT, very few people wanted a (more or less) general purpose chatbot.
AdieuToLogic · 2 years ago
> Pre-ChatGPT, very few people wanted a (more or less) general purpose chatbot.

And post-ChatGPT, very few people want to have to deal with "a (more or less) general purpose chatbot."

dragontamer · 2 years ago
One of my local car dealerships is using a chat system of some kind (probably an LLM?).

It's awful and a complete waste of time. I'm not sure LLMs are being put to good use yet, or that general chatbots are ready for business use.

honestjohn · 2 years ago
I don't think so, people have been wanting a general chatbot for a long time. It's useful for plenty of things, just not useful when embedded in random places.
wokwokwok · 2 years ago
To be fair, the overwhelming feedback appears to be that people don't want a general purpose chatbot in every product and website, especially when it's labelled 'AI'.

So... certainly there's a space for new products.

...but perhaps for existing products, it's not as simple as 'slap some random AI on it and hope you ride the wave of AI'.

nunez · 2 years ago
Chatbots were HUGE in the late 2010s!
johnnyanmac · 2 years ago
>very few people wanted a (more or less) general purpose chatbot.

I mean, I still don't. But from a cynical business point of view, cutting customer service costs (something virtually every company at scale has) by automating 99% of customer calls is a very obvious application of a general purpose chatbot.

expand that to "better search engine" and "better autocomplete" and you already have very efficient, practical, and valuable tools to sell. but of course companies took the angle of "this can replace all labor" instead of offering these as assistive productivity tools.

slavboj · 2 years ago
A ton of the industrial revolution was actually motivated by that input-driven thinking. You don't decide you want an Eiffel Tower from first principles, you consider "what is the coolest thing I can make out of wrought iron".
GeneralMayhem · 2 years ago
But the Eiffel Tower is an art project, not something of actual utility...
candiddevmike · 2 years ago
I blogged about our glorious journey Becoming an AI Company (TM)

https://candid.dev/blog/becoming-an-ai-company/

batch12 · 2 years ago
I was unmoved until I hit "Natural Language™".
mondrian · 2 years ago
I 95% agree, but "what people want" is probably not a strong indicator on the thresholds of paradigm shifts, since people don't know what's possible.
johnnyanmac · 2 years ago
I'd look at the corollary: "what people DON'T want" is a stronger (but still imperfect) indicator of how far you can push the Overton window.

If you can't convince people that this is benefiting them, and instead focus on talking to investors about how much of the working class you can kill off (a.k.a. your "customers", and nowadays your "product audience"), you will make it harder to sell either your product or your audience. Companies have forgotten who the real customers are; no wonder their products aren't resonating.

SkyPuncher · 2 years ago
I only partially agree with this. Having spent a lot of time in the “find a problem then the solution” way of working, I’ve found the solutions are often too tame and lack innovation.

When you’re truly bringing novel new value to things, sometimes you need to say “we can do this cool thing, but don’t know what that means”. Simply knowing that capability exists opens you up to better sets of solutions.

8n4vidtmkvmk · 2 years ago
What's wrong with tame and lack of innovation? Sometimes people just need to get things done. There are lots of businesses with basic needs that aren't being met.
mu53 · 2 years ago
AI is trending right now. The most important thing for new companies is finding investors, and those people have been throwing cash at any company with AI.

Customers are also more interested in AI products. The tech industry has stagnated for years with incremental improvements on existing products. ChatGPT and generative AI are new capabilities that draw interest, and companies have been doing anything they can to stand out today.

siruncledrew · 2 years ago
The market is sorting itself out right now, and eventually the wheat will get separated from the chaff.

Every cycle, there are all types of people hopping on board whatever the hype train is... it's the same mindset as prospecting for gold in the Wild West.

I just hope we can move along more in the "wheat" direction with AI products. There's so much low-effort crap already out there.

ants_everywhere · 2 years ago
There's still a lot of real work to be done knowing what can be built and operated profitably, because the underlying tech is so new.

So just zooming out, we need people trying to figure out what can be built with this Lego set. We also need people like you're saying to work the other side so everyone can meet in the middle.

sroussey · 2 years ago
This has been the case for decades. Look at the internet and .com’s. Mobile. Etc.
slg · 2 years ago
It really is one of the more effective ways to identify a bubble when companies shift to selling themselves on the tools and technology they use rather than the problem they are solving.
widenrun · 2 years ago
You are forgetting that marketing is temporal. Fifteen years ago you could sell your software as the Cloud version of a legacy app. Right now, there's a window where being the AI version will get you a call.
lallysingh · 2 years ago
That requires that you understand the capabilities and limitations of the tech far better than anyone currently does. So instead, "let's see what we can do with this" is the underlying approach.
kohbo · 2 years ago
Might be a good philosophy for a hobbyist, not (usually so) for a business.
tim333 · 2 years ago
A Python company is too specialized, but software companies are a thing. Maybe AI will be another tool for software companies.
_xiaz · 2 years ago
To be fair, Astral is the Python company, and thank god they are. I love ruff and uv.
seydor · 2 years ago
People want a faster horse
goatlover · 2 years ago
Also one that flies, fueled by atomic power.
johnnyanmac · 2 years ago
if you're some Python contractor company, the angle makes sense. but of course, very few AI companies are out there trying to help others solve problems.
xbmcuser · 2 years ago
This is how things evolve. Everything was a .com company when the internet started going mainstream; then the real product and service providers were left standing.
diatone · 2 years ago
Ehhh it’s a spectrum. First you innovate, then you commercialise. Even Google took a few years to successfully monetise and they weren’t the first mover in web search. LLMs have been around for, what, coming up on three years? Probably two to four more years to see results.

DylanDmitri · 2 years ago
I’m seeing a lot of meh products that take like 4 units of effort to integrate. I think multiple LLMs, deeply integrated into a cohesive product with 100+ effort units, can be great. An AI that’s familiar with every settings menu on Windows would be awesome.
highfrequency · 2 years ago
I'm not so sure. When a technological wave is big enough, it seems reasonable to start by asking: "what business can be built on this exponential wave?" This is contrary to standard YC advice (make something people want right away, don't create a solution in search of a problem) but empirically a lot of big companies started this way:

- Bezos saw the growth rate of the internet, spent a few months mulling over the question: "what business would make sense to start in the context of massive internet adoption" and came up with an online bookstore.

- OpenAI's ChatGPT effort really began when they saw Google's paper on transformers and decided to see how far they could push this technology (it's hard to imagine they forecasted all the chatbot usecases; in reality I'm sure they were just stoked to push the technology forward).

- Intel was founded on the discovery of the integrated circuit, and again I think the dominant motivation was to see how far they could push transistor density with a very hazy vision at best of how the CPUs would eventually be used.

I think the reason this strategy works is that the newness of a truly important technology counteracts much of the adverse selection of starting a new business. If you make a new To-Do iPhone app, it's unlikely that people have overlooked a great idea in that space over the last 10 years. But if lithium ion batteries only just barely started becoming energy dense enough to make a car, there's a much more plausible argument why you could be successful now.

Said another way: "why hasn't this been done before?" (both by resource-rich incumbents as well as new entrants) is a good filter (and often a limiting one) for starting a business. New technological capabilities are one good answer to this question. Therefore if you're trying to come up with an idea for a business, it seems reasonable to look at new technologies that you think are actually important and then reason backward to what new businesses they enable.

Two additional positive factors I can think of:

1. A common dynamic is that a new technology is progressing rapidly but is of course far behind traditional solutions at the outset. Thus it is difficult to find immediate applications, even if large applications are almost guaranteed in 10-20 years. Getting in early - during the borderline phase where most applications are very contrived - is often a big advantage. See the Tesla Roadster (who wants a $100k electric sports car with 200mi range and minimal charging network?), early computers (what is the advantage of a slow machine with no GUI over doing work by hand?), and perhaps current LLMs (how valuable is a chatbot that frequently hallucinates and has trouble thinking critically in original ways?). It's the classic Innovator's Dilemma - we overweight the initial warts and don't properly forecast how quickly things are improving.

2. There is probably a helpful motivational force for many people if they get to feel that they are on the cutting edge of technology that interests them and building products that simply weren't possible two years ago.

lispisok · 2 years ago
You're suggesting the boring business way of doing things. The tech ecosystem is full of startups doing exactly the ridiculous thing you described: chasing the hot new thing and raising huge amounts of money off the hype. This AI hype cycle is really bad, and before it we had cryptocurrency.
rustypotato · 2 years ago
> But when developers put AI in consumer products, people expect it to behave like software, which means that it needs to work deterministically. If your AI travel agent books vacations to the correct destination only 90% of the time, it won’t be successful.

This is the fundamental problem that prevents generative AI from becoming a "foundational building block" for most products. Even with rigorous safety measures in place, there are few guarantees about its output. AI is about as solid as sand when it comes to determinism, which is great if you're trying to sell sand, but not so great if you're trying to build a huge structure on top of it.

BobbyJo · 2 years ago
I've made this statement a bunch in other mediums: The reason AI software is always "AI software" and not just a useful product is because AI is fallible.

The reason we can build such deep and complex software systems is because each layer can assume the one below it will "just work". If it only worked 99% of the time, we'd all still be interfacing with assembly, because we'd have to be aware of the mistakes that were made and deal with them; otherwise the errors would compound until software was useless.
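
The compounding in that argument is easy to quantify. A minimal sketch (the 99% figure is just the hypothetical above, and independence between layers is an assumption):

```python
# If each layer of a stack succeeds independently with probability p,
# the whole stack succeeds with probability p ** layers.
def stack_reliability(p: float, layers: int) -> float:
    return p ** layers

# A fully deterministic layer (p = 1.0) composes freely;
# a 99%-reliable layer compounds badly as the stack deepens.
print(round(stack_reliability(0.99, 10), 3))  # 0.904
print(round(stack_reliability(0.99, 50), 3))  # 0.605
```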

Until AI achieves the level of determinism we have with other software, it'll have to stay at the surface.

mondrian · 2 years ago
Recent work from Meta uses AI to automatically increase test coverage with zero human checking of AI outputs. They do this with a strong oracle for AI outputs: whether the AI-generated test compiles, runs, and hits yet-unhit lines of code in the tested codebase.

We probably need a lot more work along this dimension of finding use cases where strong automatic verification of AI outputs is possible.
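
The shape of such an oracle is simple to sketch. This is not Meta's actual pipeline, and it omits their coverage requirement (the generated test must hit yet-unhit lines); it only illustrates filtering AI output through hard pass/fail checks with no human in the loop:

```python
import ast

def passes_oracle(candidate_source: str) -> bool:
    """Keep an AI-generated test only if it parses, compiles, and runs
    cleanly; anything else is discarded without human review."""
    try:
        code = compile(ast.parse(candidate_source), "<candidate>", "exec")
        exec(code, {})  # a failed assertion or runtime error rejects it
        return True
    except Exception:
        return False

print(passes_oracle("assert sorted([3, 1, 2]) == [1, 2, 3]"))  # True
print(passes_oracle("assert sorted([3, 1, 2]) == [3, 2, 1]"))  # False
print(passes_oracle("not even valid python !"))                # False
```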

nforgerit · 2 years ago
You hit the nail on the head. It's been almost tragically funny how people have frantically tried to juggle 5 bars of wet soap over the past two years, solving problems that (from what I've seen so far) had already been solved in a (boring) deterministic way using far fewer resources.

Going further: our predecessors put so much work into wrangling non-deterministic electronics into a stable and _correct_ platform that it looks ridiculous to squeeze another layer of non-determinism in between to solve the same classes of problems.

diatone · 2 years ago
The irony here is that there are many domains using statistical methods that successfully bound the complexity and failure modes of those methods. A lot of people struggle with statistics, but in domains where the glove fits I think AI will slot in all across the stack really nicely.
loa_in_ · 2 years ago
But software only works 99% of the time, for some definition of "works": 99% of the days it's run, 99% of clicks, 99% of CPU time in a given component, 99% of versions released and linked into some business's production binary, 99% of GitHub tags, 99% of commits, 99% of the software that that one guy says is battle-tested.
slidehero · 2 years ago
Structured outputs help, and paired with regular old systems design I think you can get pretty far. It really depends what you're building, though.

>If your AI travel agent books vacations to the correct destination only 90% of the time

That would be using the wrong tool for the job. An AI travel agent would be very useful for making suggestions, either for destinations or for a list of suggested flights, hotels, etc., and could then hand off to your standard systems to complete the transaction.

There are also a lot of systems that tolerate "faults" just fine, such as image/video/audio generation.
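
One hedged sketch of that split: let the model suggest, but gate anything transactional behind strict validation. The field names and the destination allow-list here are invented for illustration:

```python
import json

ALLOWED_DESTINATIONS = {"CDG", "JFK", "NRT"}  # hypothetical booking inventory

def validate_suggestion(llm_output: str):
    """Parse the model's JSON suggestion; return it only if every field
    passes strict checks, otherwise signal a retry or human fallback."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError:
        return None
    if data.get("destination") not in ALLOWED_DESTINATIONS:
        return None
    nights = data.get("nights")
    if not isinstance(nights, int) or not 1 <= nights <= 30:
        return None
    return data  # only now does the deterministic booking system take over

print(validate_suggestion('{"destination": "NRT", "nights": 7}'))
print(validate_suggestion('{"destination": "Narnia", "nights": 7}'))  # None
```

The model never completes the transaction itself; it only produces a candidate that deterministic code either accepts or rejects.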

chasd00 · 2 years ago
> that would be using the wrong tool for the job. an AI travel agent would be very useful for making suggestions

But that’s a recommendation engine and we have that already all over the place.

hellovai · 2 years ago
I 100% agree. People get so caught up on trying to do everything 90% right with AI that they forget there's a reason most websites offer at least 2 9's of uptime.
aprilthird2021 · 2 years ago
> If your AI travel agent books vacations to the correct destination only 90% of the time, it won’t be successful.

Well, I don't agree. I think there are ways to make this successful, but you have to be honest about the limitations you're working with and play to your strengths.

How about an AI travel agent that gets your itineraries at a discount with the caveat that you be ready for anything. Like old, cheap standby tickets where you just went wherever there was an empty seat that day.

Or how about an AI Spotify for way less money than current Spotify. It's not competing on quality, it can't. Occasionally you'll hear weird artifacts, but hey it's way cheaper.

That could work, imo

ikr678 · 2 years ago
We've had good, free (non ai) media recommendation tools in the past and they got killed by licensing agreements.

AI is creating a post-scarcity content economy where quality is going to be the only driver of value.

If you are the rights holder of any premium human created media content you are not going to let a 'cheap' AI tool get access to recommend it out to people.

8n4vidtmkvmk · 2 years ago
The AI travel agent is trivial to solve though. It's the same as the human travel agent. Put the plan and pricing together, then give it to the user to sign and accept. Do it in an app, do it in an email, do it on a piece of paper, whatever floats your boat, but give them something they can review and accept instead of trying to do everything verbally or in a basic chat interface.

I'm not disagreeing with the "needs to work deterministically" -- there is a need for that, but this is a poor example. "Hey robot, plan a trip to Mexico" might still save me time overall if done right, and that has value.

MattGaiser · 2 years ago
It just needs to beat all the other non-deterministic processes at accuracy.

Call centre workers are often dreadfully inaccurate as well. Same with support engineers.

Heck even for banking, there are enormous teams fixing every screw up made by some other employee.

davidsgk · 2 years ago
I have a question for folks working heavily with AI black boxes: what methods do companies use to test the quality of outputs? Testing the integration itself can be treated much like testing any third-party service, but what I've seen is some teams using models to test the output quality of other models... which instinctively doesn't seem great.
8n4vidtmkvmk · 2 years ago
Take this with a grain of salt because I haven't done it myself, but I would treat this the same as testing anything that uses some element of random.

If you're writing a random number generator that generates numbers between 0 and 100, how would you test it? Throw your hands up in the air and say nope, can't test it, it's not deterministic? Or maybe you can just run it 1000 times and make sure all the numbers are indeed between 0 and 100. Maybe count up the number frequencies and verify they're uniform. There are lots of things you can check for.

So do the same with your LLMs. Test it on your specific use-cases. Do some basic smoke tests. Are you asking it yes or no questions? Is it responding with yes or no? Try some of your prompts on it, get a feel for what it outputs, write some regexes to verify the outputs stay sane when there's a model upgrade.

For "quality" I don't think there's a substitute for humans. Just try it. If the outputs feel good, add your unit tests. If you want to get scientific, do blind tests with different models and have humans rate them.
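
The RNG analogy translates directly into code. A toy sketch, with a real RNG standing in for the nondeterministic component (in practice that would be an LLM call):

```python
import random

def flaky_component() -> int:
    """Stand-in for any nondeterministic call, e.g. an LLM."""
    return random.randint(0, 100)

# We can't assert on any single output, but we can assert properties
# over many samples: the range holds and the outputs aren't degenerate.
samples = [flaky_component() for _ in range(1000)]
assert all(0 <= n <= 100 for n in samples)
assert len(set(samples)) > 50  # far more distinct values than a stuck RNG
print("property checks passed")
```

The same pattern works for LLM smoke tests: swap the range check for "response matches yes/no", "output parses as JSON", or a regex over the expected shape.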

EasyMark · 2 years ago
But a knowledgeable human can take the itinerary and run with it. I know I've done that enough with AI-generated code; it's basically boilerplate. You still run it through the same tests, reviews, and verification as you would have had to anyway.
hx2a · 2 years ago
And yet, generative AI also seems to be poor at randomness. When I asked Google Gemini for a list of 50 random words, it gave me a list containing only 18 unique words, with 16 of them repeated exactly three times.

Abyss: 1, Ambiguous: 3, Cacophony: 3, Crescendo: 3, Ephemeral: 3, Ethereal: 3, Euphoria: 3, Labyrinth: 3, Maverick: 3, Melancholy: 3, Mellifluous: 3, Nostalgia: 3, Oblivion: 3, Paradox: 3, Quixotic: 1, Serendipity: 3, Sublime: 3, Zenith: 3

simonw · 2 years ago
Randomness is difficult. I wouldn't expect any LLM to be able to reliably produce random anything, except in the cases where they have access to tools (ChatGPT Code Interpreter could use Python's random.random() for example).
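
The tool-use fix is worth making concrete: a sketch of the kind of helper a code-interpreter-style setup delegates to, with a made-up vocabulary standing in for whatever word list the tool would actually use:

```python
import random

def random_words_tool(vocabulary: list[str], k: int) -> list[str]:
    """What a tool call can do that next-token sampling reliably
    can't: draw k distinct words uniformly at random."""
    return random.sample(vocabulary, k)

vocab = ["abyss", "zenith", "paradox", "oblivion", "maverick",
         "euphoria", "labyrinth", "nostalgia", "crescendo", "sublime"]
picks = random_words_tool(vocab, 5)
assert len(set(picks)) == 5  # all distinct, unlike the chat transcript
print(picks)
```
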
MattGaiser · 2 years ago
Are you using regular or Pro? Pro has no issues with this task.
Lerc · 2 years ago
Instead of pivoting, can this behaviour be explained by trying lots of different things and then iterating on the ones that show promise?

It's all well and good to say "Make something people want" but for anything that people want usually one of three things is true

1. Someone else is already making it.

2. Nobody knows how to make it.

3. Nobody knows that people want it.

People experimenting with 2 and 3 will have a lot of failures, but the great successes will come from those groups as well.

Sure, every trend in business has a lot of companies going "we should do this because everyone else is." It was a dumb idea for previous trends and it is a dumb idea now. Consider how many companies did that with the internet: there were a lot of poorly thought-out forays into having an internet presence. Of those companies still around, pretty much all now have an internet presence that serves their purposes. They transitioned from "because everyone else is" as their motivation to "we want specific abilities x, y, and z."

Perhaps the best way to get from "everyone else is doing it" to knowing what to build is to play in the pool.

8n4vidtmkvmk · 2 years ago
That's exactly what these companies are doing. They're trying a lot of different ideas, and seeing what sticks. The problem is that they're annoying users and causing distrust.
frabjoused · 2 years ago
I’m building an integration platform. There’s a thousand ways to deeply embed AI throughout it, both to build integration workflows faster, and to help us build smarter API wrappers faster.

But AI has always been a secondary augmentation to the product itself. It’s a tool; it shouldn’t be the other way around.

FpUser · 2 years ago
ChatGPT is very useful for me, to the point that I pay the subscription fee. To me it IS the product.
8organicbits · 2 years ago
I haven't found a use for it. What do you use it for?
t-writescode · 2 years ago
I find it to be orders of magnitude more helpful at getting me started on the research journey when I don't know how to formulate the question of what I'm researching.

I find it useful for:

  * throwing ideas at a wall and rubber-ducking my emotional state and feelings.
  * creating silly, meme images in strange circumstances, sometimes.
  * answering simple "what's the name of that movie / song / whatever" questions
Is it always right? Absolutely not. Is it a good starting point? Yes.

Think of it like the school and the early days of Wikipedia. "Can I use Wikipedia as a source? No. But you can use it to find primary sources and use those in your research paper!"

alentred · 2 years ago
It is my new Google, ever since Web Search and the Web itself stopped working as a source of information.

When I look for answers to specific questions, I either search Wikipedia or ask ChatGPT. "Searching the Internet" doesn't work anymore with all the ads, pop-ups, and "optimized" content I have to wade through before I find the answers.

cm2012 · 2 years ago
It's outrageously good at translation. I can set it to record next to my wife and her grandma speaking in Korean, through phone speakers, and it creates a perfect transcription and translation. Insane.
kamaal · 2 years ago
Everything!

It's like talking to an intelligent person about a topic you want to learn, who knows it well enough to teach you if you keep asking questions.

FpUser · 2 years ago
For many things in general, and for IT-related tasks in particular.

a) Write me a shell script that does this and that. b) What Linux command, with what arguments, do I call to do such and such? c) Write me a function / class in language A/B/C that does this and that. d) Write me a SQL query that does this and that. e) Use it as a reference book for any programming language or whatever other subject.

etc. etc.

The answers sometimes come out wrong and/or contain non-trivial bugs. Being an experienced programmer, I usually have no problem spotting those just by looking at the generated code or by running a test case created by ChatGPT. Alternatively, there are no bugs but the approach is inefficient; in that case, pointing out why it is inefficient and what to use instead gets ChatGPT to fix the approach. Basically it saves me a shit ton of time.

guywithahat · 2 years ago
I use it to auto generate responses to ridiculous rhetorical questions on hacker news
cm2012 · 2 years ago
It gives amazing medical advice and answers, way better than WebMD or frankly even most primary care physicians I've seen.
honestjohn · 2 years ago
Yeah, ChatGPT itself is amazing. What I don't understand is, why are other companies paying so much for training hardware now? Trying to make more specialized LLMs now that ChatGPT has proven the technology?
marymkearney · 2 years ago
Just a heads-up. I'm interested in your online workshop link, but it's private.

https://sites.google.com/princeton.edu/agents-workshop

pphysch · 2 years ago
Google has been productizing AI for a while now. 2021 Pixels have the Tensor SoC, which was explicitly marketed as an AI chip. Chatbots weren't part of the equation back then, but offline image translation, Magic Eraser, etc. certainly were.