Every time some product or service introduces AI (or more accurately shoves it down our throats) people start looking for a way to get rid of it.
It's so strange how much money and time companies are pouring into "features" that the public continues to reject at every opportunity.
At this point I'm convinced that the endless AI hype and all the investment is purely due to hopes that it will soon put vast numbers of employees out of work and allow companies to use the massive amounts of data they've collected about us against us more effectively. All the AI being shoehorned into products and services now is mostly there to test, improve, and advertise the AI being used, not to provide any value for users who'd rather have nothing to do with it.
I work on a project where customers need to fill out a form to receive help. We introduced an AI chatbot that helps them fill out the form by just talking through the problem and answering questions. The form is then filled out for the customer to review before submitting.
Personally I find it slower than just doing it manually, but it has resulted in the form being correct more often and has a lot of usage. There is also a big button when the chat opens that you can click to just fill it out manually.
It has its place, that place just isn't everywhere and the only option.
I did a project a while back that created a wizard to fill in a form - I also found it much easier to simply complete the form, but when we demonstrated it to target users they nearly cried with relief. It was a good reminder of the importance of knowing what users actually want.
I should go back to look at that and see if we could incorporate an easy ChatBot as an improvement.
It's great when it works. Yesterday I needed to contact support for a company but all they had was a chatbot. I explained what information I was looking for and it linked me to something completely irrelevant and asked if this solved my problem - with big buttons to reply yes/no. I pressed "no", which simply caused a message with "no" to be sent from me in the chat. The bot replied with "You're welcome!". I wrote a manual clarification that this did not solve my issue. The bot answered "You're welcome". Luckily, I found that ignoring this and asking the question again did work.
I'm sure there's a time and place for these things, but this sounds very much like the echo chamber I hear at work all the time.
Someone has a 'friend' who has a totally-not-publicly-visible form where a chat bot interacts with the form and helps the user fill the form in.
...and users love it.
However, when really pressed, I've yet to encounter someone who can actually tell me specifically
1) What form it is (i.e. can I see it?)
2) How much effort it was to build that feature.
...because, the problem with this story is that what you're describing is a pretty hard problem to solve:
- An agent interacts with a user.
- The agent has free rein to fill out the form fields.
- Guided by the user, the agent helps fill out form fields in a way which is both faster and more accurate than users typing into the fields themselves.
- At any time the user can opt to stop interacting with the agent and fill in the fields, and the agent must understand what's happened independently of the chat context. i.e. The form state has to be part of the chat bot's context.
- At the end, the details filled in by the agent are distinguished from user inputs for user review.
It's not a trivial problem. It sounds like a trivial problem: the agent asks 'what sort of user are you?', parses the answer into one of three enum values (Client, Foo, Bar), and sets the 'user type' field to that value via a custom hook.
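Here's roughly what that "trivial" version looks like as code. This is just a sketch; `UserType` and `parseUserType` are made-up names for illustration, not from any real system:

```typescript
// The naive version of the problem: map a free-text chat answer onto
// one of three enum values. This is the part that *sounds* trivial.
type UserType = "Client" | "Foo" | "Bar";

function parseUserType(answer: string): UserType | null {
  const normalized = answer.trim().toLowerCase();
  if (normalized.includes("client")) return "Client";
  if (normalized.includes("foo")) return "Foo";
  if (normalized.includes("bar")) return "Bar";
  // Real users say things like "uh, neither, I'm asking for my mum" -
  // this is where the edge cases start.
  return null;
}
```

The hard part is everything this sketch ignores: answers that match none of the values, answers that match two, and keeping the parsed value in sync with fields the user later edits by hand.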
However, when you try to actually build such a system (as I have), then there are a lot of complicated edge cases, and users HATE it when the bot does the wrong thing, especially when they're primed to click 'that looks good to me' without actually reading what the agent did.
So.
Can you share an example?
What does 'and has a lot of usage' mean in this context? Has it increased the number of people filling in the form, or completing it correctly (or both?) ?
I'd love to see one that users like, because, oh boy, did they HATE the one we built.
At the end of the day, smart validation hints on form input fields are a lot easier to implement and are well understood by users of all types in my experience; it's just generally a better, more conventional way of improving form conversion rates, one that is well documented, understood, and measurable using analytics.
...unless you specifically need to add "uses AI" to your slide deck for your next round of funding.
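For comparison, an inline validation hint is the kind of thing you can sketch in a few lines. This is a hypothetical helper, not from any project mentioned here:

```typescript
// Minimal sketch of a "smart validation hint": inspect a field value
// and return a human-readable hint, or null when the value looks fine.
function emailHint(value: string): string | null {
  if (value.trim().length === 0) return "Email is required.";
  if (!value.includes("@")) return "That doesn't look like an email address (missing @).";
  if (/\.con$/.test(value)) return "Did you mean .com?";
  return null; // no hint needed
}
```

No chat context, no agent state to reconcile, and the user stays in control of every keystroke.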
I am totally in the same boat, but I also suspect we're a minority. It's the same way that some people really want open source bootloaders, but 99.99% of people do not care at all. Maybe AI assistants in random places just aren’t that compatible with people on HN but are possibly useful for a lot of people not on HN?
> It’s the same way that some people really want open source bootloaders, but 99.99% of people do not care at all.
In fairness to the 99.99% they don't even know what a bootloader is and if they understood the situation and the risks many of them would also favor an open option.
I don't think the rejection of AI is primarily a HN thing though. It's my non-tech friends and family who have been most vocal in complaining about it. The folks here are more likely to have browser extensions and other workarounds or know about alternative services that don't force AI on you in the first place.
I agree, but doesn't that basically mean there are two camps: people who dislike it, and people who don't care? I also agree with GP in that there isn't any visible 3rd camp: people who want it. If google themselves thought people wanted this, they wouldn't need to make an un-dismissable popup in all of their products with one button, "yes please enable gemini for me", in order for people to use it.
I'm sure google thinks that people have some sort of bias, and that if they force people to use it they'll come to like it (just like google plus), but this also shows how much google looks down on the free will of its users.
AI confidence has been dwindling[0][1] so I don't think that's the biggest contributor.
I do think it's as simple as appealing to stakeholders in whatever way they can, regardless of customer satisfaction. As we've seen as of late, the stock markets are completely antithetical to the improvement of people's lives.
The first point does indeed come into play, because most people don't throw enough of a fuss about it. But everything has some breaking point; Microsoft's horribly launched Copilot for Office 365 showed one of them.
I don't think so. I have many nontechnical friends who are furious at having to deal with bad AI, whether it's a stupid chatbot that they have to talk to instead of a real person or Google "AI overviews" that often get things completely wrong.
I agree with this. I'm very surprised when I see someone blindly trust whatever the AI summary says in a google query, because I myself have internalized a long time ago to strongly distrust it.
> Maybe AI assistants in random places just aren’t that compatible with people on HN but are possibly useful for a lot of people not on HN?
Coincidentally, today I received an automated text from my health care entity along the lines of, "Please recognize this number as from us. Our AI will be calling you to discuss your health."
No. I'm not going to have a personal discussion with an AI.
Mainstream press have been covering how much people hate it - people's grandparents are getting annoyed by it. Worse, it comes on the heels of four years of Prabhakar Raghavan ruining Google Search for the sake of trying to pump ad revenue.
It's a steaming pile of dogshit that either provides useless information unrelated to what you searched for (just another thing to scroll past) or, even worse, provides completely wrong information half the time, which means even if the response seems to be what you asked for, it's still useless because you can't trust it.
I think this is the case. Most of my family and friends use and like the various AI features that are popping up but aren't interested thinking about how to coax what they want out of ChatGPT or Claude.
When it's integrated into a product people are more likely to use it. Lowering the barrier to entry so to speak.
> At this point I'm convinced that the endless AI hype and all the investment is purely due to hopes that it will soon put vast numbers of employees out of work
It's this part.
Salaries and benefits are expensive. A computer program doesn't need a salary, retirement benefits, or insurance; it doesn't call in sick, doesn't take vacations, works 24/7, etc.
No it's not. It's because management is tone deaf and out of touch. They'll latch onto literally anything put in front of them as a way out of their inability to iterate and innovate on their products.
Throwing "ai" into it is a simple addition, if it works, great, if it doesn't well the market just wasn't ready.
But if they have to actually talk to their users and solve their real problems that's a really hard pill to swallow, extremely hard to solve correctly, and basically impossible to sell to shareholders because you likely have to explain that your last 50 ideas and the tech debt they created are the problem that needs to be excised.
Who do they think will buy their products if there are no employees anywhere? Businesses, even business facing ones, eventually rely on consumers at some point to buy things. What can be gained by putting everyone out of work?
Humans have no intrinsic value except to convert food to carbon dioxide, most of which are completely useless to the Reich. AI is cheap to train (only a few million dollars per model), and cheap to run:
Data centers will soon outstrip all other uses of electrical power. As for an AI calling in sick: no, but it does need full power 24/7. AI has no creativity, no initiative, no conscience, and absolutely zero ethics.
"In a middle-ground scenario, by 2027 new AI servers sold that year alone could use between 85 to 134 terawatt hours (Twh) annually."
It's interesting how we can frame "potentially automating tasks" in the most sinister conceivable way. The same argument applies to essentially all technology, like a computer.
But this is normal. A new thing is discovered, the market runs lots of tests to discover where it works / doesn’t, there’s a crash, valid use cases are found / scaled and the market matures.
Y’all surely lived thru the mobile app hype cycle, where every startup was “uber for x”.
The amount of money being spent today pales in comparison to the long term money on even one use case that scales. It’s a good bet if you are a VC.
It's a weird cycle though where the order of everything is messed up.
The normal tech innovation model is: 1. User problem identified, 2. Technology advancement achieved, 3. Application built to solve problem
With AI, it's turned into: 1. Technology advancement achieved, 2. Applications haphazardly being built to do anything with Technology, 3. Frantic search for users who might want to use applications.
I don't know how the industry thinks they're going to make any money out of the new model.
> At this point I'm convinced that the endless AI hype and all the investment is purely due to hopes that it will soon put vast numbers of employees out of work
That's certainly part of it.
However, at this point I think a lot of it is a kind of emotional sunk-cost. To stop now would require a lot of very wealthy and powerful people to admit they had personally made a very serious mistake.
It is also possible that we're just in the realm of pure speculation now - if you look at Tesla and NVidia both their valuations are completely imaginary and with the latter standing to benefit a lot by being shovel sellers (but not that much) and the former seeing an active decline in profitability while still watching the numbers go up.
It may be less that people are unaware of the speculative bubble but are just hoping to get in and out before it pops.
Given the amounts being invested in AI, and the cost of even just running an AI service (see how OpenAI loses money on their $200 subscriber accounts), the "for funsies" services like switching an HTML form over to a chatbot are clearly not going to be a realistic return on this technology. I'd argue that even when it comes to code generation, the tools can be useful for green-field prototyping, but the fact that a developer will always need to verify the output of a model means they'll never be more than a marginal economy in that sector.
The outcome that the large companies are banking on is replacing workers, even employees with rather modest compensation end up costing a significant amount if you consider overhead and training. There is (mostly) no AI feature that wall street or investors care about except replacing labor - everything else just seems to exist as a form of marketing.
Adding unwanted features, bolting on an AI assistant, changing to a subscription model, and even automating away employees can all be explained by the following iron rule: C-level leadership lives in abject terror of the numbers not going up anymore. Even if a product is perfect, and everyone who needs it owns it, and it needs no improvement, they must still find a way to make the numbers go up. Always. So, they'll grab hold of any trend which, in their panic, seems like it might be a possible life preserver.
I'll repeat my favorite quote about it (paraphrased and read it here first but don't recall the attribution): AI can copy a song, tell me a joke, predict what I buy, but I still have to do my own dishes.
If AI (or any tech) could clean, do dishes, or cook (which is not a chore for many, I acknowledge that), it could potentially bring families together and improve the quality of people's lives.
Instead they are introducing it as a tool to replace jobs, think for us, and make us mistrust each other ("you sound like an AI bot!", "you just copied that from chatgpt!", "You didn't draw that!", "How do I know you're real?").
I don't know if they really thought through to an endgame, honestly. How much decimation can you inflict on the economy before the snake eats its own tail?
> If AI (or any tech) could clean, do dishes, or cook (which is not a chore for many, I acknowledge that), it could potentially bring families together and improve the quality of people's lives.
One day they'll put those kinds of robots in people's homes, but I'll keep them out of mine because they'll be full of sensors, cameras, and microphones connected to the cloud and endlessly streaming everything about your family and your home to multiple third parties. It's hard enough dealing with cell phones and keeping "smart"/IoT crap from spying on us 24/7 and they don't walk around on their own to go snooping.
The sad thing about every technology now is that whatever benefits it might bring to our lives, it will also be working for someone else who wants to use it against us. Your new smart TV is gorgeous, but it watches everything you see and inserts ads while you're watching a Blu-ray. Your shiny car is self-driving, but you're tracked everywhere you go; there are cameras pointed at you recording and microphones listening the entire time, sending real-time data to police and your insurance company. Your fancy AR implant means you'll never forget someone's name since it automatically shows up next to their face when you see them, but now someone else gets to decide what you'll see and what you aren't allowed to see. I think I'll just keep washing my own dishes.
It's "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."
Which is a stupid argument, since there is "any tech" that can do your laundry and dishes, and it's been around for decades! Is it too hard for you to put your dishes in the dishwasher, or your clothes in the washing machine?
Is the "useful AI" technology any different from this slop? If not then I fear that's wasted money as well. Which I think is the reason this stuff keeps getting shoehorned in. They invested money in training and equipment all of which is depreciating far faster than it has returned value.
Marc Andreessen stated on twitter that was a core reason for why he likes AI, to drive down wages (which in his words "must crash").
So you are not far off from that concept of putting vast numbers of employees out of work, when influential figures like Andreessen are openly stating that is their ambition.
And Larry Ellison wants us all under the eye of AI cameras so that "citizens will be on their best behavior". I almost used the word "panopticon" there, but Ellison is proposing something strictly worse, in that there's no hope of the cameras not being watched.
Yet, some analysts claim the fact that people nevertheless use these awful choices means they like them despite their frequent complaints.
They cite "Revealed Choice", which may apply when there is an actual choice.
But in the nearly winner-take-all dynamic of digital services, when the few oligopolistic market leaders present nearly identical choices, the actual usage pattern reveals only that a few bad choices are just barely worse than chucking the whole thing.
> hopes that it will soon put vast numbers of employees out of work and allow companies to use the massive amounts of data they've collected about us against us more effectively.
They already fired so many developers, and this feels more like a Hail Mary before maintenance costs and tech debt start catching up with them.
Maybe at some point giant companies like google realize that the only logical solution to the expansion problem is that they have to help space research to actually be able to expand more.
Jokes aside, investors behind google seem to not realize that google at this point is infrastructure and not an expandable product market anymore. What's left to expand to since the Google India Ad 2013? What? North Korea, China, Russia, maybe? And then everyone gets their payout?
Financial ecosystems and their bets rely on infinite expansion capabilities, which is not possible without space travel.
> At this point I'm convinced that the endless AI hype and all the investment is purely due to hopes that it will soon put vast numbers of employees out of work
No. On the contrary. We will need people to clean up the mess left by AI.
> and allow companies to use the massive amounts of data they've collected about us against us more effectively.
I feel like a lot of devs and artists are dreading the idea of their entire job becoming nothing more than "debug/fix the mess an AI made". Going from designer/architect to QA/editor would kill a lot of the fun and satisfaction people get from their work.
What choice does Google have? Google search is such a shit show now that I use ChatGPT to do any complicated web search. The paid version has had web search for over two years now.
Google couldn’t just keep ignoring it. I do wish it were an option instead of on by default - except for searches they can monetize
At my company we have a live service chat feature. Recently some of our customers have been requesting AI chatbot support (we've got fairly technical product offerings). I'm guessing they want to ask a bunch of stupid questions.
The only reasons I can imagine that a customer would want to use an AI chatbot for support instead of chatting with a person are either that they don't currently have the option to chat with a person 24/7 at all (AI is better than no chat support), or that their experience with human chat support has been terrible (long wait times, slow responses, unhelpful agents, annoying language barriers, responses so unnatural and overly scripted that they might as well be bots, etc.).
There's nothing AI brings to the table that a competent human wouldn't, with the added benefit that you don't have to worry about AI making things up or not understanding you.
It just takes a generation of people growing up with it until it really takes hold.
People didn't use to ask Google questions, either, but now you're the outlier if you try using search terms instead.
AI has value the way self-checkout has value: it's anti-consumer and widely hated, but it (can) save the companies money and will therefore be too widespread for anyone to opt out.
Self-checkout has its uses and supporters. Introverts, the socially anxious, people in a hurry, people who'd much prefer to bag (or double/triple bag) their own items in ways that work best for them, people who want to get the organic tomato at the price of the non-organic one.
It's absolutely still a scheme by companies to get rid of employees and get customers to do work for them for free, and there are still issues with the systems not working very well, but we at least have the option (in almost all cases) to queue up at the one or two registers with an employee doing the work. When it comes to AI, we're often not being given any choice at all. Even if we can avoid using it, or somehow avoid seeing it, we will still be training it.
I like self checkout for medium to small shops (which is pretty much all I do since my local small supermarket is 100 metres down the road). Before they put in the self checkouts there was always a huge queue for the registers and I avoided shopping there; now I almost never have to queue and it's much faster to get in, pick up a dozen items and get out.
Self-checkout has come a long way IMO. I love not having to queue as long or speak to staff. The occasional "unknown item" still happens, but it's worth it. Even better are the ones in smaller shops that don't have a weighing sensor.
I dunno, I like AI. I don't use it often, but when I do I've found it useful and impressive. It's really improved quality of life when it comes to having something read over my work or help with finding small bits of info. I also like self checkout because it reduced wait times at the store.
I think people are always resistant to change. People didn't like ATMs when they first came out either. I think it's improved things.
I was going to buy a pixel 9 fold, but I literally have no idea why I should.
All the ad talked about was AI, nothing about specs, and barely a whisper of how it works, or even good demos of apps switching between open and closed.
Every phone has AI now, big deal. How about you tell me, Google, what is cool about the fold, instead of talking for 4 minutes about AI?!
It's not strange. It's about power and control. Google and the other big names couldn't care less about user satisfaction: their customers are the ad buyers.
It's too bad, because even 10 years ago Google and the internet in general were magical. You could find information on any topic, make connections, and change your life. Now it is mostly sanitized, dumbed-down crap, and the discovery of anything special is hidden under mountains of SEO spam, now even AI-generated SEO spam that is transparently crap to any moderately intelligent user.
For a specific example, I like to watch wildlife videos, specifically ones that give insight into how animals think and process the world. This comparative psychology can help us better understand ourselves.
If you want to watch Macaque monkeys for example google/youtube feeds you almost exclusively SEO videos from a handful of locations in Cambodia. There are plenty of other videos out there but they are hidden by the mass produced videos out of Cambodia.
If I find an interesting video and my view history is off the same video is often undiscoverable again even with the exact same search terms.
Search terms are disregarded or substituted willy-nilly by Google AI, which thinks it knows what I want better than I do.
But the most egregious thing for me as a viewer of nature videos is the AI-generated content. It is obviously CGI and often ridiculous or physically impossible. For example, let's say I want to see how a monkey interacts with a predatory python; I am allowed to watch that, right? Or are all the videos of Serengeti lions hunting gazelles to be banned in 2025? Lol. So I search "python attacks monkey" hoping to see a video in the natural setting. Instead I am greeted with maybe a handful of badly shot videos probably staged by humans and hundreds of CGI cartoons that are obviously not real. In one the monkey had a snake mouth! Lol. Who goes searching for real nature videos to see badly faked stuff?
Because of how I can not find anything on google or Youtube anymore without picking through a mountain of crap I use them less now. This is for almost any kind of topic not just nature videos.
Is that a win for advertisers? Less use? I don't think so.
In about 20 years of using the product the number of times a google or Youtube search has led to me actually purchasing a product or service DUE to an ad I saw, is I believe precisely zero.
Recently I have been seeing Temu ads (zero interest), disability fraud ads (how is this allowed?), senior-targeted ads, and Facebook ads. I am a non-disabled, 30-something man. I saw an ad for burial insurance today.
Why is facebook paying to advertise "facebook" on youtube in 2025? Is this some ritual sacrifice to the god Mammon or something? Surely in 2025 everyone who would be interested in Facebook has heard of it. I have the Facebook app installed. Why the hell do facebook investors stand facebook paying google to advertise facebook non-selectively on youtube. It's the stupidest thing I ever saw.
I have not watched any political content in years. And yet when I search for a wild life video I get mountains of videos about Trump and a handful of mass produced low quality wildlife content interspersed.
Today I was treated to an irrelevant ad about "jowl reduction."
I know many of you use ad blockers but this is how horrendous it is without them. You can't find what you want, even what you just saw, and you are treated to a deluge of irrelevant, obnoxious content and ads.
Clearly it is about social control, turning our minds to mush to better serve us even more terrible ad content.
Similar result, maybe not quite so illustrative, perhaps more colorful, just involving images not videos. Ended up at a similar conclusion.
Tried to search for user interface design ideas for an ongoing project, and found that Google now simply ignores filtering attempts... Try to find ideas about multi-color designs, and all there is are endless image spam sites and Letterman-style Top 10 lists. Try to filter those out, and Google just ignores many attempts.
There's so many, that even those that actually do get successfully filtered out, only reveal the next layer of slime to dig through. Maybe the people that didn't pay enough for placement?
The huge majority, far and away, were the "Alamy", "Shutterstock", "_____Stock", etc. photo websites. There are so many that it's not really practical to notch filter anything involving images; you could spend all day notch filtering "_____Stock" results just to get to something real.
The worst though, was that even among sites that wrote something, there was almost nothing that was actually "user interfaces" or anything related to design, other than simplistic sites like "top 10 colors for your next design" that are easy to churn out.
Try to search on a different subject and filter for only recent results from 2024, get results from 2015, 2016. Difficult to tell if the subject had simply collapsed in the intervening 10 years (seemed unlikely) or if Google was completely ignoring the filters applied. The results did not substantially change. It's like existing in an echo chamber where you're shown what you're supposed to view. It all feels very 1984 lately.
Basically ended up at the same conclusion: their customers are the ad buyers. They don't get enough money from "normal" people to care.
One of the fun things about surveillance capitalism is that you can't correct errors in any of the millions of assumptions being made about you based on any number of tiny details collected about your life.
Sounds like somebody somewhere thinks that you're old, or that you know an old person. Maybe you live in an area with lots of old people. Maybe you've got aging parents. Maybe an old person had your IP before you did. Maybe just the fact that you're still using facebook is good enough to identify someone as being old the majority of the time.
Correction: That some subset of the people you mostly meet online tries to get rid of.
You'd be surprised how many don't even realize it's artificial, and/or welcome it. The average Google user is most certainly not similar to the average Hacker News commenter.
If I start fucking adding swear words to all my fucking search queries, how the fuck will the stupid ass search engine know that I did not want it to use that shit as one of my keywords and give me back a whole lot of fucked up shit?
Google often ignores regular words too, mockingly striking them out. This almost feels like a 1st April joke, not how a search engine is supposed to work.
One tip I like to give for exploring public data is to do an early search for the word "fuck". It's a pretty ubiquitous word, but one that you assume shouldn't show up in certain fields, so seeing it, or not seeing it, can give useful insight into the scope of the data universe and collection. Including where/who the data comes from, whether or not any validation exists during the collection process, and how updates/corrections are done to collected data.
For example, you're required to provide accurate info about yourself when donating to a U.S. federal political campaign [0]. Is it possible that someone, somewhere in America is legally named John Fucksalot? Or works for a company named Fucks, Inc? Maybe! We're a huge country with wildly diverse cultural standards and senses of humor. But a John Fucksalot, CEO of Fucks Inc, who lives in Fuck City, Ohio 42069? Probably not, and the fact that this record exists says something about how readily the rules and laws regarding straw donors are enforced. And whether or not an enforcement action happened, what field in the FEC data indicates a revised record?
Seems like this tip can still be useful in the Age of LLMs. Not just for learning about the training data, but also how confident providers are in their models, and how many guardrails they've needed to tack on to prevent them from giving unwanted answers.
Disappointed that Fucking has self-censored, bearing in mind their road safety signs used to suggest local residents were in on the joke https://imgur.com/5KOCwdC
The downside of this approach is that it can affect the search results returned. But I found that if you add " -fuck" or " -fucking" to your search term, it disables the AI summary without significantly affecting your search results (unless you happen to be looking for content of a certain kind).
You can probably find some other term that disables the AI but is unlikely to occur naturally in the articles you'd like to find, e.g.: "react swipeable image carousel -coprophilia".
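As a sketch of the trick, here's a tiny helper that appends an exclusion term to a query before building the search URL. The term is just the one suggested above, and whether appending an exclusion keeps suppressing the AI summary is entirely up to Google:

```typescript
// Build a Google search URL with an exclusion term appended. Per the
// trick described above, an unlikely excluded term currently disables
// the AI summary without significantly changing the results.
function noAiSearchUrl(query: string, excludeTerm = "coprophilia"): string {
  return "https://www.google.com/search?q=" +
    encodeURIComponent(`${query} -${excludeTerm}`);
}
```

You could wire something like this into a browser's custom search engine field or a bookmarklet.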
Will it still work if "fuck" is part of a quoted phrase? If so, you could avoid it by constructing a phrase that contains the term but isn't going to match anything, ex: -"fuck 5823532165".
I work with ML and I am bullish on AI in general; that said, I would pay between 5 and 10 USD for a feature or toggle called "No AI" in several services.
For myself I noticed 2 bad effects in my daily usage:
- Search: impossible to reach any original content in the first positions. Almost everything sounds AI-ish: the punctuation, the commas, the semicolons, the narrow vocabulary, and the derivative nature of recent internet pages.
- Discovery: (looking directly at you, Spotify and Instagram) here I would add to the "No AI" feature another one, "Forget the past...", with a configurable time window. I personally like to listen to some orthogonal genres seasonally. But once you listen to a couple of songs in a very spontaneous manner, Spotify will recommend that for a long time. I listened to some math rock out of curiosity, and "Discover Weekly" took 9 weeks to stop recommending it.
If Kagi made a cheaper "no AI" tier I would be happy to subscribe. AI is costly to run, so even if you don't use the AI it's priced into your subscription fee - you're paying for an expensive product you don't want or use.
e: according to Kagi's pricing page they do have a 'no AI' tier, but it limits you to 300 searches a month. Seems like a totally arbitrary limitation, but it's still better than forced AI.
Left field tip if you think a search engine is hiding stuff: Yandex. They're not actually Russian any more, but they're far enough down the list of search engines that nobody bothers to DMCA them.
It's also absolutely terrible for image search, which has been poisoned by the rampant proliferation of poor-quality Stable Diffusion images - even on stock photo sites.
It got so bad that I had to add a "No AI" flag to my image search app which limits the date range to earlier than 2022. Not a great solution but works in a pinch.
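For what it's worth, that kind of cutoff is simple to sketch. Something like the following (the result field name and the exact 2022 boundary are my assumptions for illustration, not details from the commenter's app):

```javascript
// Crude "No AI" image filter: drop anything indexed after the point where
// Stable Diffusion output started flooding the web (roughly 2022).
const AI_ERA_START = new Date("2022-01-01");

function filterPreAiImages(results) {
  // Each result is assumed to carry an ISO date string in `indexedAt`.
  return results.filter((r) => new Date(r.indexedAt) < AI_ERA_START);
}
```

It obviously also excludes all legitimate post-2022 photography, which is why it's a "works in a pinch" hack rather than a real fix.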
> I work with ML and I am bullish on AI in general; that said, I would pay between 5 and 10 USD for a feature or toggle called “No AI” for several services.
Hard fuck this. I am not giving a company money to un-ruin their service. Just go to a competitor.
I get that with a bunch of these hyperscaled businesses it's borderline impossible to entirely escape them, but do what you can. I was an Adobe subscriber for years, and them putting their AI garbage into every app (along with them all steadily getting shittier and shittier to use) finally made me jump ship. I couldn't be happier. Yeah, there was pain, there was an adjustment period, but we need to cut these fuckers off already. No more eternal subscription fees for mediocre software.
Office is next. This copilot shit is getting worse by the day.
Eh, my partner loves Copilot in Office. There are tasks where it's taking hours or days off the time. Especially web searching and extracting information.
Imagine a completely non-technical friend of yours. They are very smart, but they do not know a damn thing about configuring computers. They mostly just use their phone and/or tablet.
How much of "just append ?udm=14 to your search query" is absolute gibberish?
Is "install the udm14 plugin" going to make any more sense?
Is "go to udm14.com for all your searches" going to stick? Are there phishing sites at umd14.com, mdm41.com, uwu44.com, and all the other variants they'll probably misremember it as?
"just search for 'fucking whatever' and the AI crap goes away", on the other hand, is funny, uses a common dictionary word that everyone above the age of five knows how to spell, and is intensely memorable.
You can reset your algo on Spotify! I did and learned a lot. There were maybe 5 songs I wasn’t hearing that I liked, but tens of songs I did not like that I had saved years ago that came back up and were once again swiftly killed by the algorithm after a few instaskips
Spotify used to have a "dislike" button for their Discover Weekly which helped with pruning music you don't like, but following the natural law of tech enshittification they removed that feature a month ago.
That was such a frustrating decision. I had almost convinced Spotify that that one time I listened to Lustmord was just a random mood, and I don't actually want to only listen to dronecore for the rest of my life.
I always hesitated to use that "dislike" button because I was worried that Spotify would not be able to distinguish between "I will always dislike this song" and "I don't want this song in this specific context".
I can't tell if you misspelled narrow or if "narro" is somehow referring to "narrated" type content we now see so much of. Or even just weird narrative things (eg, recipes).
Thank you. Showing tidbits like this from HN to my kids has seemed to help guide them to be more curious and creative in how they use the internet, instead of treating it like a magical black box.
I was late to this, but G's default search had been becoming worse and worse. The trick is equivalent to clicking the "Web" tab when you do default search. In 99.9% cases the "Web" tab is what I need, it's pure and no noise. I do not mind clicking the "All" tab e.g. for a tennis player last name during AO to get all details I need. Actually, for sport events the default G's functionality is insanely useful, such as live score updates.
Someone has a 'friend' who has a totally-not-publicly-visible form where a chat bot interacts with the form and helps the user fill the form in.
...and users love it.
However, when really pressed, I've yet to encounter someone who can actually tell me specifically
1) What form it is (i.e. can I see it?)
2) How much effort it was to build that feature.
...because, the problem with this story is that what you're describing is a pretty hard problem to solve:
- An agent interacts with a user.
- The agent has free rein to fill out the form fields.
- Guided by the user, the agent helps fill out form fields in a way which is both faster and more accurate than users typing into the fields themselves.
- At any time the user can opt to stop interacting with the agent and fill in the fields themselves, and the agent must understand what's happened independently of the chat context, i.e. the form state has to be part of the chat bot's context.
- At the end, the details filled in by the agent are distinguished from user inputs for user review.
It's not a trivial problem. It sounds like a trivial problem: the agent asks 'what sort of user are you?', parses the answer into one of three enum values (Client, Foo, Bar), and sets the 'user type' field to that value via a custom hook.
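The naive version really is a few lines. A sketch, reusing the placeholder enum values above (everything else here is illustrative, not anyone's actual implementation):

```javascript
// The "sounds trivial" version: map a free-text answer onto one of three
// enum values. This is the part that looks easy in the demo.
const USER_TYPES = ["Client", "Foo", "Bar"];

function parseUserType(answer) {
  const normalized = answer.trim().toLowerCase();
  // Real users hedge, ramble, or answer a different question entirely;
  // this substring match is exactly where the edge cases start piling up.
  return USER_TYPES.find((t) => normalized.includes(t.toLowerCase())) ?? null;
}
```

The hard parts described above - syncing form state into the chat context, handling manual edits mid-conversation, flagging agent-filled fields for review - are precisely what this toy version ignores.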
However, when you try to actually build such a system (as I have), then there are a lot of complicated edge cases, and users HATE it when the bot does the wrong thing, especially when they're primed to click 'that looks good to me' without actually reading what the agent did.
So.
Can you share an example?
What does 'and has a lot of usage' mean in this context? Has it increased the number of people filling in the form, or completing it correctly (or both)?
I'd love to see one that users like, because, oh boy, did they HATE the one we built.
At the end of the day, smart validation hints on form input fields are a lot easier to implement and are well understood by users of all types, in my experience; it's just generally a better, normal way of improving form conversion rates, one that is well documented and measurable with analytics.
...unless you specifically need to add "uses AI" to your slide deck for your next round of funding.
In fairness to the 99.99%: they don't even know what a bootloader is, and if they understood the situation and the risks, many of them would also favor an open option.
I don't think the rejection of AI is primarily a HN thing though. It's my non-tech friends and family who have been most vocal in complaining about it. The folks here are more likely to have browser extensions and other workarounds or know about alternative services that don't force AI on you in the first place.
I'm sure google thinks that people have some sort of bias, and that if they force people to use it they'll come to like it (just like google plus), but this also shows how much google looks down on the free will of its users.
I do think it's as simple as appealing to stakeholders in whatever way they can, regardless of customer satisfaction. As we've seen as of late, the stock markets are completely antithetical to the improvement of people's lives.
The first point does indeed come into play, because oftentimes most people don't throw enough of a fuss against it. But everything has some breaking point; Microsoft's horribly launched Copilot for Office 365 showed one of them.
[0]: https://www.warc.com/content/feed/ai-is-a-turn-off-for-consu...
[1]: https://hbr.org/2025/01/research-consumers-dont-want-ai-to-s...
Coincidentally, today I received an automated text from my health care provider along the lines of, "Please recognize this number as from us. Our AI will be calling you to discuss your health."
No. I'm not going to have a personal discussion with an AI.
https://www.reddit.com/r/google/comments/1czcjze/how_is_ai_o...
Mainstream press have been covering how much people hate it - people's grandparents are getting annoyed by it. Worse, it comes on the heels of four years of Prabhakar Raghavan ruining Google Search for the sake of trying to pump ad revenue.
It's a steaming pile of dogshit that either provides useless information unrelated to what you searched for and is just another thing to scroll past, or, even worse, provides completely wrong information half the time, which means even if the response seems to be what you asked for, it's still useless because you can't trust it.
When it's integrated into a product people are more likely to use it. Lowering the barrier to entry so to speak.
It's this part.
Salaries and benefits are expensive. A computer program doesn't need a salary, retirement benefits, or insurance; it doesn't call in sick, doesn't take vacations, works 24/7, etc.
Throwing "AI" into it is a simple addition: if it works, great; if it doesn't, well, the market just wasn't ready.
But if they have to actually talk to their users and solve their real problems that's a really hard pill to swallow, extremely hard to solve correctly, and basically impossible to sell to shareholders because you likely have to explain that your last 50 ideas and the tech debt they created are the problem that needs to be excised.
Data centers will soon outstrip all other uses of electrical power. As for an AI calling in sick: no, it needs full power 24/7. AI has no creativity, no initiative, no conscience, and absolutely zero ethics.
"In a middle-ground scenario, by 2027 new AI servers sold that year alone could use between 85 to 134 terawatt hours (Twh) annually."
But this is normal. A new thing is discovered, the market runs lots of tests to discover where it works / doesn’t, there’s a crash, valid use cases are found / scaled and the market matures.
Y’all surely lived thru the mobile app hype cycle, where every startup was “uber for x”.
The amount of money being spent today pales in comparison to the long term money on even one use case that scales. It’s a good bet if you are a VC.
The normal tech innovation model is: 1. User problem identified, 2. Technology advancement achieved, 3. Application built to solve problem
With AI, it's turned into: 1. Technology advancement achieved, 2. Applications haphazardly being built to do anything with Technology, 3. Frantic search for users who might want to use applications.
I don't know how the industry thinks they're going to make any money out of the new model.
That's certainly part of it.
However, at this point I think a lot of it is a kind of emotional sunk-cost. To stop now would require a lot of very wealthy and powerful people to admit they had personally made a very serious mistake.
It may be less that people are unaware of the speculative bubble, and more that they're hoping to get in and out before it pops.
The outcome that the large companies are banking on is replacing workers, even employees with rather modest compensation end up costing a significant amount if you consider overhead and training. There is (mostly) no AI feature that wall street or investors care about except replacing labor - everything else just seems to exist as a form of marketing.
“Nobody asked you, Siri!” at it.
And that, kids, is how I met your mother.
If AI (or any tech) could clean, do dishes, or cook (which is not a chore for many, I acknowledge that) it could potentially bring families together and improve the quality of people's lives.
Instead they are introducing it as a tool to replace jobs, think for us, and mistrust each other ("you sound like an AI bot! / you just copied that from chatgpt! / You didn't draw that! / How do I know you're real?").
I don't know if they really thought through to an endgame, honestly. How much decimation can you inflict on the economy before the snake eats its own tail?
One day they'll put those kinds of robots in people's homes, but I'll keep them out of mine because they'll be full of sensors, cameras, and microphones connected to the cloud and endlessly streaming everything about your family and your home to multiple third parties. It's hard enough dealing with cell phones and keeping "smart"/IoT crap from spying on us 24/7 and they don't walk around on their own to go snooping.
The sad thing about every technology now is that whatever benefits it might bring to our lives, it will also be working for someone else who wants to use it against us. Your new smart TV is gorgeous, but it watches everything you see and inserts ads while you're watching a Blu-ray. Your shiny car is self-driving, but you're tracked everywhere you go; there are cameras pointed at you recording and microphones listening the entire time, sending real-time data to police and your insurance company. Your fancy AR implant means you'll never forget someone's name since it automatically shows up next to their face when you see them, but now someone else gets to decide what you'll see and what you aren't allowed to see. I think I'll just keep washing my own dishes.
Which is a stupid argument, since there is "any tech" that can do your laundry and dishes, and it's been around for decades! Is it too hard for you to put your dishes in the dishwasher, or your clothes in the washing machine?
And I say this as someone bearish on AI.
You just notice the shitty ones, but people on HN think that's the norm for some reason.
I strongly doubt this "dichotomy AI" theory.
So you are not far off from that concept of putting vast numbers of employees out of work, when influential figures like Andreessen are openly stating that this is their ambition.
we still don’t know what problems to solve, but we’re gonna use AI to help us figure that out.
once we do, it’s gonna be huge. this AI stuff is going to change everything!!!
They cite "Revealed Choice", which may apply when there is an actual choice.
But in the nearly winner-take-all dynamic of digital services, when the few oligopolistic market leaders present nearly identical choices, the actual usage pattern reveals only that a few bad choices are just barely worse than chucking the whole thing.
They already fired so many developers and this feels more like a Hail Mary before maintenance costs and tech debt start catching up to you.
Jokes aside, investors behind Google seem not to realize that Google at this point is infrastructure, not an expandable product market anymore. What's left to expand to since the Google India ad in 2013? What? North Korea, China, Russia, maybe? And then everyone gets their payout?
Financial ecosystems and their bets rely on infinite expansion capabilities, which is not possible without space travel.
It's similar to the question of why flies lay millions of eggs.
No, on the contrary: we will need people to clean up the mess left by AI.
> and allow companies to use the massive amounts of data they've collected about us against us more effectively.
yes.
I would bet money that the majority of users do not actually feel this way.
Google couldn’t just keep ignoring it. I do wish it were an option instead of on by default - except for searches they can monetize
I'm surprised as well. Some people want it
There's nothing AI brings to the table that a competent human wouldn't, with the added benefit that you don't have to worry about AI making things up or not understanding you.
Or maybe they just want to try and convince the AI to give them things you wouldn't (https://arstechnica.com/tech-policy/2024/02/air-canada-must-...)
It's absolutely still a scheme by companies to get rid of employees and get customers to do work for them for free, and there are still issues with the systems not working very well, but we at least have the option (in almost all cases) to queue up at the one or two registers with an employee doing the work. When it comes to AI, we're often not being given any choice at all. Even if we can avoid using it, or somehow avoid seeing it, we will still be training it.
I think people are always resistant to change. People didn't like ATMs when they first came out either. I think it's improved things.
All the ad talked about was AI, nothing about specs, and barely a whisper of how it works, or even good demos of apps switching between open and closed.
Every phone has AI now, big deal. How about you tell me, Google, what is cool about the fold, instead of talking for 4 minutes about AI?!
It's too bad, because even 10 years ago Google and the internet in general were magical. You could find information on any topic, make connections, and change your life. Now it is mostly sanitized, dumbed-down crap, and the discovery of anything special is hidden under mountains of SEO spam - now even AI-generated SEO spam that is transparently crap to any moderately intelligent user.
For a specific example, I like to watch wildlife videos, specifically ones that give insight into how animals think and process the world. This comparative psychology can help us better understand ourselves.
If you want to watch macaque monkeys, for example, Google/YouTube feeds you almost exclusively SEO videos from a handful of locations in Cambodia. There are plenty of other videos out there, but they are hidden by the mass-produced videos out of Cambodia.
If I find an interesting video and my view history is off the same video is often undiscoverable again even with the exact same search terms.
Search terms are disregarded or substituted willy-nilly by Google's AI, which thinks it knows better what I want than I do.
But the most egregious thing for me as a viewer of nature videos is the AI-generated content. It is obviously CGI and often ridiculous or physically impossible. For example, let's say I want to see how a monkey interacts with a predatory python; I am allowed to watch that, right??? Or are all the Serengeti lions-hunting-gazelle videos to be banned in 2025? Lol. So I search "python attacks monkey" hoping to see a video in a natural setting. Instead I am greeted with maybe a handful of badly shot videos probably staged by humans and hundreds of CGI cartoons that are obviously not real. In one the monkey had a snake mouth! Lol. Who goes searching for real nature videos to see badly faked stuff?
Because I can't find anything on Google or YouTube anymore without picking through a mountain of crap, I use them less now. This goes for almost any kind of topic, not just nature videos.
Is that a win for advertisers? Less use? I don't think so.
In about 20 years of using the product, the number of times a Google or YouTube search has led to me actually purchasing a product or service DUE to an ad I saw is, I believe, precisely zero.
Recently I have been seeing Temu (zero interest), disability fraud (how is this allowed?), senior, and Facebook ads. I am a non-disabled, 30-something man. I saw an ad for burial insurance today.
Why is Facebook paying to advertise "Facebook" on YouTube in 2025? Is this some ritual sacrifice to the god Mammon or something? Surely in 2025 everyone who would be interested in Facebook has heard of it. I have the Facebook app installed. How do Facebook's investors stand Facebook paying Google to advertise Facebook non-selectively on YouTube? It's the stupidest thing I ever saw.
I have not watched any political content in years. And yet when I search for a wild life video I get mountains of videos about Trump and a handful of mass produced low quality wildlife content interspersed.
Today I was treated to an irrelevant ad about "jowl reduction."
I know many of you use ad blockers but this is how horrendous it is without them. You can't find what you want, even what you just saw, and you are treated to a deluge of irrelevant, obnoxious content and ads.
Clearly it is about social control, turning our minds to mush to better serve us even more terrible ad content.
Tried to search user interface design for an ongoing project, and found that Google now simply ignores filtering attempts... Try to find ideas about multi-color designs, and all there is are endless image-spam sites and Letterman-style Top 10 lists. Try to filter those out, and Google just ignores many of the attempts.
There are so many that even the ones that do get successfully filtered out only reveal the next layer of slime to dig through. Maybe the ones that didn't pay enough for placement?
The huge majority, far and away, were the "Alamy", "Shutterstock", "_____Stock", etc. photo websites. There are so many that it's not really practical to notch-filter them. Anything involving images, you spend all day just notch-filtering "_____Stock" results to get to something real.
The worst though, was that even among sites that wrote something, there was almost nothing that was actually "user interfaces" or anything related to design, other than simplistic sites like "top 10 colors for your next design" that are easy to churn out.
Try to search on a different subject and filter for only recent results from 2024, and you get results from 2015 and 2016. Difficult to tell if the subject had simply collapsed in the intervening 10 years (seemed unlikely) or if Google was completely ignoring the filters applied. The results did not substantially change. It's like existing in an echo chamber where you're shown what you're supposed to view. It all feels very 1984 lately.
Basically ended up at the same conclusion: their customers are the ad buyers. They don't get enough money from "normal" people to care.
Sounds like somebody somewhere thinks that you're old, or that you know an old person. Maybe you live in an area with lots of old people. Maybe you've got aging parents. Maybe an old person had your IP before you did. Maybe just the fact that you're still using facebook is good enough to identify someone as being old the majority of the time.
And a vast decline in youtube.
You'd be surprised how many don't even realize it's artificial, and/or welcome it. The average Google user is most certainly not similar to the average Hacker News commenter.
You may think you do, but I am certain you do not.
https://en.wikipedia.org/wiki/Fugging,_Upper_Austria
I'm joking, somewhat, but can we seriously start getting mad about this shit?
[0] https://www.fec.gov/data/receipts/individual-contributions/?...
TOP 10 X; THE 20 BEST Y; 20 REASONS Z; etc.
I go to Kagi and am immediately refreshed.
https://github.com/scpedicini/truman-show
Better than Google in every single aspect, except shopping. The shopping results in Google are actually good.
If someone hasn't already made a userscript to do this automatically, someone should, it would be very easy.
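A sketch of what such a userscript could look like, assuming a Tampermonkey/Violentmonkey-style manager (the metadata block and the redirect logic are my guesses at an implementation; Google may change how it handles the parameter):

```javascript
// ==UserScript==
// @name     Always use Google's "Web" tab
// @match    https://www.google.com/search*
// @run-at   document-start
// ==/UserScript==

// Return the URL with udm=14 set, or null if it is already set.
function withWebUdm(href) {
  const url = new URL(href);
  if (url.searchParams.get("udm") === "14") return null;
  url.searchParams.set("udm", "14");
  return url.toString();
}

// In the browser, reload the results page with the parameter applied.
if (typeof location !== "undefined") {
  const fixed = withWebUdm(location.href);
  if (fixed) location.replace(fixed);
}
```

Running at document-start keeps the flash of the AI-laden page to a minimum, since the redirect fires before the original results render.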