inkysigma · 11 hours ago
> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn't sit well.

Am I being overly critical here, or is this kind of a silly position to take right after saying that neural machine translation is okay? Many of Firefox's LLM features like summarization are, afaik, powered by local models (hell, even Chrome has local model options). It's weird to say neural translation is not a black box but LLMs are somehow black boxes whose handling of our data we cannot hope to understand, especially since, viewed a bit fuzzily, LLMs are scaled-up versions of an architecture originally used for neural translation. Neural translation also has unverifiable behavior in the same sense.

I could interpret some of the data talk as being about non-local models, but this very much reads as a more general criticism of LLMs as a whole when talking about Firefox features. Moreover, some of the critiques, like verifiability of outputs and unlimited scope, still don't make sense in this context. Browser LLM features, except for explicitly AI browsers like Comet, have so far been scoped to fairly narrow tasks like translation or summarization. The broadest scope I can think of is the side panel that lets you ask about a web page with context. Even then, I do not see what is inherently problematic about such scoping, since the output behavior is confined to the side panel.

jrjeksjd8d · 8 hours ago
To be more charitable to TFA, machine translation is a field where there aren't great alternatives and the downside is pretty limited: if something is in another language, the alternative is not reading it at all. You can translate a bunch of documents, benchmark the result, and demonstrate that the model doesn't completely change simple sentences. Another related area is OCR - there are sometimes mistakes, but it's tractable to create a model and verify it's mostly correct.

LLMs being applied to everything under the sun feels like we're solving problems that have other solutions, and the answers aren't necessarily correct or accurate. I don't need a dubiously accurate summary of an article in English, I can read and comprehend it just fine. The downside is real and the utility is limited.

schoen · 6 hours ago
There's an older tradition of rule-based machine translation. In these methods, someone really does understand exactly what the program does, in a detailed way; it's designed like other programs, according to someone's explicit understanding. There's still active research in this field; I have a friend who's very deep into it.

The trouble is that statistical MT (the things that became neural net MT) started achieving better quality metrics than rule-based MT sometime around 2008 or 2010 (if I remember correctly), and the distance between them has widened since then. Rule-based systems have gotten a little better each year, while statistical systems have gotten a lot better each year, and are also now receiving correspondingly much more investment.

The statistical systems are especially good at using context to disambiguate linguistic ambiguities. When a word has multiple meanings, human beings guess which one is relevant from overall context (merging evidence upwards and downwards from multiple layers within the language understanding process!). Statistical MT systems seem to do something somewhat similar. Much as human beings don't even perceive how we knew which meaning was relevant (but we usually guessed the right one without even thinking about it), these systems usually also guess the right one using highly contextual evidence.

Linguistic example sentences like "time flies like an arrow" (my linguistics professor suggested "I can't wait for her to take me here") are formally susceptible of many different interpretations, each of which can be considered correct, but when we see or hear such sentences within a larger context, we somehow tend to know which interpretation is most relevant and so most plausible. We might never be able to replicate some of that with consciously-engineered rulesets!

ACCount37 · 29 minutes ago
LLMs are great because of exactly that: they solve things that have no other solutions.

(And also things that have other solutions, but where "find and apply that other solution" has way more overhead than "just ask an LLM".)

There is no deterministic way to "summarize this research paper, then evaluate whether the findings are relevant and significant for this thing I'm doing right now", or "crawl this poorly documented codebase, tell me what this module does". And the alternative is sinking your own time in it - while you could be doing something more important or more fun.

onion2k · 4 hours ago
> and demonstrate that the model doesn't completely change simple sentences

A nefarious model would work that way though. The owner wouldn't want it to be obvious. It'd only change the meaning of some sentences some of the time, but enough to nudge the user's understanding of the translated text to something that the model owner wants.

For example, imagine a model that detects the sentiment of text about Russian military action and automatically translates it to something more positive if it's especially negative, but only 20% of the time (maybe ramping up as the model ages). A user wouldn't know, and someone testing the model for accuracy might assume it's just a poor translation. If such a model became popular it could easily shift public perception a few percent in the owner's preferred direction. That'd be plenty to change world politics.

Likewise for a model translating contracts, or laws, or anything else where the language is complex and requires knowledge of both the language and the domain. Imagine a Chinese model that detects someone trying to translate a contract from Chinese to English, and deliberately modifies any clause about data privacy to change it to be more acceptable. That might be paranoia on my part, but it's entirely possible on a technical level.

tdeck · 8 hours ago
Aside: Does anyone actually use summarization features? I've never once been tempted to "summarize" because when I read something I either want to read the entire thing, or look for something specific. Things I want summarized, like academic papers, already have an abstract or a synopsis.
andai · 5 hours ago
Yeah, basically every 15-minute YouTube video, because the amount of actual content I care about is usually 1-2 sentences, and it usually ends up being the first sentence of an LLM summary of the transcript.

If something has actual substance I'll watch the whole thing, but in my experience that's maybe 10% of the videos I find.

runjake · 6 hours ago
Yes, several times a day. I use summarization for webpages, messages, documents and YouTube videos. It’s super handy.

I mainly use a custom prompt using ChatGPT via the Raycast app and the Raycast browser extension.

That said, I don’t feel comfortable with the level of AI being shoved into browsers by their vendors.

mikestorrent · 6 hours ago
In-browser ones? No. With external LLMs? Often. It depends on the purpose of the text.

If the purpose is to read someone's _writing_, then I'm going to read it, for the sheer joy of consuming the language. Nothing will take that from me.

If the purpose is to get some critical piece of information quickly, then no, I won't read it all: I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.

figmert · 6 hours ago
You mean you don't summarize those terrible articles you come across when you're a little intrigued, hoping there's some substance, only to find they repeat the same thing over and over in different wording? Anyway, I sometimes still give them the benefit of the doubt and end up doing a summary. Often they get summarized into 1 or 2 sentences.
simonw · 7 hours ago
I occasionally use the "summarize" button in the iPhone Mobile Safari reader view if I land on a blog entry that's quite long and I want a quick idea of whether it's worth reading the whole thing.
wkat4242 · 7 hours ago
Yes. I use it sometimes in Firefox with my local LLM server. Sometimes I come across an article I'm curious about but don't have the time or energy to read. Then I get a TL;DR from it. I know it's not perfect, but the alternative is not reading it at all.

If it does interest me then I can explore it. I guess I do this once a week or so, not a lot.

badbotty · 7 hours ago
Haven’t tried them but I can see these features being really useful for screen reader users.
mock-possum · 3 hours ago
Nah, because anything not worth reading is also not worth summarizing.
cess11 · 4 hours ago
No, because I know how to search and skim.
MrAlex94 · 3 hours ago
Looking back with fresh eyes, I definitely think I could’ve presented what I’m trying to say better.

You’re right that, purely on technical grounds, I’m drawing a distinction that may not hold up. Maybe the better framing is: I trust constrained, single-purpose models with somewhat verifiable outputs (seeing text go in and translated text come out, and comparing for consistency) more than I trust general-purpose models with broad access to my browsing context, regardless of whether they’re both neural networks under the hood.

WRT the “scope”, maybe I’ve picked up the wrong end of the stick about what Mozilla are planning to do - but they’ve already picked all the low-hanging fruit of AI integration with the features you’ve mentioned, and the fact that they seem to want to dig their heels in further signals, at least to me, that they want deeper integration? Although who knows, the post from the new CEO may also be a litmus test to see what response it elicits, and then go from there.

yunohn · 3 hours ago
I still don’t understand what you mean by “what they do with your data” - because it sounds like exfiltration fear mongering, whereas LLMs are a static series of weights. If you don’t explicitly call your “send_data_to_bad_actor” function with the user’s I/O, nothing can happen.
tliltocatl · 4 hours ago
The thing about translation is that even a human translator will sometimes make silly mistakes unless they know the domain really well, so LLMs are not any worse. Translation is a problem with no deterministic solution (rule-based translation has always been a bad joke). Properly implemented deterministic search/information retrieval, on the other hand, works extremely well. So well it doesn't really need any replacement - except when you also want some extra dynamics on top, like "filtering SEO slop", and that's not something LLMs can improve at all.
user3939382 · 9 hours ago
Firefox should look like Librewolf in the first place; Librewolf shouldn't have to exist. Mozilla's privacy stuff is marketing bullshit, just like Apple's. It shouldn't be doing ANYTHING that isn't local-only unless it's explicitly opt-in or driven by a user UI action. The LLM part is absurd because the entire Overton window is in the wrong place.
andai · 5 hours ago
As a side note, I was like "Isn't WaterFox the FF fork by that wolf guy?"

Then I thought, "Aha! Surely LibreWolf is the one I'm thinking of!"

Turns out no, it's a third one! It's PaleMoon...

PunchyHamster · 8 hours ago
It's frankly desperate trend chasing from management that started from near-total market domination, lost it, and now has no idea what to do.
Cheer2171 · 11 hours ago
No, it is disqualifyingly clueless. The author defends one neural network, one bag of effectively-opaque floats that get blended together with WASM to produce non-deterministic outputs which are injected into the DOM (translation), then righteously crusades against other bags of floats (LLMs).

From this point of view, uBlock Origin is also effectively un-auditable.

Your point that they may be imagining AI as non-local proprietary models might be the only thing that makes this make sense. I think even technical people are being suckered by the marketing that "AI" === ChatGPT/Claude/Gemini style cloud-hosted proprietary models connected to chat UIs.

koolala · 9 hours ago
I'm ok with Translation because it's best solved with AI. I'm not ok with it when Firefox "uses AI to read your open tabs" to do things that don't even need an AI based solution.
kevmo314 · 11 hours ago
> Machine learning technologies like the Bergamot translation project offer real, tangible utility. Bergamot is transparent in what it does (translate text locally, period), auditable (you can inspect the model and its behavior), and has clear, limited scope, even if the internal neural network logic isn’t strictly deterministic.

This really weakens the point of the post. It strikes me as: we just don't like those AIs. Bergamot's model's behavior is no more or less auditable, or a black box, than an LLM's behavior. If you really want to go dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: https://marian-nmt.github.io/docs/
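
For what it's worth, "digging into" a local model really is possible in a hands-on way. A minimal sketch (assuming the Hugging Face transformers library and a locally downloaded checkpoint; the model path is just a placeholder):

    from transformers import AutoModelForCausalLM

    # Load a locally downloaded checkpoint (placeholder path; any causal LM works).
    model = AutoModelForCausalLM.from_pretrained("path/to/local-llama-7b")

    # Every tensor in the network is right there to inspect, dump, or diff.
    total = 0
    for name, param in model.named_parameters():
        total += param.numel()
        print(f"{name}: {tuple(param.shape)}")
    print(f"total parameters: {total:,}")

Whether staring at billions of floats counts as "auditing" is a fair question, but the access situation is the same for Bergamot's weights as for an LLM's.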

The premise of non-corporate AI is respectable but I don't understand the hate for LLMs. Local inference is laudable, but being close-minded about solutions is not interesting.

jazzyjackson · 11 hours ago
It's not necessarily close minded to choose to abstain from interacting with generative text, and choose not to use software that integrates it.

I could say it's equally close-minded not to sympathize with this position, or the various reasoning behind it. For me, I feel that my spoken language is affected by those I interact with, and the more exposed someone is to a bot, the more they will speak like that bot. I don't want my language to be pulled towards the average redditor, so I choose not to interact with LLMs (I still use them for code generation, but I wouldn't if I used code for self-expression; I just refuse to have a back-and-forth conversation on any topic). It's like that family that tried raising a chimp alongside a baby: the chimp did pick up some human-like behavior, but the baby human adapted to chimp-like behavior much faster, so they abandoned the experiment.

bee_rider · 11 hours ago
I’m not too worried about starting to write like a bot. But, I do notice that I’m sometimes blunt and demanding when I talk to a bot, and I’m worried that could leak through to my normal talking.

I try to be polite just to not gain bad habits. But, for example, chatGPT is extremely confident, often wrong, and very weasely about it, so it can be hard to be “nice” to it (especially knowing that under the hood it has no feelings). It can be annoying when you bounce the third idea off the thing and it confidently replies with wrong instructions.

Anyway, I’ve been less worried about running local models, mostly just because I’m running them CPU-only. The capacity is just so limited, they don’t enter the uncanny valley where they can become truly annoying.

kevmo314 · 11 hours ago
Sure, I am more referring to advocating for Bergamot as a type of more "pure" solution.

I have no opinion on not wanting to converse with a machine, that is a perfectly valid preference. I am referring more to the blog post's position where it seems to advocate against itself.

liampulles · an hour ago
The local part is the important part here. If we get consumer-level hardware that can run general LLM models, where we can actually monitor locally what goes in and what goes out, then it meets the privacy needs/wants of power users.
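
A minimal sketch of what that local monitoring could look like, assuming a local Ollama server on its default port (the model name and log path here are just illustrative):

    import json
    import datetime
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
    LOG_PATH = "llm_io.jsonl"                            # illustrative local audit log

    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        """Send a prompt to the local model and log both sides of the exchange."""
        resp = requests.post(OLLAMA_URL, json={"model": model, "prompt": prompt, "stream": False})
        resp.raise_for_status()
        answer = resp.json()["response"]
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps({
                "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "prompt": prompt,
                "response": answer,
            }) + "\n")
        return answer

    print(ask_local_model("Summarize this page in two sentences: ..."))

Nothing leaves the machine, and the append-only log is the kind of audit trail you can't get from a cloud chatbot.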
internet_points · 3 hours ago
To me it sounds like a reasonable "AI-conservative" position.

(It's weird how people can be so anti-anti-AI, but then when someone takes a middle position, suddenly that's wrong too.)

hatefulheart · 8 hours ago
Your tone is kind of ridiculous.

It’s insane this has to be pointed out to you but here we go.

Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.

Moru · 4 hours ago
No, a tank is obviously better at all those things. They should clearly be used by everyone, including the paratrooper. Never mind the extra fuel costs or weight, all that matters is that it gets the job done best.
zdragnar · 11 hours ago
You can't really dig into a model you don't control. At least by running locally, you could in theory if it is exposed enough.

The focused purpose, I think, gives it more of a "purpose-built tool" feel, as opposed to a generic "chatbot that might be better at some tasks than others" entity. There's no fake persona to interact with, just an algorithm with data in and out.

The latter portion is less a technical nuance and more an emotional one, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... if that were the limit of how they added AI to the browser.

kevmo314 · 11 hours ago
Yes I agree with this, but the blog post makes a much more aggressive claim.

> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.

Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.

The part that doesn't sit well with me is that Mozilla wants to egress data; that it's an LLM I really don't care about.

_heimdall · 9 hours ago
Running locally does help you get less modified output, but how does it help escape the black box problem?

A local model will have fewer filters applied to the output, but I can still only evaluate the input/output pairs.

PunchyHamster · 7 hours ago
> but I don't understand the hate for LLMs.

It's mostly a knee-jerk reaction to having AI forced upon us from every direction, not just the ones that make sense.

XorNot · 9 hours ago
Translation AI, though, has a provable behavior case: round-tripping.

An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.

No such example or even test, as far as I know, exists for any of the summary or search AIs, since they expressly lose data in processing. (I suppose you could construct multiple texts with the same meaning and see if they summarize equivalently - but it's certainly far harder to prove anything.)
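
For the curious, a back-of-the-envelope version of that round-trip check, assuming the Hugging Face transformers pipeline and a pair of small opposite-direction MT models (the Helsinki-NLP opus-mt checkpoints are used purely as an example):

    from transformers import pipeline

    # Two small machine-translation models running locally, one per direction.
    en_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
    fr_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

    def round_trip(sentence: str) -> str:
        """Translate EN -> FR -> EN and return the round-tripped sentence."""
        fr = en_fr(sentence)[0]["translation_text"]
        return fr_en(fr)[0]["translation_text"]

    for s in ["The meeting is at noon.", "Time flies like an arrow."]:
        print(s, "->", round_trip(s))

A round trip that preserves meaning doesn't prove the translation is correct, but a round trip that changes meaning is a cheap, automatable red flag - exactly the kind of spot check that's much harder to construct for summarization.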

charcircuit · 8 hours ago
That is not an ideal translation, as it prioritizes round-trippability over natural word choice and ordering.
CivBase · 9 hours ago
I think the author was close to something here but messed up the landing.

To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.

It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.

zmmmmm · 3 hours ago
I just want Firefox to focus on building an absolutely awesome plugin API that exposes as much power and flexibility as possible - with the best possible security sandbox and permissions model to go with it.

Then everyone who wants AI can have it and those that don't .... don't.

sigmoid10 · 3 hours ago
I just want a browser that lets me easily install a good adblocker on all my operating systems. I don't care about their new toolbar or literally any other feature, because I will probably just disable it immediately anyway. But the number one thing I use every day, on every single site I visit, is an adblocker. I'm always baffled when people complain about ads on mobile or something, because I literally haven't watched ads in decades now.
Arisaka1 · 10 minutes ago
>Then everyone who wants AI can have it and those that don't .... don't.

The current trajectory of products with AI integrated into them worries me, because the average computer/phone user isn't as tech-savvy as the average HN reader, to the point where they're unable to toggle off stuff they genuinely never asked for but begrudgingly accept because it's... there.

My mother complained about AI mode in Google Chrome, and the "press tab" thing in the address bar, but she's old and doesn't even know how to connect to the Wi-Fi. Are we safe to assume that she belongs to the percentage of Google Chrome users who embrace AI, based on the fact that she doesn't know how to turn it off and there's no easy way to go about it?

I'm willing to bet that Google's reports will assume so, demonstrating wide adoption of AI by Chrome users to stakeholders, which will then be leveraged as proof that everyone loves it.

LandR · 25 minutes ago
I just want an adblocker and tree style vertical tabs, where the tab bar minimises when the mouse isn't over it.

That's literally my entire use case for using firefox.

pbhjpbhj · 3 hours ago
They've been quite forceful in the past in pushing 'plugins' by integrating them and turning them on repeatedly when people turned them off.

Did that achieve the last CEO's goals? Presumably if it did, they'll use that route again.

Have Google required a default 'on' for Gemini use?

moffkalast · an hour ago
I just want them to fix their goddamn rendering.
clueless · 12 hours ago
This whole backlash to Firefox wanting to introduce AI feels a little knee-jerky. We don't know if Firefox might want to roll out their own locally hosted LLM model that they then plug into... and if so, it would cut down on the majority of the knee-jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...

[Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc

mindcrash · 11 hours ago
They are not "wanting" to introduce AI, they already did.

And now we have:

- An extra toolbar at the side that nobody asked for. While it contains some extra features now, I'm pretty sure they added it just to have some prominent space for an "Open AI Chatbot" button in the UI. And it is irritating as fuck because it remembers its state per window. So if you have one window open with the sidebar open, and you close it in another, then move back to the first and open a new window, it thinks "hey, I need to show a sidebar which my user never asked for!". I also believe it sometimes opens itself when previously closed. I don't like it at all.

- A "Ask an AI Chatbot" option which used to be dynamically added and caused hundreds of clicks on wrong items on the context menu (due to muscle memory), because when it got added the context menu resizes. Which was also a source of a lot of irritation. Luckily it seems they finally managed to fix this after 5 releases or so.

Oh, and at the start of this year they experimented with their own LLM a bit in the form of Orbit, but apparently that project has been shitcanned and memoryholed, and all current efforts seem to be based on interfacing with popular cloud-based AIs like ChatGPT, Claude, Copilot, Gemini and Mistral (likely for some $$$ in return, like the search engine deal with Google).

reddalo · 3 hours ago
Every time I reinstall Firefox on a new machine, the number of annoyances that I need to remove or change increases.

Putting back the home button, removing the tabs overview button, disabling sponsored suggestions in the toolbar, putting the search bar back, removing the new AI toolbar, disabling the "It's been a while since you've used Firefox, do you want to cleanup your profile?", disabling the long-click tab preview, disabling telemetry, etc. etc.

AuthAuth · 10 hours ago
All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and by not downloading extensions you don't like. And tons of people asked for that sidebar, by the way.

We have to put this all in context. Firefox is trying to diversify its revenue away from Google search. They are trying to provide users with a modern browser. That means adding the features people expect, like AI integration, and it's a nice bonus if the AI companies are willing to pay for it.

Wowfunhappy · 11 hours ago
> [Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc

I don't want any of this built into my web browser. Period.

This is coming from someone who pays for a Claude Max subscription! I use AI all the time, but I don't want it unless I ask for it!!!

dotancohen · 10 hours ago
Reread your post with your evil PM hat on. You just said "I'm willing to pay for AI". That's all they hear.

Dead Comment

Xelbair · 11 hours ago
> This whole backlash to Firefox wanting to introduce AI feels a little knee-jerky. We don't know if Firefox might want to roll out their own locally hosted LLM model that they then plug into... and if so, it would cut down on the majority of the knee-jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...

Because the phrase "AI-first browser" is meaningless corpospeak - it can mean anything or nothing and feels hollow. Reminiscent of all of Firefox's past failures.

I just want a good browser that respects my privacy and lets me run extensions that can hook into any point of page handling, not random experiments and random features that usually go against privacy or basically die within a short time frame.

nottorp · 35 minutes ago
It doesn't matter what exactly they want to do; what matters is that they're wasting resources on it instead of keeping the ... browsing part ... up to date.
tdeck · 7 hours ago
> We don't know if Firefox might want to roll out their own locally hosted LLM model that they then plug into... and if so, it would cut down on the majority of the knee-jerk complaints

Personally I'd prefer if Firefox didn't ship with 20 gigs of model weights.

infotainment · 12 hours ago
This 100% -- the AI features already in Firefox, for the most part, rely on local models. (Right now that's translation and tab grouping, IIRC.)

Local based AI features are great and I wish they were used more often, instead of just offloading everything to cloud services with questionable privacy.

_heimdall · 9 hours ago
Local models are nice for keeping the initial prompt and inference off someone else's machine, but there is still the question of what the AI feature will do with the data it produces.

I don't expect a business to build or maintain a suite of local-model features in a free-to-download browser without monetizing them somehow. If that monetization strategy means selling my data or having the local model bring in ads, for example, the value of a local model goes down significantly IMO.

BoredPositron · 12 hours ago
If we look at the latest AI features they implemented, it doesn't look like they are betting on local models anymore.
recursive · 12 hours ago
I don't feel like I want AI in my browser. I'm not sure what I'd do with it. Maybe translation?
clueless · 12 hours ago
yeah, translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc

All this would allow for a further breakdown of language barriers, and maybe the communities of various languages around the world could interact with each other much more on the same platforms/posts

actionfromafar · 12 hours ago
I like translation, it's come in handy a few times, and it's neat to know it's done locally.
ekr____ · 12 hours ago
FWIW, Firefox already has AI-based translation using local models.

Deleted Comment

goalieca · 11 hours ago
The UX changes and features remind us of Pocket and all the other low-value features that came with disruptive UX changes, as other commenters have noted.

Meanwhile, Mozilla canned the Servo and MDN projects, which really did provide value for their user base.

1shooner · 12 hours ago
I just know I've already had to chase down AI features in Firefox that I definitely did not ask for or activate, and don't recall consenting to.
isodev · 12 hours ago
There is also the matter of how the training data for these models was licensed. Local or not, it's still based on stolen content. And really, what transformative use case is there for AI in the browser? None of the ones currently available step beyond gimmicks that quickly get old and don't really add value.
xg15 · 12 hours ago
I don't think a locally hosted LLM would be powerful enough for the supposed "agentic browsing" scenarios - at least if the browser is still supposed to run on average desktop PCs.
lxgr · 11 hours ago
Not yet, but we’ll hopefully get there within at most a few years.
koolala · 9 hours ago
This is probably their plan to monetize this. They will partner with an AI company to "enhance" the browser with a paid cloud model, and there's no monetary incentive for the local model not to suck.
csydas · 3 hours ago
> We don't know if Firefox might want to roll out their own locally hosted LLM model that they then plug into...

https://blog.mozilla.org/wp-content/blogs.dir/278/files/2025...

it's the cornerstone of their strategy to invest in local, sovereign ai models in an attempt to court attention from persons / organizations wary of us tech

it's better to understand the concern over mozilla's announcement the following way i think:

- mozilla knows that their revenue from default search providers is going to dry up because ai is largely replacing manual searching

- mozilla (correctly) identifies that there is a potential market in eu for open, sovereign tech that is not reliant on us tech companies

- mozilla (incorrectly imo) believes that attaching ai to firefox is the answer for long term sustainability for mozilla

with this framing, mozilla has only a few options to get the revenue they're seeking according to their portfolio, and it involves either more search / ai deals with us tech companies (which they claim to want to avoid), or harvesting data and selling it like so many other companies that tossed ai onto software

the concerns about us tech stack domination are valid and there's probably a way to sustain mozilla by chasing this, but breaking the us tech stack dominance doesn't require another browser / ai model, there are plenty already. they need to help unseat stuff like gdocs / office / sharepoint and offer a real alternative for the eu / other interested parties -- simply adding ai is mozilla continuing their history of fad chasing and wondering why they don't make any money, and demonstrates a lack of understanding imo about, well, modern life

my concern over the announcement is that mozilla doesn't seem to have learned anything from their past attempts at chasing fads and likely they will end up in an even worse position

firefox and other mozilla products should be streamlined as much as possible to be the best they can be, with these kinds of side projects maintained as first-party extensions, not as the new focus of their development. they should invest the money they're planning to dump into their ai ambitions elsewhere, focusing on a proper open sovereign tech stack that they can then sell to the eu like they've identified in their portfolio statement

the announcement though makes it seem like mozilla believes they can just say ai and also get some of the ridiculous ai money, and that does not bode well for firefox as a browser or mozilla's future

TheRealPomax · 11 hours ago
I want the people who make Firefox to make decisions about Firefox based on what users have been asking for, instead of on whatever a for-profit CEO decides, which still isn't going to make them any money, just like every other plan pitched in the last 10 years that failed to turn their losing streak around.

It's not a knee-jerk reaction to "AI", it's a perfectly reasonable reaction to Mozilla yet again saying they're going to do something that the user base doesn't want, that won't regain them market share, and that's going to take tens of thousands of dev hours away from working on all the things that would make Firefox a better browser, rather than a marginally less unprofitable product.

nullbound · 11 hours ago
While I do sympathize with the thought behind it, the general user already equates an LLM chat box with 'better browsing'. In terms of simple positioning vis-a-vis a non-technical audience, this is one integration that does make fiscal sense... if Mozilla were a real business.

Now, personally, I would like to have sane defaults, where I can toggle stuff on and off, but we all know which way the wind blows in this case.

api · 12 hours ago
We're still in bubble-period hyper-polarized discourse: "shoehorn AI into absolutely everything and ram it down your throat" vs "all AI is bad and evil and destroying the world."
pferde · 7 hours ago
The former is a cause, the latter an effect of it.
ToucanLoucan · 12 hours ago
I don't want any AI in anything apart from the Copilot app, where the AI that I use is. I don't want it in my IDE. I don't want it in my browser. I don't want it in my messaging client. I don't want it in my email app. I want it in the app, where it is, where I can choose to use it, give it what it needs, and leave it at bloody that.
lxgr · 11 hours ago
I also want complete control over what data I provide to LLMs (at least as long as inference happens in the cloud), but I’d love to have them everywhere, not just in a chat UI (which I suspect will come to be seen as a pretty bizarre way of doing non-chat tasks on a computer).
zwnow · 4 hours ago
> I think people want AI in the browser

Sorry, but no. I don't want another human's work summarized by some tool that's incapable of reasoning. It could get the whole meaning of the text wrong. Same with real-time translation: languages are something even humans get wrong regularly, and I don't want some biased tool to do it for me.

ThrowawayTestr · 12 hours ago
I don't want to have to max out my gpu to browse reddit.

Deleted Comment

bigstrat2003 · 9 hours ago
This is like when people defend Windows 11's nonsense by saying "you can disable or remove that stuff". Yes, you can. But you shouldn't have to, and I personally prefer to use tools which don't shove useless things into the tool because it's trendy.
fsflover · 3 hours ago
The difference is that on Windows all unwanted features eventually become mandatory, with no way of switching them off. With Firefox, it never happens.
derekdahmer · 8 hours ago
How is this different from Linux? People happily spend hours customizing defaults in their OS. It's usually a point of praise for open-source software.
calvinmorrison · 8 hours ago
not to mention firefox routinely blows up any policies you set during upgrades, incompatibilities, and an endless about:config that is more opaque than a hunk of room temperature lard.
beached_whale · 9 hours ago
Easy for whom? 99% of people are not going to (or able to) set up Firefox policies.
koolala · 9 hours ago
Is it really in all 4 of those places? Just need to change it in the first two, right? I hate the new AI tab feature and wish they had a non-AI option.
phyzome · 7 hours ago
Even if we ignore things like "they're chasing AI fads instead of better things" and "they're adding attack surface" and so forth, and just focus on the disabling feature toggles thing...

... Mozilla has re-enabled AI-related toggles that people have disabled. (I've heard this from others and observed it myself.) They also keep adding new ones that aren't controlled by a master switch. They're getting pretty user-hostile.

otikik · an hour ago
> AI browsers are proliferating

Are they, though? I get bombarded by AI ads very frequently, and I have yet to see anything from those "AI browsers" mentioned in the article.

rythie · 4 hours ago
Waterfox is dependent on Firefox still being developed. Mozilla are adding these features to try to stay relevant and keep or gain market share. If this fails and Firefox goes away, Waterfox is unlikely to survive.
benrutter · 3 hours ago
That's true, but as a Waterfox user, I'm not worried!

If Firefox really, completely fails and nobody is able to continue the open source project, I'll just find a new browser. That's not a huge hassle; Waterfox does what I need in the here and now, and that's my only criterion.

reddalo · 3 hours ago
> I'll just find a new browser.

The problem is that if Firefox dies, there are no browsers left. I don't want to use a re-skin of Chrome.

renegat0x0 · 4 hours ago
A browser is a tool that allows you to browse the internet. It should be able to display HTML elements, and stuff.

LLMs are also a tool, but they are not necessary for web browsing. They should be installed into a browser as an extension, or integrated as one, so they can be easily enabled or disabled. Surely they should not be deeply intertwined with the browser, imho.