No AI* Here – A Response to Mozilla's Next Chapter
Posted by MrAlex94 7 hours ago
Comments
Comment by inkysigma 5 hours ago
Am I being overly critical here, or is this kind of a silly position to have right after talking about how neural machine translation is okay? Many of Firefox's LLM features, like summarization, are afaik powered by local models (hell, even Chrome has local model options). It's weird to say neural translation is not a black box but LLMs are somehow black boxes where we cannot hope to understand what they do with the data, especially when, viewed a bit fuzzily, LLMs are scaled-up versions of an architecture that was originally used for neural translation. Neural translation also has unverifiable behavior in the same sense.
I could interpret some of the data talk as being about non-local models, but this very much seems like a more general criticism of LLMs as a whole when talking about Firefox features. Moreover, some of the critiques, like verifiability of outputs and unlimited scope, still don't make sense in this context. Browser LLM features, except in explicitly AI browsers like Comet, have so far been scoped to fairly narrow tasks like translation or summarization. The broadest scope I can think of is the side panels that let you ask about a web page with context. Even then, I don't see what is inherently problematic about such scoping, since the output behavior is confined to the side panel.
Comment by jrjeksjd8d 2 hours ago
LLMs being applied to everything under the sun feels like we're solving problems that have other solutions, and the answers aren't necessarily correct or accurate. I don't need a dubiously accurate summary of an article in English, I can read and comprehend it just fine. The downside is real and the utility is limited.
Comment by schoen 16 minutes ago
The trouble is that statistical MT (the things that became neural net MT) started achieving better quality metrics than rule-based MT sometime around 2008 or 2010 (if I remember correctly), and the distance between them has widened since then. Rule-based systems have gotten a little better each year, while statistical systems have gotten a lot better each year, and are also now receiving correspondingly much more investment.
The statistical systems are especially good at using context to disambiguate linguistic ambiguities. When a word has multiple meanings, human beings guess which one is relevant from overall context (merging evidence upwards and downwards from multiple layers within the language understanding process!). Statistical MT systems seem to do something somewhat similar. Much as human beings don't even perceive how we knew which meaning was relevant (but we usually guessed the right one without even thinking about it), these systems usually also guess the right one using highly contextual evidence.
Linguistic example sentences like "time flies like an arrow" (my linguistics professor suggested "I can't wait for her to take me here") are formally susceptible of many different interpretations, each of which can be considered correct, but when we see or hear such sentences within a larger context, we somehow tend to know which interpretation is most relevant and so most plausible. We might never be able to replicate some of that with consciously-engineered rulesets!
Comment by tdeck 2 hours ago
Comment by figmert 4 minutes ago
Comment by tdeck 48 seconds ago
Comment by simonw 1 hour ago
Comment by runjake 48 minutes ago
I mainly use a custom prompt using ChatGPT via the Raycast app and the Raycast browser extension.
That said, I don’t feel comfortable with the level of AI being shoved into browsers by their vendors.
Comment by mikestorrent 46 minutes ago
If the purpose is to read someone's _writing_, then I'm going to read it, for the sheer joy of consuming the language. Nothing will take that from me.
If the purpose is to get some critical piece of information I need quickly, then no, I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.
Comment by badbotty 1 hour ago
Comment by wkat4242 1 hour ago
If it does interest me then I can explore it. I guess I do this once a week or so, not a lot.
Comment by user3939382 3 hours ago
Comment by PunchyHamster 2 hours ago
Comment by Cheer2171 5 hours ago
From this point of view, uBlock Origin is also effectively un-auditable.
Or maybe your point about them imagining AI as non-local proprietary models is the only thing that makes this make sense. I think even technical people are being suckered by the marketing that "AI" === ChatGPT/Claude/Gemini style cloud-hosted proprietary models connected to chat UIs.
Comment by koolala 3 hours ago
Comment by kbelder 3 hours ago
local, open model
local, proprietary model
remote, open model (are there these?)
remote, proprietary model
There is almost no harm in a local, open model. Conversely, a remote, proprietary model should always require opting in with clear disclaimers. It needs to be proportional.
Comment by koolala 3 hours ago
Comment by kevmo314 5 hours ago
This really weakens the point of the post. It strikes me as: "we just don't like those AIs." Bergamot's model's behavior is no more or less auditable, and no more or less of a black box, than an LLM's behavior. If you really want to go dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: https://marian-nmt.github.io/docs/
The premise of non-corporate AI is respectable, but I don't understand the hate for LLMs. Local inference is laudable, but being closed-minded about solutions is not interesting.
Comment by jazzyjackson 5 hours ago
I could say it's equally closed-minded not to sympathize with this position, or the various reasoning behind it. For me, I feel that my spoken language is affected by those I interact with, and the more exposed someone is to a bot, the more they will speak like that bot. I don't want my language to be pulled towards the average redditor, so I choose not to interact with LLMs (I still use them for code generation, but I wouldn't if I used code for self-expression; I just refuse to have a back-and-forth conversation on any topic). It's like that family that tried raising a chimp alongside a baby: the chimp did pick up some human-like behavior, but the baby human adapted to chimp-like behavior much faster, so they abandoned the experiment.
Comment by bee_rider 5 hours ago
I try to be polite just to not gain bad habits. But, for example, chatGPT is extremely confident, often wrong, and very weasely about it, so it can be hard to be “nice” to it (especially knowing that under the hood it has no feelings). It can be annoying when you bounce the third idea off the thing and it confidently replies with wrong instructions.
Anyway, I’ve been less worried about running local models, mostly just because I’m running them CPU-only. The capacity is just so limited, they don’t enter the uncanny valley where they can become truly annoying.
Comment by kbelder 3 hours ago
Comment by _heimdall 3 hours ago
I do also find that only using a turn signal when others are around is a good reinforcement to always be aware of my surroundings. I feel like a jerk when I don't use one and realize there was someone in the area, just as I feel like a jerk when I realize I didn't turn off my brights for an approaching car at night. In both cases, feeling like a jerk reminds me to pay more attention while driving.
Comment by jacquesm 2 hours ago
Signalling your turns is zero cost, there is no reason to optimize this.
Comment by _heimdall 46 minutes ago
In my experience, I'm best served by trying to reinforce awareness rather than relying on it. If I got into the habit of always using blinkers regardless of my surroundings I would end up paying less attention while driving.
I rode motorcycles for years and got very much into the habit of assuming that no one on the road actually knows I'm there, whether I'm on an old parallel twin or driving a 20' long truck. I need that focus while driving, and using blinkers or my brights as motivation for paying attention works to keep me focused on the road.
Signaling my turns is zero cost with regards to that action. At least for me, signaling as a matter of habit comes at the cost of focus.
Comment by marssaxman 2 minutes ago
I have also ridden motorcycles for many years, and I am very familiar with the assumption that nobody on the road knows I exist. I still signal, all the time, every time, because it is a habit which requires no thinking. It would distract me more if I had to think about whether signalling was necessary in this case.
Comment by eszed 2 hours ago
This has a failure state of "when there's a nearby car [or, more realistically, cyclist / pedestrian] of which I am not aware". Knowing myself to be fallible, I always use my turn signals.
I do take your point about turn signals being a reminder to be aware. That's good, but could also work while, you know, still using them, just in case.
Comment by _heimdall 42 minutes ago
I've been driving for decades now and have plenty of examples of when I was and wasn't paying close enough attention behind the wheel. I was raising this only as an interesting different take or lesson in my own experience, not to look for approval or disagreement.
Comment by kevmo314 5 hours ago
I have no opinion on not wanting to converse with a machine, that is a perfectly valid preference. I am referring more to the blog post's position where it seems to advocate against itself.
Comment by hatefulheart 2 hours ago
It’s insane this has to be pointed out to you but here we go.
Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.
Comment by PunchyHamster 2 hours ago
It's mostly knee-jerk reaction from having AI forced upon us from every direction, not just the ones that make sense
Comment by zdragnar 5 hours ago
The focused purpose, I think, gives it more of a "purpose-built tool" feel than that of a generic "chatbot that might be better at some tasks than others" entity. There's no fake persona to interact with, just an algorithm with data in and out.
The latter portion is less a technical and more an emotional nuance, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... if that were the limit of how they added AI to the browser.
Comment by kevmo314 5 hours ago
> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.
Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.
The part that doesn't sit well with me is that Mozilla wants to egress data. The fact that it's an LLM, I really don't care about.
Comment by _heimdall 3 hours ago
A local model will have fewer filters applied to the output, but I can still only evaluate the input/output pairs.
Comment by XorNot 3 hours ago
An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.
No such example or even test as far as I know exists for any of the summary or search AIs since they expressly lose data in processing (I suppose you could construct multiple texts with the same meanings and see if they summarize equivalently - but it's certainly far harder to prove anything).
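For what it's worth, that round-trip idea can be turned into a crude automated check: translate out and back, then measure how much of the original survives. A minimal sketch, assuming a placeholder translate() function standing in for whatever MT backend you actually have (Bergamot, Marian, a cloud API); the similarity ratio is only a rough proxy, since legitimate paraphrasing also lowers it:

    import difflib

    def translate(text: str, src: str, dst: str) -> str:
        # Placeholder: wire up any MT backend here (local or remote).
        # The name and signature are illustrative, not a real API.
        raise NotImplementedError("plug in a translation backend")

    def round_trip_score(text: str, src: str = "en", pivot: str = "de") -> float:
        # Translate src -> pivot -> src and measure how much of the original
        # surface form survives. A score near 1.0 suggests the content was
        # preserved; a low score flags a lossy or inconsistent translation.
        there = translate(text, src, pivot)
        back = translate(there, pivot, src)
        return difflib.SequenceMatcher(None, text.lower(), back.lower()).ratio()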
Comment by charcircuit 2 hours ago
Comment by CivBase 3 hours ago
To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.
It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.
Comment by clueless 6 hours ago
[Update]: as I posted below, sample use cases would include translation, article summarization, asking questions about a long wiki page... and maybe some agents built in as well: parallelizing a form-filling/ecommerce task, having the agent transcribe/translate audio/video in real time, etc.
Comment by mindcrash 5 hours ago
And now we have:
- An extra toolbar nobody asked for at the side. And while it contains some extra features now, I'm pretty sure they added it just to have some prominent space for an "Open AI Chatbot" button in the UI. And it is irritating as fuck because it remembers its state per window. So if you have one window with the sidebar open, and you close it in another, then move back to the first and open a new window, it thinks "hey, I need to show a sidebar which my user never asked for!" I also believe it sometimes opens itself when previously closed. I don't like it at all.
- An "Ask an AI Chatbot" option which used to be dynamically added and caused hundreds of clicks on wrong items in the context menu (due to muscle memory), because the context menu resized when it got added. That was also a source of a lot of irritation. Luckily it seems they finally managed to fix this after 5 releases or so.
Oh, and at the start of this year they experimented with their own LLM a bit in the form of Orbit, but apparently that project has been shitcanned and memoryholed, and all current efforts seem to be based on interfacing with popular cloud-based AIs like ChatGPT, Claude, Copilot, Gemini and Mistral (likely for some $$$ in return, like the search engine deal with Google).
Comment by AuthAuth 4 hours ago
We have to put this all in context. Firefox is trying to diversify its revenue away from Google search. They are trying to provide users with a modern browser. This means adding the features that people expect, like AI integration, and it's a nice bonus if the AI companies are willing to pay for that.
Comment by monegator 4 minutes ago
Until you can't. Because the option goes from being an entry in the GUI to something in about:config, then it's removed from about:config and you have to add it manually, and then it's removed completely. It's just a matter of time, but I bet that soon we'll see on Nightly that browser.ml.enable = false and company do nothing.
Comment by move-on-by 2 hours ago
According to the privacy policy changes, they are selling data (per the legal definition of selling data) to data partners. https://arstechnica.com/tech-policy/2025/02/firefox-deletes-...
Comment by hannasanarion 1 hour ago
For all purposes actually relevant to privacy, the updated language is more specific and just as strong.
Comment by koolala 3 hours ago
Comment by austhrow743 1 hour ago
https://support.mozilla.org/en-US/kb/ai-chatbot This page not only prominently features cloud based AI solutions, I can't actually even see local AI as an option.
Comment by koolala 1 hour ago
Comment by Wowfunhappy 5 hours ago
I don't want any of this built into my web browser. Period.
This is coming from someone who pays for a Claude Max subscription! I use AI all the time, but I don't want it unless I ask for it!!!
Comment by dotancohen 4 hours ago
Comment by wkat4242 1 hour ago
I don't understand why these CEOs are so confident they're standing out from the rest. Because really, they don't.
Right now Firefox is a browser as good as Chrome, and better in a few niche things, but it's having a deeply difficult time getting and keeping market share.
I don't see their big master plan for when Firefox is just as good as the other AI-powered browsers. What will make people choose Mozilla? It's not like they're the first to come up with this idea, and they don't even have their own models, so one way or another they're going to play second fiddle to a competitor.
I think there's a really, really strong element of 2. ??? / 3. profit!!! in all this. And not just at Mozilla. But more so there.
I mean, OpenAI has first-mover advantage; their moat is piling up legislation to slow down the others. Microsoft has all their Office users and will cram their AI down their throats whether they want it or not; they're way behind on model development due to strategic miscalculations, but they traded their place as a hyperscaler for a ticket into the big game with OpenAI. Google has fuck-you money and will do the same as Microsoft with their search and mail users.
But Mozilla? "Oh, we want to get more into advertising." Ehm, yeah, basically the thing that will alienate your last few supporters, while entering a market that people with 1000x more money than you have already divided entirely between them. Being slightly more "ethical" will be laughed away by market forces.
Mozilla has the luck that it doesn't have too many independent investors, not many people screaming "what are we doing about AI, because everyone else is doing it?" They should have a little more insight and less pressure, but instead they jump into the same pool with much bigger sharks.
In some ways I think Mozilla leadership still sees itself as a big tech player that is temporarily a little embarrassed on the field, not as the second-rank player it actually is, one that has already thoroughly lost and must really find something unique to have a reason to exist. Because being a small player is not necessarily bad; many small outfits do great. But it requires a strong niche you're really, really good at, better than all the rest. That kind of vision I just don't see from Mozilla.
Comment by catlover76 4 hours ago
Comment by Xelbair 5 hours ago
Because the phrase "AI first browser" is meaningless corpospeak - it can be anything or nothing and feels hollow. Reminiscent of all of Firefox's past failures.
I just want a good browser that respects my privacy and lets me run extensions that can hook into any point of page handling, not random experiments and random features that usually go against privacy or basically die within a short time frame.
Comment by infotainment 6 hours ago
Local AI features are great and I wish they were used more often, instead of everything just being offloaded to cloud services with questionable privacy.
Comment by _heimdall 3 hours ago
I don't expect a business to make or maintain a suite of local model features in a browser free to download without monetizing the feature somehow. If said monetization strategy might mean selling my data or having the local model bring in ads, for example, the value of a local model goes down significantly IMO.
Comment by BoredPositron 6 hours ago
Comment by recursive 6 hours ago
Comment by clueless 6 hours ago
All this would allow for a further breakdown of language barriers, and maybe the communities of various languages around the world could interact with each other much more on the same platforms/posts
Comment by recursive 6 hours ago
Comment by dawnerd 6 hours ago
Comment by charcircuit 2 hours ago
Comment by nijave 6 hours ago
Agents (like a research agent) could also be interesting
Comment by actionfromafar 6 hours ago
Comment by ekr____ 6 hours ago
Comment by tdeck 2 hours ago
Personally I'd prefer if Firefox didn't ship with 20 gigs of model weights.
Comment by 1shooner 6 hours ago
Comment by goalieca 5 hours ago
Meanwhile, Mozilla canned the Servo and MDN projects, which really did provide value for their user base.
Comment by xg15 6 hours ago
Comment by koolala 3 hours ago
Comment by lxgr 6 hours ago
Comment by Dylan16807 3 hours ago
And it doesn't look like the average computer with Steam installed is going to get above 8GB VRAM for a long time, let alone the average computer in general. Even focusing on new computers, it doesn't look that promising.
Comment by isodev 6 hours ago
Comment by TheRealPomax 6 hours ago
It's not a knee-jerk reaction to "AI", it's a perfectly reasonable reaction to Mozilla yet again saying they're going to do something that the user base doesn't want, that won't regain them market share, and that's going to take tens of thousands of dev hours away from working on all the things that would make Firefox a better browser, rather than a marginally less nonprofitable product.
Comment by nullbound 5 hours ago
Now, personally, I would like to have sane defaults, where I can toggle stuff on and off, but we all know which way the wind blows in this case.
Comment by Turskarama 3 hours ago
Comment by chillfox 3 hours ago
Comment by TheRealPomax 3 hours ago
So the only user base is the power user. And then yes: sane defaults, and a way to turn things on and off. And functionality that makes power users tell their power user friends to give FF a try again. Because if you can't even do that, Firefox firmly deserves (and right now, it does) its "we don't even really rank" position in the browser market.
Comment by kbelder 3 hours ago
LLM integration... is arguable. Maybe it'll make Chrome worse, maybe not. Clunky and obtrusive integration certainly will.
Comment by api 6 hours ago
Comment by pferde 1 hour ago
Comment by ToucanLoucan 6 hours ago
Comment by lxgr 5 hours ago
Comment by ThrowawayTestr 6 hours ago
Comment by someothherguyy 5 hours ago
https://mozilla.github.io/policy-templates/#generativeai
https://mozilla.github.io/policy-templates/#preferences
https://searchfox.org/firefox-main/source/browser/app/profil...
https://searchfox.org/firefox-main/source/modules/libpref/in...
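If it helps anyone, the practical upshot of those links is that the AI/ML features can apparently be pinned off via enterprise policy rather than chasing individual toggles. A minimal policies.json sketch using the Preferences policy; the exact pref name below (browser.ml.chat.enabled) is my assumption based on the prefs mentioned elsewhere in this thread, so verify it against the policy templates before relying on it:

    {
      "policies": {
        "Preferences": {
          "browser.ml.chat.enabled": {
            "Value": false,
            "Status": "locked"
          }
        }
      }
    }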
Comment by bigstrat2003 3 hours ago
Comment by derekdahmer 2 hours ago
Comment by PunchyHamster 2 hours ago
Comment by calvinmorrison 2 hours ago
Comment by beached_whale 3 hours ago
Comment by koolala 3 hours ago
Comment by phyzome 1 hour ago
... Mozilla has re-enabled AI-related toggles that people have disabled. (I've heard this from others and observed it myself.) They also keep adding new ones that aren't controlled by a master switch. They're getting pretty user-hostile.
Comment by nirui 32 minutes ago
> If AI browsers dominate and then falter, if users discover they want something simpler and more trustworthy, Waterfox will still be here, marching patiently along.
This is basically their train of thought: provide something different for the people who truly need it. There's nothing to criticize about that.
However, let's not forget that other browsers can remove/disable AI features just as fast as they add them. If Waterfox wants to be *more than just an alternative* (a.k.a. be a competitor), they need to discover what people actually need and optimize heavily for that. But this is hard to do because people don't show their true motives.
Maybe one day it will turn out that people really do just want an AI that "thinks for them". That would be awkward, to say the least.
Comment by htx80nerd 2 hours ago
Comment by SoftTalker 1 hour ago
Comment by webstrand 1 hour ago
Comment by chauhankiran 1 hour ago
Comment by koolala 3 hours ago
Comment by koolala 1 minute ago
Looks like they're independent now, nice.
Comment by doubtfuly 4 hours ago
Comment by koolala 3 hours ago
Comment by Groxx 3 hours ago
Comment by koolala 2 hours ago
Comment by aag 4 hours ago
> Alphabet themselves reportedly see the writing on the wall, developing what appears to be a new browser separate from Chrome.
Comment by Glant 4 hours ago
https://labs.google/disco https://news.ycombinator.com/item?id=46240952
Comment by zavec 6 hours ago
Comment by ekr____ 5 hours ago
Comment by autoexec 4 hours ago
That said, they're admittedly terrible about keeping their documentation updated and letting users know about added/deprecated settings, and they've even been known to go in and modify settings after you've explicitly changed them from the defaults, so the PSA isn't entirely unjustified.
Comment by ekr____ 3 hours ago
"Two other forms of advanced configuration allow even further customization: about:config preference modifications and userChrome.css or userContent.css custom style rules. However, Mozilla highly recommends that only the developers consider these customizations, as they could cause unexpected behavior or even break Firefox. Firefox is a work in progress and, to allow for continuous innovation, Mozilla cannot guarantee that future updates won’t impact these customizations."
https://support.mozilla.org/en-US/kb/firefox-advanced-custom...
Comment by lerp-io 4 hours ago
at this point it’s more so a sandbox runtime bordering an OS, but okay
Comment by Papazsazsa 3 hours ago
Comment by bigstrat2003 3 hours ago
Comment by 627467 3 hours ago
Comment by fguerraz 3 hours ago
Comment by AnonC 3 hours ago
Comment by aleph_minus_one 3 hours ago
What do you say about the following link, then?
Comment by AnonC 3 hours ago
Comment by Groxx 3 hours ago
I agree it's counter-evidence right now, and I think there has been a way to donate for a long time now (just to "mozilla", not "firefox" or setting any restrictions), but I'm not sure what the historical option has been...
Comment by human_llm 1 hour ago
Comment by ChrisArchitect 7 hours ago
Mozilla appoints new CEO Anthony Enzor-Demeo
Comment by hexasquid 3 hours ago
Comment by phyzome 1 hour ago
Comment by almosthere 7 hours ago
Comment by MrAlex94 7 hours ago
Comment by Qem 4 hours ago
It's more likely it will try to kill us by talking depressed people into suicide and providing virtual ersatz boyfriends/girlfriends to replace real human relationships, which is a functional equivalent of cyber-neutering, given that people can't have children by dating LLMs.
Comment by a24j 1 hour ago
Comment by SV_BubbleTime 1 hour ago
Comment by smt88 6 hours ago
In many other areas, there are zero "no AI" options at all.
Comment by mmaunder 2 hours ago
The black box objection disqualifies Widevine.