Any business slapping a “privacy first” label on something that works only on their servers, and requires full access to plaintext data to operate, should be assumed to be lying.
I’ve been annoyed by Proton for a long while; they do (did?) provide a seemingly adequate service, but claims like “your mails are safe”, when they obviously had to have them in plaintext on their server, even if only for compatibility with current standards, kept me away from them.
they obviously had to have them in plaintext on their server, even if only for compatibility with current standards
I don’t think that’s obvious at all. On the contrary, that’s a pretty bold claim to make, do you have any evidence that they’re doing this?
Incoming emails that aren’t from Proton or PGP-encrypted (which is like 99% of email) arrive at Proton’s servers via TLS, which they terminate, at which point they have the full plaintext. This is not some conspiracy; this is just how email works.
Now, Proton and various other “encrypted email” services take that plaintext, encrypt it with your public key, store the ciphertext on their servers, and are then supposed to discard the plaintext, so that in case of a future court order they wouldn’t have the plaintext anymore.
But you can’t be certain they aren’t lying, since they necessarily have access to the plaintext for email to function. So “we can’t read your emails” comes with a huge asterisk: it only applies to mail sent between Proton accounts or otherwise PGP-encrypted. Your average bank statements and tax forms are all accessible to Proton (you’re relying solely on their promise not to read them).
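In rough pseudocode, the flow described above looks something like this. This is purely an illustrative toy, not Proton’s actual code: the XOR “cipher” is a symmetric stand-in for real PGP public-key encryption, and all the names are made up.

```python
import hashlib


def toy_encrypt(plaintext: str, key: str) -> bytes:
    # Symmetric XOR stand-in for real public-key (PGP) encryption.
    # NOT secure; for illustration only.
    pad = hashlib.sha256(key.encode()).digest()
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(plaintext.encode()))


def toy_decrypt(ciphertext: bytes, key: str) -> str:
    pad = hashlib.sha256(key.encode()).digest()
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(ciphertext)).decode()


def receive_email(plaintext: str, user_key: str) -> bytes:
    # 1. Mail arrives over TLS: the server holds the full plaintext here.
    # 2. Server encrypts it to the user's key for storage ("zero-access").
    stored = toy_encrypt(plaintext, user_key)
    # 3. The server is *supposed* to discard the plaintext now;
    #    this is exactly the step you cannot verify from outside.
    return stored
```

Only the key holder can read what’s stored, but whether step 3 actually happens is pure trust.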
Ok, yeah, that’s a far cry from Proton actually “having your unencrypted emails on their servers”, as if they weren’t encrypted at rest.
There’s the standard layer of trust you need to have in a third party when you’re not self-hosting. Proton has proven so far that they do in fact encrypt your emails and haven’t given any up to authorities when ordered to, so I’m not sure where the issue is. I thought they were caught not encrypting them or something.
Proton has my vote for fastest company ever to completely enshittify.
How have they enshittified? I haven’t noticed anything about their service get worse since they started.
Does it even count as enshittifying if they were born that way?
How much longer until the AI bubble pops? I’m tired of this.
We’re still in the “IT’S GETTING BILLIONS IN INVESTMENTS” part. Can’t wait for this to run out too.
It’s when the coffers of Microsoft, Amazon, Meta and the investment banks dry up. All of them are losing billions every month, but it’s all driven by fewer than 10 companies. Nvidia is lapping up the money, of course, but once the AI companies stop buying GPUs in crazy numbers it’s going to be a rocky ride down.
Is it like crypto, where CPUs were good, then GPUs, then FPGAs, then ASICs? Or is this different?
I think it’s different. The fundamental operation of all these models is multiplying big matrices of numbers together. GPUs are already optimised for this. Crypto was trying to make the algorithm fit the GPU rather than it being a natural fit.
With FPGAs you take a 10x loss in clock speed but can have precisely the algorithm you want. ASICs then give you the clock speed back.
GPUs are already ASICs that implement the ideal operation for ML/AI, so FPGAs would be a backwards step.
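To make the “big matrices” point concrete: a toy attention-score computation (hypothetical tiny shapes, nothing model-specific) is just chained matrix multiplies, which is the one operation GPUs are already purpose-built for.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16          # toy sizes; real models use thousands

x = rng.standard_normal((seq_len, d_model))    # token embeddings
w_q = rng.standard_normal((d_model, d_model))  # query projection
w_k = rng.standard_normal((d_model, d_model))  # key projection

q = x @ w_q                              # matmul #1
k = x @ w_k                              # matmul #2
scores = (q @ k.T) / np.sqrt(d_model)    # matmul #3: attention scores
print(scores.shape)                      # (8, 8): one score per token pair
```

Crypto hashing had to be contorted onto this hardware; here the workload is natively matmul-shaped, so there’s much less room for an FPGA/ASIC leapfrog.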
It’s probably different. The crypto bubble couldn’t actually do much in the field of useful things.
Now, I’m saying that with a HUGE grain of salt, but there are decent applications for LLMs (let’s not call them AI). Unfortunately, those uses aren’t really what any of the businesses pouring tons of money into their “AI” offerings are aiming at.
I kinda hope we’ll get better LLM hardware to operate privately, using ethically sourced models, because some of this stuff is really neat. But that’s not the push they’re going for right now. Fortunately, we can already sort of do that, although the sourcing of many publicly available models is currently… not that great.
There’s absolutely a push for specialized hardware: look up the company called Groq!
Here’s the thing: it kind of already has. The new AI push is about smaller projects and AI agents like Claude Code and GitHub Copilot integration. MCPs are also starting to pick up steam as a way to refine prompt engineering. The basic AI “bubble” popped already; what we’re seeing now is an odd arms race of smaller AI projects, thanks to companies like DeepSeek pushing AI hosting costs so low that anyone can reasonably host and tweak their own LLMs without it costing a fortune.

It’s really an interesting thing to watch, but honestly I don’t think we’re going to see the major gains the tech industry is pushing for anytime soon. Take any claims of AGI and OpenAI “breakthroughs” with a mountain of salt, because they will do anything to keep the hype up and drive up their stock prices. Sam Altman is a con man and nothing more; don’t believe what he says.
You’re saying the AI bubble has popped because even more smaller companies and individuals are getting in on the action?
That’s kind of the definition of a bubble, actually: more and more people trying to make money on a trend that doesn’t have that much real value in it. Nearly the same thing happened with the dotcom bubble. It wasn’t that the web/tech wasn’t valuable (it’s now the most valuable sector of the world economy), but while the bubble expanded, more was being invested than it was worth, because no one wanted to miss out and it was accessible enough that almost anyone could try it.
I literally said exactly what you’re explaining. I’m not sure what you’re trying to accomplish here…
✨
depends on what and with whom. based on my current jobs with smaller companies and start ups? soon. they can’t afford the tech debt they’ve brought onto themselves. big companies? who knows.
Time to face the facts: this utter shit is here to stay, just like every other bit of enshittification we get exposed to.
I’m just saying Andy sucking up to Trump is a red flag. I’m cancelling in 2026 🫠
What are you considering as alternatives?
I highly suggest Tuta, https://tuta.com/, or other conventional mail boxes like https://mailbox.org/en/
The worst part is that once again, proton is trying to convince its users that it’s more secure than it really is. You have to wonder what else they are lying or deceiving about.
Both your take, and the author, seem to not understand how LLMs work. At all.
At some point, yes, an LLM model has to process clear text tokens. There’s no getting around that. Anyone who creates an LLM that can process 30 billion parameters while encrypted will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but goes farther than duck.ai at respecting privacy. Your threat model and equipment mean YOU make a decision for YOUR needs. This is an option. This is not trying to be one size fits all. You don’t HAVE to use it. It’s not being forced down your throat like Gemini or CoPilot.
And their LLM? It’s Mistral, OpenHands and OLMO, all open source. It’s in their documentation. So this article straight up lies about that. Like… did Google write this article? It’s simply propaganda.
Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it’s basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that’s obviously their bridge. But it’s not a default setup. It’s an option you have to set up. It’s not for everyone. Some users want that. It’s not forced on everyone. Chill TF out.
If an AI can work on encrypted data, it’s not encrypted.
Their AI is not local, so adding it to your email means breaking e2ee. That’s to some extent fine. You can make an informed decision about it.
But Proton is not putting warning labels on this. They are trying to confuse people into thinking it has the same security as their e2ee mails. Just look at the “zero trust” bullshit on Proton’s own page.
Where does it say “zero trust” ‘on Proton’s own page’? It does not say “zero-trust” anywhere; it says “zero-access”. The data is encrypted at rest, so it is not e2ee. They never mention end-to-end encryption for Lumo, except for ghost mode, and there they are talking about the chat once it’s complete and you choose to leave it there to use later, not about the prompts you send in.
Zero-access encryption
Your chats are stored using our battle-tested zero-access encryption, so even we can’t read them, similar to other Proton services such as Proton Mail, Proton Drive, and Proton Pass. Our encryption is open source and trusted by over 100 million people to secure their data.
Which means that they are not advertising anything they are not doing or cannot do.
By posting this disinformation, all you’re achieving is pushing people back to all the shit “free” services out there, because many will start believing that privacy is way harder than it actually is (“so what’s the point”) or, even worse, that no alternative will help them be more private, so they might as well stop trying.
Scribe can be local, if that’s what you are referring to.
They also have a specific section on it at https://proton.me/support/proton-scribe-writing-assistant#local-or-server
Also, emails are for the most part not e2ee; they can’t be, because the other party isn’t using encryption. Proton uses “zero-access” encryption, which is different: Proton gets the email in clear text, encrypts it with your public PGP key, deletes the original, and sends it to you.
See https://proton.me/support/proton-mail-encryption-explained
The email is encrypted in transit using TLS. It is then unencrypted and re-encrypted (by us) for storage on our servers using zero-access encryption. Once zero-access encryption has been applied, no-one except you can access emails stored on our servers (including us). It is not end-to-end encrypted, however, and might be accessible to the sender’s email service.
My friend, I think the confusion stems from you thinking you have deep technical understanding on this, when everything you say demonstrates that you don’t.
First off, you don’t even know the terminology. A local LLM is one YOU run on YOUR machine.
Lumo apparently runs on Proton’s servers, where their email and docs all are as well. So I’m not sure what “Their AI is not local!” even means, other than that you don’t know what LLMs do or what they actually are. Do you expect a 32B LLM that would need about a 32 GB video card to be downloaded and run in a browser? Buddy… just… no.
Look, Proton can at any time MITM attack your email, or, if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can do that: Mullvad can, Nord can, take your pick. That’s just a fact. Google’s business model is to MITM attack your life, so we already have the counterfactual. So your threat model needs to include how much you trust the entity handling your data not to do that, whether intentionally or by letting others in through negligence.
There is no such thing as e2ee LLMs. That’s not how any of this works. Doing e2ee for the chats to get what you type into the LLM context window, letting the LLM process tokens the only way it can, getting you back your response, and keeping no logs or data is about as good as it gets short of a local LLM (which, remember, means on YOUR machine). If that’s unacceptable for you, then don’t use it. But don’t brandish your ignorance like you’re some expert, insisting that everyone on earth adhere to whatever ill-informed “standards” you think up.
Also, clearly you aren’t using Proton anyway, because if you want to search the text of your emails, you have to process that locally, and you have to click through two separate warnings that tell you in bold text, “This breaks the e2ee! Are you REALLY sure you want to do this?” So your complaint about warnings is just a flag saying you don’t actually know and are just guessing.
A local LLM is one YOU run on YOUR machine.
Yes, that is exactly what I am saying. You seem to be confused by basic English.
Look, Proton can at any time MITM attack your email
They are not supposed to be able to, and well-designed e2ee services can’t be. That’s the whole point of e2ee.
There is no such thing as e2ee LLMs. That’s not how any of this works.
I know. When did I say there is?
They are not supposed to be able to, and well-designed e2ee services can’t be. That’s the whole point of e2ee.
You’re using their client, and you get a fresh copy every time it changes. Of course you are vulnerable to a MITM attack if they chose to attempt one.
If you insist on being a fanboy then go ahead. But this is like arguing a bulletproof vest is useless because it doesn’t cover your entire body.
Or because the bulletproof vest company might sell you a faulty one as part of a conspiracy to kill you.
So then you object to the premise that any LLM setup that isn’t local can ever be “secure”, and can’t seem to articulate that.
What exactly is dishonest here? The language on their site is factually accurate, I’ve had to read it 7 times today because of you all. You just object to the premise of non-local LLMs and are, IMO, disingenuously making that a “brand issue” because…why? It sounds like a very emotional argument as it’s not backed by any technical discussion beyond “local only secure, nothing else.”
Beyond the fact that
They are not supposed to be able to, and well-designed e2ee services can’t be.
So then you already trust that their system is well designed? What is this cognitive dissonance where they can secure the relatively insecure format of email, but can’t figure out TLS and flushing logs for an LLM on their own servers? It’s not even a complicated setup: TLS to the context window, keep no logs, flush the data. How do you think no-log VPNs work? This isn’t far off from that.
What exactly is dishonest here? The language on their site is factually accurate, I’ve had to read it 7 times today because of you all.
I object to how it is written. Yes, technically it is not wrong. But it intentionally uses confusing language and obscure technical terminology to imply it is as secure as e2ee. They compare it to Proton Mail and Drive, which are supposedly e2ee.
They compare it to Proton Mail and Drive, which are supposedly e2ee.
Only Drive is. Email is not always e2ee; it uses zero-access encryption, which I believe is the exact same mechanism used by this chatbot, so the comparison is quite fair tbh.
It is e2ee – with the LLM context window!
When you email someone outside Proton servers, doesn’t the same thing happen anyway? But the LLM is on Proton servers, so what’s the actual vulnerability?
Mullvad FTW
Yes, indeed. Even so, just because there is a workaround, we should not ignore the issue (governments descending into fascism).
Very true
Sauce?
Zero-access encryption
Your chats are stored using our battle-tested zero-access encryption, so even we can’t read them, similar to other Proton services such as Proton Mail, Proton Drive, and Proton Pass.
From Proton’s own website.
And why this is not true is explained in the article from the main post, and can also easily be figured out with a little common sense (an AI can’t respond to messages it can’t understand, so the AI must be able to decrypt them).
They actually don’t explain it in the article. The author doesn’t seem to understand why there is a claim of e2ee chat history and of zero-access for chats. The point of zero-access is trust: you need to trust the provider to do it, because it’s not cryptographically verifiable. Upstream there is no encryption, and zero-access means providing the service (usually on unencrypted data), then encrypting and discarding the plaintext.
Of course the model needs access to the context in plaintext, exactly like Proton has access to emails sent to non-PGP addresses. What they can do is encrypt the chat histories, because those don’t need active processing, and encrypt on the fly the communication between the model (which needs plaintext access) and the client. The same happens with Scribe.
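As an illustrative sketch of that split (hypothetical names, a toy XOR cipher standing in for the real cryptography; not Proton’s code): the live turn is plaintext on the server, while only the stored history is unreadable to the provider.

```python
import hashlib


def xor_cipher(data: bytes, key: str) -> bytes:
    # Toy symmetric stand-in for the user's real key material. NOT secure.
    pad = hashlib.sha256(key.encode()).digest()
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(data))


def fake_model(prompt: str) -> str:
    # Stand-in for the LLM: it necessarily sees plaintext tokens.
    return f"reply to: {prompt}"


history: list[bytes] = []  # what the provider keeps: ciphertext only


def chat_turn(prompt: str, user_key: str) -> str:
    reply = fake_model(prompt)                    # plaintext in the context window
    record = f"{prompt}\n{reply}".encode()
    history.append(xor_cipher(record, user_key))  # encrypted at rest, "zero-access"
    return reply                                  # sent back to the client over TLS
```

The distinction the marketing blurs is exactly the gap between those two lines: the context window versus the stored record.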
I personally can’t stand LLMs, I am waiting eagerly for this bubble to collapse, but this article is essentially a nothing burger.
You understand that. I understand that. But try to read it from the point of view of an average user who knows next to nothing about cybersecurity and LLMs. It sounds like the e2ee that Proton Mail and Drive are famous for. To us that’s obviously impossible, but most people will interpret the marketing this way.
It’s intentional deception, using technical terms to confuse nontechnical customers.
How would you explain it in a way that is nontechnical, accurate, and differentiates you from all the other companies that aren’t doing anything even remotely similar? I’m asking genuinely, because from the perspective of a user who has decided to trust the company, zero-access is functionally much closer to e2ee than it is to “regular services”, which are the alternative.
The easiest way is to explain the consequence:
We can’t access your chat history retroactively, but we can start wiretapping your future chats.
If that is too honest for you, then just explain that the data is encrypted after the LLM reads it, instead of using technical terms like “zero access”.
This I can agree on. They would have been better served by making it clearer to their users that it is not “zero trust” and not e2ee. At the end of the day, once the masses start trusting a company they stop digging deep; they read the first couple of paragraphs of the details, if that. But some of us are always digging to find the weakest links in our security and privacy so we can strengthen them. So yeah, pretty stupid of them.
For a critical blog, the first few paragraphs sound a lot like they’re shilling for Proton.
I’m not sure if I’m supposed to be impressed by the author’s witty wording, but “the cool trick they do” is: full encryption.
Moving on.
But that’s misleading. The actual large language model is not open. The code for Proton’s bit of Lumo is not open source. The only open source bit that Proton’s made available is just some of Proton’s controls for the LLM. [GitHub]
In the single most damning thing I can say about Proton in 2025, the Proton GitHub repository has a “cursorrules” file. They’re vibe-coding their public systems. Much secure!
oof.
Over the years I’ve heard many people claim that Proton’s servers being in Switzerland makes them more secure than servers in EU countries. Well, there’s also this now:
Proton is moving its servers out of Switzerland to an unspecified EU country. The Lumo announcement is the first time Proton has mentioned this.
No company is safe from enshittification. Always look for, and base your choices on, the legally binding stuff before you commit. Be wary of weasel wording. And always, always be ready to move* on when the enshittification starts despite your caution.
* Regarding email, there are redirection services, a.k.a. eternal email addresses, in some cases run by venerable non-profits.
Over the years I’ve heard many people claim that Proton’s servers being in Switzerland makes them more secure than servers in EU countries
Things change. They are doing it because Switzerland is proposing legislation that would definitely make that claim untrue. Europe is no paradise either, especially certain countries, but the move still makes sense.
From the lumo announcement:
Lumo represents one of many investments Proton will be making before the end of the decade to ensure that Europe stays strong, independent, and technologically sovereign. Because of legal uncertainty around Swiss government proposals to introduce mass surveillance — proposals that have been outlawed in the EU — Proton is moving most of its physical infrastructure out of Switzerland. Lumo will be the first product to move.
This shift represents an investment of over €100 million into the EU proper. While we do not give up the fight for privacy in Switzerland (and will continue to fight proposals that we believe will be extremely damaging to the Swiss economy), Proton is also embracing Europe and helping to develop a sovereign EuroStack for the future of our home continent. Lumo is European, and proudly so, and here to serve everybody who cares about privacy and security worldwide.
Switzerland has a surveillance law in the works that will force VPNs, messaging apps, and online platforms to log users’ identities, IP addresses, and metadata for government access
Regarding the fact that Proton is to stop hosting in Switzerland: I thought it was because of new laws in Switzerland, and that they didn’t have much of a choice?
The law isn’t a law yet; it’s just a proposal. Proton is still in Switzerland, but they said they’re gonna move if the surveillance law actually passes.
Really? This article reads like AI slop reproducing Proton copy and then pivoting to undermine them with straight-up incorrect info.
You know how Microsoft manages to make LibreOffice throw errors on Windows 11? You really didn’t stop to think that Google might contract out some slop farms to shit on Proton?
deleted by creator
This was it for me, cancelled my account. Fuck this Andy moron
Well, I’m keeping mine. I’m actually very happy with it. This article is pure slop, with loads of disinformation and an evident lack of research. It looks like it was made with some AI bullshit and the writer didn’t even check what that thing vomited.
It was Snowball! He wrote the article! Must have been!
Who Proton??? Nooo come on… who could ever seen this coming? 🐸🍲
It can’t be that stupid, you must be prompting it wrong
Eat shit
Edit: is that a tag or something for the website? I still don’t like the sentiment (or the chatbot) but if it’s not something that came from Proton then I take back some of my vitriol
This is an anti-AI blog, that tagline is a joke.
I’m not familiar with this blog, so I can’t comment on their general stance, but this particular article seems balanced and fair. They point out questionable implementation practices on Proton’s side rather than criticising the AI itself.
He’s being sarcastic
Yeah I got there eventually