So I was reading this article about Signal creator Moxie Marlinspike’s new project, Confer, which claims to be a verifiably E2E encrypted LLM chat service. There are a couple of short blog articles that give the gist of it, and some GitHub repos, including this one, which contains scripts for producing the VM that will run your particular LLM session. But if I’m following this correctly, it implies that every chat session (or perhaps every logged-in user) would get its own VM running its own LLM to keep the chain of trust complete. That seems impossible from a scalability perspective, since even small LLMs require huge quantities of RAM and compute (rough numbers below). Did I miss something fundamental here?
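
Back-of-the-envelope for why this worries me. The model size and session count are my own assumptions, not anything from the Confer docs:

```python
# Rough memory cost if every session really got its own copy of the model.
# Assumptions (mine): a small-ish 7B-parameter model served at fp16,
# and 10,000 concurrent sessions as an arbitrary example load.
params = 7e9                   # 7B parameters
bytes_per_param = 2            # fp16
weights_gb = params * bytes_per_param / 1e9        # ~14 GB of weights per copy

concurrent_sessions = 10_000   # hypothetical
total_tb = weights_gb * concurrent_sessions / 1e3  # ~140 TB just for weights

print(f"~{weights_gb:.0f} GB per session, ~{total_tb:.0f} TB across {concurrent_sessions} sessions")
```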


On the topic of private LLM chat bots, what is your opinion on Lumo? https://lumo.proton.me/about
I think it’s a pragmatic approach to a difficult problem. You’re still trusting that Proton is doing what they claim by not logging any of the data, and it lacks the verifiable trust chain of the VM in this Confer system (which, in theory, lets you audit the code to confirm there is no logging, then check the cryptographic hash of the VM running your LLM conversation to confirm it really is running that code). But if you trust Proton, that step isn’t as important. Otherwise the approaches look fairly similar: Proton uses PGP for the in-flight encryption to the LLM, while Confer uses… maybe PGP too, I’m not sure, but a similar approach. And as others have said here, LLMs are stateless, so if you can trust that the platform isn’t logging the requests, there should be no record of your conversations.
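
To make the “check the crypto hash of the VM” step concrete, here’s roughly what I picture the client-side check looking like. The file name and how the attested hash is delivered are my guesses, not Confer’s actual API:

```python
# Sketch of the verification step as I understand it (not Confer's real code):
# build the VM image yourself from the audited source, hash it, and compare
# against whatever hash the running VM attests to.
import hashlib

def sha256_of_file(path: str) -> str:
    """Hash a file in chunks so a large VM image doesn't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hash of the image you built locally from the audited source (hypothetical path).
local_hash = sha256_of_file("confer-vm.img")

# Hash the service attests it is running (obtained out of band; unspecified here).
attested_hash = "..."

if local_hash == attested_hash:
    print("Running VM matches the audited build")
else:
    print("Mismatch: the running VM is not the code you audited")
```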