Ask HN: End-to-end encrypted LLM chat (open- and closed-model)
I'm exploring a software layer, analogous to public/private-key cryptography, that lets a user converse with an LLM while prompts and responses remain unreadable to all intermediaries, including the model host. (I mean encrypted in the strict cryptographic sense, not just in a loose "private" sense.)
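To make the public/private-key analogy concrete, here is a minimal "sealed prompt" sketch in Python using the pyca/cryptography library. It assumes the decryption key lives only inside some trusted boundary (e.g., an attested TEE); the helper names (seal_prompt, open_prompt) and key handling are illustrative, not a real API.

  # Minimal sketch (assumed names): the client hybrid-encrypts each prompt to a
  # public key whose private half lives only inside the trusted boundary
  # (e.g., an attested TEE). X25519 ECDH with an ephemeral key, HKDF, AES-GCM.
  import os
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric.x25519 import (
      X25519PrivateKey, X25519PublicKey)
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF
  from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

  def seal_prompt(prompt: str, enclave_pub: X25519PublicKey) -> dict:
      """Encrypt a prompt so only the holder of the enclave private key can read it."""
      eph_priv = X25519PrivateKey.generate()      # fresh ephemeral key per message
      shared = eph_priv.exchange(enclave_pub)     # ECDH shared secret
      key = HKDF(algorithm=hashes.SHA256(), length=32,
                 salt=None, info=b"prompt-seal-v1").derive(shared)
      nonce = os.urandom(12)
      return {
          "eph_pub": eph_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw),
          "nonce": nonce,
          "ciphertext": AESGCM(key).encrypt(nonce, prompt.encode(), None),
      }

  def open_prompt(sealed: dict, enclave_priv: X25519PrivateKey) -> str:
      """Runs only inside the trusted boundary: recover the plaintext prompt."""
      shared = enclave_priv.exchange(X25519PublicKey.from_public_bytes(sealed["eph_pub"]))
      key = HKDF(algorithm=hashes.SHA256(), length=32,
                 salt=None, info=b"prompt-seal-v1").derive(shared)
      return AESGCM(key).decrypt(sealed["nonce"], sealed["ciphertext"], None).decode()

  # The keypair would be generated inside the TEE and its public half shipped to
  # the client only after remote attestation (attestation check omitted here).
  enclave_priv = X25519PrivateKey.generate()
  sealed = seal_prompt("my confidential prompt", enclave_priv.public_key())
  assert open_prompt(sealed, enclave_priv) == "my confidential prompt"

This only protects the path into the trusted boundary; whatever runs the model inside it still sees plaintext, which is why the two cases below differ.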
Two cases:

1. Open-weights model: ensure the operator still can't read prompts/responses.

2. Closed, hosted model: true E2EE, so even the provider can't inspect content.
Topics we can discuss:

- Best near-term path: TEEs with attestation, FHE/HE, MPC/split inference, PIR for retrieval, differential privacy, or hybrids?

- How to handle key exchange/rotation for forward secrecy? (See the sketch after this list.)

- Practical performance/accuracy limits (e.g., non-linearities, KV-cache, streaming)?

- Minimal viable architecture and realistic threat model?

- Any prior art or teams you'd point me to?
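On the key exchange/rotation question, here is a minimal forward-secrecy sketch: a symmetric chain-key ratchet (Signal-style, heavily simplified; all names are mine, not an established protocol). Each turn's message key is derived from the current chain key, then the chain key is advanced and the old one erased, so a later key compromise can't decrypt earlier prompts/responses.

  # Forward-secrecy sketch (assumed design, not a full protocol): an HMAC-based
  # chain-key ratchet. The root chain key would come from an authenticated key
  # exchange with the attested host (e.g., the X25519 exchange above).
  import hashlib
  import hmac

  def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
      """Derive a one-time message key and the next chain key from the current chain key."""
      message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
      next_chain = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
      return message_key, next_chain

  chain_key = hashlib.sha256(b"handshake output placeholder").digest()  # stand-in root key
  for turn in range(3):
      # msg_key encrypts one prompt/response pair, then is discarded along with
      # the old chain key; only the advanced chain key is retained.
      msg_key, chain_key = ratchet(chain_key)
      print(f"turn {turn}: message key {msg_key.hex()[:16]}...")

A real design would also mix in fresh DH exchanges periodically (a full double ratchet) for post-compromise security; how often to rotate, and who holds which keys in the closed-model case, is exactly the open question.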
Please DM if you are interested in working with me.
Sounds interesting. What's the use case, though? I can't imagine there is a large demand for this.