Model Encryption Protocol
Last Updated: Nov 7, 2025 | Version 1.2
What Is MEP?
The Model Encryption Protocol (MEP) encrypts AI model weights with a private key you control, so inference can run on any compute infrastructure without trusting the provider with your raw weights or data.
In simple terms: we scramble the model with a private key you control. The scrambled model can still process scrambled inputs and produce useful outputs, but the hosting provider never sees plaintext weights or data.
How It Works
1. Encrypt the Model Weights
Model parameters are permuted (scrambled) using your private encryption key.
What's a permutation? A reversible transformation—like shuffling a deck of cards in a specific pattern. With the key, you can unshuffle. Without it, the weights look like random noise.
The encrypted model is uploaded to compute infrastructure. Without your key, the scrambled weights reveal nothing about your IP.
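The scrambling step can be pictured as applying a key-derived permutation to the rows and columns of each weight matrix. The sketch below is an illustrative toy, not the actual MEP implementation: the function names are hypothetical, and a seeded PRNG stands in for a real key-derivation function.

```python
import numpy as np

def key_to_permutation(key: int, n: int) -> np.ndarray:
    # Illustrative only: a real system would derive this with a proper KDF,
    # not by seeding a PRNG with the key directly.
    rng = np.random.default_rng(key)
    return rng.permutation(n)

def encrypt_weights(W: np.ndarray, key: int) -> np.ndarray:
    # Permute rows and columns with key-derived permutations
    # (key + 1 is a hypothetical per-axis subkey).
    p_rows = key_to_permutation(key, W.shape[0])
    p_cols = key_to_permutation(key + 1, W.shape[1])
    return W[p_rows][:, p_cols]

W = np.arange(12.0).reshape(3, 4)   # toy "model layer"
W_enc = encrypt_weights(W, key=42)  # what gets uploaded to the provider
```

Only the scrambled matrix leaves your environment; for realistic layer sizes, recovering the original ordering without the key means searching an astronomically large permutation space.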
2. Transform Inputs
When you send a prompt, your client SDK encrypts it using the same key before it leaves your environment.
The cloud receives scrambled text, not your original prompt.
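Input scrambling can be pictured the same way: each token id is mapped through a key-derived permutation of the vocabulary before leaving the client. A toy sketch under the same assumptions (seeded PRNG standing in for a KDF; names and vocabulary size hypothetical):

```python
import numpy as np

VOCAB_SIZE = 100  # toy vocabulary for illustration

def encrypt_tokens(token_ids, key):
    # Map each token id through a key-derived permutation of the vocabulary.
    perm = np.random.default_rng(key).permutation(VOCAB_SIZE)
    return [int(perm[t]) for t in token_ids]

def decrypt_tokens(enc_ids, key):
    # Invert the same permutation client-side via argsort.
    inv = np.argsort(np.random.default_rng(key).permutation(VOCAB_SIZE))
    return [int(inv[t]) for t in enc_ids]

prompt_ids = [5, 17, 42]                      # hypothetical tokenized prompt
scrambled = encrypt_tokens(prompt_ids, key=7)  # what the cloud receives
recovered = decrypt_tokens(scrambled, key=7)   # round-trips on the client
```

The cloud only ever sees `scrambled`; the round trip back to `prompt_ids` happens entirely in your environment.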
3. Run Inference in Encrypted State
Here's the key insight: because inputs and model weights are permuted under the same key, the scrambled input stays aligned with the scrambled embedding space, so the model's operations compose correctly without ever unscrambling.
The model can execute standard operations (matrix multiplies, attention mechanisms) on encrypted inputs and produce encrypted outputs—all while staying in the scrambled state.
The GPU runs normal inference. No decryption happens on the cloud side.
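This symmetry can be checked directly for a linear layer: permute a weight matrix and its input consistently, run an ordinary matrix multiply, and the result is exactly the plaintext output, permuted. A minimal numpy sketch (random permutations standing in for key-derived ones):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))   # plaintext weights
x = rng.standard_normal(6)        # plaintext input

p_in = rng.permutation(6)         # key-derived input permutation (illustrative)
p_out = rng.permutation(4)        # key-derived output permutation (illustrative)

W_enc = W[p_out][:, p_in]         # scrambled weights, held by the provider
x_enc = x[p_in]                   # scrambled input, sent by the client

y_enc = W_enc @ x_enc             # ordinary matmul on scrambled data

# The provider's result equals the plaintext output permuted by p_out:
assert np.allclose(y_enc, (W @ x)[p_out])
```

Elementwise operations commute with permutations as well, which is part of why whole layers can stay in the scrambled state from input to output.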
4. Decrypt Output Client-Side
The encrypted output returns to your client. Your SDK uses the private key to reverse the permutation.
You get the plaintext answer. The provider only ever saw scrambled data.
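Client-side decryption is just applying the inverse permutation, which numpy's `argsort` provides. Toy values, assumed purely for illustration:

```python
import numpy as np

p_out = np.array([2, 0, 3, 1])              # key-derived output permutation (illustrative)
y_enc = np.array([30.0, 10.0, 40.0, 20.0])  # scrambled output from the provider

inv = np.argsort(p_out)  # inverse permutation, computed client-side from the key
y = y_enc[inv]           # plaintext output: [10., 20., 30., 40.]
```

Since `argsort` inverts any permutation, this step is a cheap array reindex; no cryptographic work happens at inference time on the client either.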
Example Flow
Your prompt:
After encryption (what the cloud sees):
Model output (still encrypted):
After decryption (your answer):
The cloud compute provider executed the inference but never saw your question or the answer in plaintext.
Performance & Cost
Runtime impact:
Negligible. Encrypted models run at essentially the same speed as unencrypted models.
Cost impact:
No additional cost per inference. You pay standard compute rates.
Output quality:
Identical to unencrypted models. There is no degradation in response quality; outputs match the baseline model one-to-one.
Where the overhead is:
Deployment. Encrypting the model is a one-time operation that takes roughly 10-30 minutes or more, depending on model size. After that, inference is transparent.
Comparable Solutions
Fully Homomorphic Encryption (FHE)
What it is: Computation on encrypted data without ever decrypting.
Security: Stronger theoretical guarantees than MEP. Data never exists in plaintext during compute.
The problem: Not yet practical for production AI. FHE adds 100-1000x overhead in speed and cost, making large language models unusably slow.
MEP vs FHE: MEP trades some theoretical security for practical usability. We run at production speed today while FHE remains a research-stage technology for neural networks.
Secure Enclaves (TEEs)
What it is: Hardware-isolated environment (Intel SGX, AMD SEV, etc.) that processes data in a "trusted" region.
Security: Data is decrypted inside the enclave for processing.
The problem: You still have to trust the party managing the enclave. Your plaintext data exists inside their infrastructure—you're trusting hardware and the operator.
MEP vs Enclaves: MEP keeps data encrypted end-to-end. Nothing is decrypted on the provider's infrastructure. You don't need to trust their hardware or their operations—only mathematics and your private keys.
Enclaves are useful technology, but they don't eliminate the trust requirement. MEP does.
Our Approach
MEP is in an early research state but is already practical and useful today.
There are still potential vulnerabilities in this version that we are actively addressing, and we're building iterations that will increase security and usability. However, unlike homomorphic encryption (which remains impractical for production AI), our solution is designed to be practical, useful, and scalable from day one.
We're developing additional innovations around MEP that bring us closer to our goal: making private, sovereign AI accessible to everyone.
This is the first step, not the final form. The protocol will evolve, security will strengthen, and capabilities will expand—but you can use it in production today.
Technical Notes
- Encryption method: Private-key encryption with structure-preserving permutations.
- Key management: Keys remain under your control and are never uploaded to Covenant servers or compute providers.
- Supported models: Transformer-based architectures (text only for now; multimodal support is planned).
- Compatibility: A near one-to-one drop-in replacement for the OpenAI API.
Questions?
Contact: will@covenantlabs.ai

