AI is here – and because it’s here, we must use it. But that doesn’t mean we trust it.
One of the biggest concerns is data leakage – that the intellectual property and PII we use in our local enterprise model might leak back to the AI model provider, and from there, onward to other external users of the model.
DataKrypto has launched a solution: FHEnom for AI. FHEnom is the firm's fully homomorphic encryption (FHE) technology, and possibly the only current FHE implementation able to operate at near real-time speeds, a level of performance essential for AI applications.

FHEnom for AI is a zero-knowledge framework designed to protect customized and proprietary AI models. Central to the solution is the use of a trusted execution environment (TEE). DataKrypto modifies the model by placing its tokenizer and its embedding layer inside the TEE. Enterprise data is encrypted (the key is also kept safe within the TEE), and the model is trained on the encrypted data without ever seeing the cleartext company data.
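A rough sketch of that architecture is given below. The class and method names are purely illustrative (DataKrypto has not published its interfaces), and simple additive masking stands in for real FHE ciphertexts; the point is only that tokenization, embedding, and the key live inside the enclave, and only ciphertext ever leaves it.

```python
# Conceptual sketch only: the real FHEnom scheme and TEE interfaces are not public,
# so TrustedEnclave and its methods are hypothetical stand-ins.
import numpy as np

class TrustedEnclave:
    """Stand-in for a hardware TEE holding the tokenizer, embedding layer, and key."""
    def __init__(self, vocab, embedding_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.vocab = {tok: i for i, tok in enumerate(vocab)}
        self.embeddings = rng.normal(size=(len(vocab), embedding_dim))
        self._key = rng.normal(size=embedding_dim)   # the key never leaves the enclave

    def tokenize_and_encrypt(self, text):
        """Tokenize and embed inside the enclave; only ciphertext is returned."""
        ids = [self.vocab[t] for t in text.lower().split() if t in self.vocab]
        vecs = self.embeddings[ids]
        # Placeholder "encryption": additive masking with the enclave key.
        # Real FHE ciphertexts additionally support computation; this only shows the flow.
        return vecs + self._key

# Outside the enclave, training and inference only ever see ciphertext embeddings.
enclave = TrustedEnclave(vocab=["acme", "quarterly", "revenue", "rose"])
ciphertext = enclave.tokenize_and_encrypt("acme quarterly revenue rose")
print(ciphertext.shape)   # (4, 8): encrypted vectors, meaningless without the enclave key
```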
The user’s PII is also protected while querying the AI. The user connects to the TEE and receives an encryption key, which is used to encrypt the query before it is delivered to the AI. Because the encryption is homomorphic, the AI responds accurately to the query without ever seeing the user’s question or its own answer. The answer is decrypted before delivery to the user, so the user experience is unaffected by the cryptographic trickery that prevents company and personal data from being seen by the AI provider.
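That query path could be sketched roughly as follows. Every name here is hypothetical and a trivial XOR placeholder stands in for FHE; the sketch shows only where the key, the ciphertext, and the decryption sit in the flow.

```python
# Hypothetical sketch of the query path; no real FHE library is used here. A trivial
# XOR placeholder stands in for FHEnom, and all function names are illustrative.
import secrets

def tee_issue_session_key() -> bytes:
    """Step 1: the user connects to the TEE and receives an encryption key."""
    return secrets.token_bytes(32)

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Placeholder symmetric cipher (NOT real FHE); the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def ai_provider_compute(encrypted_query: bytes) -> bytes:
    """Step 3: in the real scheme the model computes homomorphically on the ciphertext.
    This toy simply echoes the blob, since the provider only ever handles ciphertext."""
    return encrypted_query

question = b"What was Q3 revenue for project X?"
session_key = tee_issue_session_key()                       # step 1: key from the TEE
encrypted_query = xor_crypt(question, session_key)          # step 2: encrypted before it leaves
encrypted_answer = ai_provider_compute(encrypted_query)     # step 3: provider sees only ciphertext
assert encrypted_query != question                          # the plaintext question is never exposed
answer_for_user = xor_crypt(encrypted_answer, session_key)  # step 4: decrypted before delivery
assert answer_for_user == question                          # echo example: round trip recovers plaintext
```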
The TEE itself will generally be provided by a third party. “In our scheme,” explains Luigi Caramico, CTO and founder at DataKrypto, “it is a third party provider supplying the TEE that interfaces with the AI. Let’s say you have company xyz and the provider is, for example, AWS.” In this case, the xyz user connects to AWS, which connects to the AI. “You have a decoupling between the AI provider and the user through a disinterested third party who doesn’t even see the question,” he continues.
AI poisoning – attempting to interfere with the training process in order to skew the results – is also prevented. Without access to the TEE, a malicious actor cannot access the key and cannot provide training data. “Without access to the key, you cannot train the AI,” explains Caramico. “You cannot fine tune the AI because it will not understand what you send it. If you block access to the TEE, there is no possibility of poisoning the AI itself.”
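One way to picture that gate: the training pipeline accepts only data sealed under the enclave-held key. The HMAC check below is a hypothetical stand-in for whatever integrity mechanism the real system applies to encrypted training data, not DataKrypto's actual design.

```python
# Hypothetical illustration of why key access gates training: only batches
# authenticated under the enclave-held key are accepted. HMAC stands in here for
# the real system's (unpublished) integrity mechanism.
import hmac, hashlib

ENCLAVE_KEY = b"held-inside-the-TEE"   # never exposed outside the enclave

def seal_batch(batch: bytes, key: bytes) -> tuple[bytes, bytes]:
    """Performed inside the TEE: attach an authentication tag to an encrypted batch."""
    return batch, hmac.new(key, batch, hashlib.sha256).digest()

def accept_for_training(batch: bytes, tag: bytes) -> bool:
    """Training pipeline check: reject anything not sealed with the enclave key."""
    expected = hmac.new(ENCLAVE_KEY, batch, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

good_batch, good_tag = seal_batch(b"legitimate encrypted embeddings", ENCLAVE_KEY)
poisoned = (b"attacker-crafted samples", b"\x00" * 32)   # attacker lacks the key

print(accept_for_training(good_batch, good_tag))   # True
print(accept_for_training(*poisoned))              # False: cannot poison without the key
```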
“FHE enables direct computation on encrypted embeddings, keeping both model weights and user data protected in ciphertext throughout processing,” says the firm. “TEEs provide hardware-enforced isolation for secure tokenization and output within a cryptographically verified enclave.”
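The principle of computing directly on ciphertext can be illustrated with a far simpler toy: textbook unpadded RSA, which is homomorphic for multiplication only. FHEnom's actual fully homomorphic scheme is not public, but the idea of operating on ciphertexts and decrypting the result is the same.

```python
# Toy demonstration of the homomorphic property using textbook (unpadded) RSA, which
# is multiplicatively homomorphic. A fully homomorphic scheme like FHEnom's supports
# both addition and multiplication; this only shows "compute on ciphertexts, decrypt".
p, q = 61, 53
n = p * q                      # modulus 3233
e, d = 17, 2753                # public / private exponents for this classic textbook example

def enc(m: int) -> int: return pow(m, e, n)
def dec(c: int) -> int: return pow(c, d, n)

c1, c2 = enc(7), enc(6)
c_prod = (c1 * c2) % n         # multiply the ciphertexts only; plaintexts are never seen
assert dec(c_prod) == 7 * 6    # decrypting yields 42, computed entirely in ciphertext
print(dec(c_prod))
```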
The model cannot be attacked from outside the TEE (preventing data poisoning), while the encrypted embeddings within the model are meaningless to any third party. Even if the local model leaks data to the model provider, that data would effectively be meaningless garbage.
Authorized users can still query the AI and get meaningful results, so it is important to note that this is not an enterprise-wide data protection system: it is an AI data leakage solution. Attackers could still steal company data before it reaches the TEE for tokenization, so standard data security remains necessary (such as adequate access control for users querying the model, and perhaps encryption of the corporate data files; the firm’s FHEnom encryption is an option here but is not required for FHEnom for AI).
Data leakage to the model provider is prevented, and even if the entire model is stolen, nothing is lost. “We address AI’s three core vulnerabilities – model security, data confidentiality, and integrity assurance – with FHE that’s fast enough to keep pace with real-time AI workflows,” said Caramico. “Speed is critical for AI’s massive data volume scale and extreme latency sensitivity. In an era where nanoseconds matter, FHEnom for AI ensures encrypted protection for models and data, delivering the scale and speed needed to fuel innovation.”
It addresses what is perhaps the biggest problem currently delaying enterprise uptake of AI: how to train a model on the company’s own intellectual property without any risk of losing that intellectual property or leaking PII.
Related: Knostic Secures $11 Million to Rein in Enterprise AI Data Leakage, Oversharing
Related: Singulr Launches With $10M in Funding for AI Security and Governance Platform
Related: Pangea Launches AI Guard and Prompt Guard to Combat Gen-AI Security Risks
Related: Cisco Unveils New AI Application Security Solution