How It Works, End-to-End

From the first customer message to the final AI-generated answer, every step is encrypted, audited, and compliant — without changing how your agents work.

Step 1 of 5 · Client Chat

"I want to purchase car insurance for a 2024 Toyota Corolla. My ID: 123456, phone: +974... Can you help?"

PII detected locally

Customer sends a natural-language request in your existing chat, app, or CRM.

Step 2 of 5 · AI Private Layer

"I want to purchase car insurance for a 2024 Toyota Corolla. My ID: [PII_ID_1], phone: [PII_PHONE_1]."

🔒
[PII_ID_1] → <cipher>
[PII_PHONE_1] → <cipher>

AI Private Layer locates PII, encrypts it, and replaces each fragment with an immutable token — before anything leaves your environment.

Step 3 of 5 · Gemini

Request: "I want to purchase car insurance for a 2024 Toyota Corolla. My ID: [PII_ID_1], phone: [PII_PHONE_1]."

"We can help you purchase car insurance for your 2024 Toyota Corolla. Please confirm that the ID [PII_ID_1] and phone [PII_PHONE_1] are correct."

Gemini (or any LLM) only sees tokens — never raw personal data.

Step 4 of 5 · AI Private Layer

"We can help you purchase car insurance for your 2024 Toyota Corolla. Please confirm that the ID [PII_ID_1] and phone [PII_PHONE_1] are correct."

🔓
[PII_ID_1] → 123456
[PII_PHONE_1] → +974...

AI Private Layer decrypts the necessary fields, reassembles the full answer, and enforces any final masking rules you configure.

Step 5 of 5 · Client Chat

"We can help you purchase car insurance for your 2024 Toyota Corolla. Please confirm that the ID 123456 and phone +974... are correct."

Your customer or agent gets a natural, complete answer — with all compliance and security handled behind the scenes.

From Raw Text to Safe AI Response

AI Private Layer sits between your systems and AI providers as a privacy firewall. It analyzes every request, detects and encrypts PII, sends only protected content to AI models, then safely reconstructs responses. Below is the end-to-end flow.

Stage 1 — Ingestion

Your App / Client Chat

What happens:

  • Your chat, CRM, or backend sends a text payload to AI Private Layer via API (see the sketch below).
  • The payload may contain names, emails, IDs, policy numbers, payment details, etc.

Key guarantees:

  • Only your backend talks to AI Private Layer.
  • Connection secured via TLS and API keys.
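
For illustration, a minimal ingestion call from your backend might look like the sketch below. The endpoint URL, auth header, and payload fields are assumptions for this example, not a documented contract; substitute the values from your own deployment.

    # Stage 1 sketch: the backend sends a raw text payload over TLS with an
    # API key. URL, header, and JSON fields are illustrative placeholders.
    import requests

    API_URL = "https://private-layer.internal/v1/analyze"  # hypothetical endpoint
    API_KEY = "your-tenant-api-key"

    payload = {
        "text": ("I want to purchase car insurance for a 2024 Toyota Corolla. "
                 "My ID: 123456, phone: +974 5555 0000."),  # made-up sample PII
        "channel": "client-chat",
    }

    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())  # tokenized text plus token metadata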

Stage 2 — Detection & Encryption

AI Private Layer

What happens:

  • We run PII detection (NER models + country-specific regex).
  • Detected PII fragments are encrypted using strong symmetric encryption (e.g., AES-256-GCM) with keys stored in your KMS or HSM.
  • Each encrypted fragment is replaced with a stable token: [PII_EMAIL_1], [PII_PHONE_3], etc. (see the sketch below).

Key guarantees:

  • Raw PII never leaves your environment.
  • Encrypted bundles can be stored or logged only on your side (Client-Held Bundles).
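
A toy version of this stage is sketched below: a simple regex stands in for the NER models, AES-256-GCM from the cryptography library encrypts each fragment, and a stable token takes its place. The patterns, helper name, and in-process key are illustrative; in production the key lives in your KMS or HSM.

    # Stage 2 sketch: detect PII (regex stand-in for NER), encrypt each
    # fragment with AES-256-GCM, and substitute a stable [PII_<TYPE>_<n>] token.
    import os
    import re
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # stand-in for a KMS-managed key
    aesgcm = AESGCM(key)

    PATTERNS = {
        "ID": re.compile(r"(?<=ID: )\d{4,}"),
        "PHONE": re.compile(r"\+\d[\d\s\-]{6,}\d"),
    }

    def tokenize(text):
        """Return (tokenized_text, bundle); the bundle never leaves your side."""
        bundle = {}    # token -> (nonce, ciphertext): the Client-Held Bundle
        counters = {}
        def replacer(kind):
            def inner(match):
                counters[kind] = counters.get(kind, 0) + 1
                token = f"[PII_{kind}_{counters[kind]}]"
                nonce = os.urandom(12)  # unique 96-bit GCM nonce per fragment
                bundle[token] = (nonce, aesgcm.encrypt(nonce, match.group(0).encode(), None))
                return token
            return inner
        for kind, pattern in PATTERNS.items():
            text = pattern.sub(replacer(kind), text)
        return text, bundle

    masked, bundle = tokenize("My ID: 123456, phone: +974 5555 0000.")
    # masked == "My ID: [PII_ID_1], phone: [PII_PHONE_1]."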

Stage 3 — AI Processing

Gemini / OpenAI / LLMs

What happens:

  • The tokenized text is sent to the selected AI model (Gemini, OpenAI, Claude, or your own LLM).
  • The model treats tokens as regular text placeholders and generates an answer that also contains these tokens (see the sketch below).

Key guarantees:

  • Cloud provider never sees the original PII.
  • Any logs on their side contain tokens, not personal data.
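
The round trip to the model is plain text in, plain text out, so the tokens need no special handling. Below is a sketch using the openai SDK; the model name and system prompt are our choices for the example, and any provider client or raw HTTP call works the same way.

    # Stage 3 sketch: only tokenized text reaches the provider; tokens are
    # ordinary strings to the model.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    tokenized = ("I want to purchase car insurance for a 2024 Toyota Corolla. "
                 "My ID: [PII_ID_1], phone: [PII_PHONE_1].")

    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; Gemini, Claude, or a local LLM also work
        messages=[
            {"role": "system",
             "content": "Placeholders like [PII_ID_1] are opaque tokens. "
                        "Echo them verbatim and never guess their contents."},
            {"role": "user", "content": tokenized},
        ],
    )
    answer = completion.choices[0].message.content  # still contains the tokens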

Stage 4 — Reassembly & Delivery

AI Private Layer

What happens:

  • AI Private Layer receives the model answer.
  • Tokens are mapped back to encrypted bundles and selectively decrypted.
  • Final response is rebuilt and optionally re-masked according to your policy, e.g., show the phone partially, hide the ID completely (see the sketch below).
  • The answer is sent back to your system.

Key guarantees:

  • Full control over what is revealed to agents and customers.
  • Clear audit trail of what was detected, encrypted, and returned.
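
Continuing the Stage 2 sketch, reassembly can be a token-to-bundle lookup plus a masking policy applied at decryption time. The policy rules below (partial phone, hidden ID) mirror the example in the bullet above and are illustrative, not built-in defaults.

    # Stage 4 sketch: decrypt each token from the Client-Held Bundle, then
    # apply a per-type masking policy before the text reaches agents or users.
    import re

    TOKEN_RE = re.compile(r"\[PII_([A-Z]+)_\d+\]")

    POLICY = {
        "PHONE": lambda v: v[:4] + "..." if len(v) > 4 else v,  # show prefix only
        "ID": lambda v: "[hidden]",                             # never reveal IDs
    }

    def detokenize(answer, bundle, aesgcm):
        """Rebuild the model answer, revealing each field per policy."""
        def reveal(match):
            token, kind = match.group(0), match.group(1)
            nonce, ciphertext = bundle[token]
            value = aesgcm.decrypt(nonce, ciphertext, None).decode()
            return POLICY.get(kind, lambda v: v)(value)
        return TOKEN_RE.sub(reveal, answer)

    # detokenize(answer, bundle, aesgcm)
    # -> "... the ID [hidden] and phone +974... are correct."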

Technical Notes (for developers):

  • Supports REST APIs, webhooks, and message queues (Kafka/RabbitMQ) for async flows (see the sketch below).
  • Works with multi-tenant setups (per-tenant models, regex, and keys).
  • Pluggable PII engines (regex, GLiNER, custom models).
  • Deployed on-prem, in your VPC, or in-country cloud to meet data residency.
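
For the async case, the same tokenize step can sit behind a queue consumer. A minimal kafka-python sketch follows; the topic names are hypothetical and tokenize() is the illustrative helper from the Stage 2 sketch.

    # Async sketch with kafka-python: consume raw chat messages, tokenize,
    # and republish; only protected content flows downstream.
    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer("chat.raw", bootstrap_servers="localhost:9092")
    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    for message in consumer:
        masked, bundle = tokenize(message.value.decode())  # Stage 2 helper
        # the bundle stays on your side (Client-Held Bundles)
        producer.send("chat.tokenized", masked.encode())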

End-to-End Data Flow

See how your data flows securely through AI Private Layer, from customer input to AI processing and back.

[Diagram: User Data → AI Private Layer (Processing Servers) → OpenAI / Claude / Gemini → Secure Output]

More than Masking: A Full Privacy Firewall for AI

AI Private Layer is not just a masking service. It is an extensible privacy firewall for AI that adapts to your channels, models, and compliance requirements. Here are some of the advanced capabilities you can enable.

🎤 Voice Support

Goal: Ensure secure streaming of voice content with embedded PII.

Integrates Speech-to-Text (STT) and encrypts PII in transcripts before the text reaches any model. Links with Text-to-Speech (TTS) to speak answers after contextual decryption. Supports encrypted token streaming so masked entities stay protected in transit. Works with local or self-hosted engines such as Whisper, Silero, or Vosk (see the sketch below).

Architecture: Voice Input → Edge Encryptor (PII masking in transcript) → Private Layer → LLM → Voice Output (TTS)

Use Cases: Secured call centers handling sensitive customer conversations. Voice assistants for insurance and finance workflows. Voice QA bots operating entirely on private infrastructure.
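
As a rough illustration of that path, a self-hosted Whisper model can produce the transcript locally, after which the same detect-encrypt-tokenize pass masks PII before any LLM sees the text. The file name is hypothetical and tokenize() is the illustrative Stage 2 helper, not a shipped API.

    # Voice sketch: transcribe on local hardware with openai-whisper, then
    # mask PII in the transcript before it leaves your environment.
    import whisper

    model = whisper.load_model("base")               # runs fully locally
    result = model.transcribe("call_recording.wav")  # hypothetical audio file
    masked_transcript, bundle = tokenize(result["text"])
    # masked_transcript is safe to forward to any LLM; the bundle stays local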

📄 File Layer

Goal: Extend PII protection to documents and uploaded assets.

File Parser module extracts text and metadata across PDF, DOCX, TXT, CSV, and more. Dedicated PII Detector inspects file content before any external sharing. Encrypts sensitive fragments and injects tokens like [PII_1] directly in the document body. Handles batch pipelines and upload APIs, ready for RAG integrations without leaks (see the sketch below).

Sample Flow: File Upload → Extractor → Encrypt/Tokenize → Policy Engine → AI/LLM → Safe Summary

Benefits: Automated scrubbing of confidential data across document repositories. Easy connections to corporate DMS platforms (SharePoint, Google Drive, MinIO, etc.).
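
For a flavor of the file flow, the sketch below extracts text with pypdf and runs it through the same illustrative tokenize helper before anything external sees it; the file name and wiring are assumptions for the example.

    # File-layer sketch: extract text from a PDF, then detect, encrypt, and
    # tokenize PII before the content feeds a RAG index or an LLM summary.
    from pypdf import PdfReader

    reader = PdfReader("policy_application.pdf")  # hypothetical upload
    raw_text = "\n".join(page.extract_text() or "" for page in reader.pages)

    safe_text, bundle = tokenize(raw_text)  # illustrative Stage 2 helper
    # safe_text is what gets handed to the AI/LLM stage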

🌐 Browser Extension

Goal: Put end users in control of PII security inside the browser.

Chrome/Edge/Firefox extension intercepts prompts before they reach LLM endpoints. Performs local encryption and tokenization of sensitive input for services like ChatGPT, Gemini, or Copilot. Supports enterprise rules with tenant-scoped keys and domain allowlists. Stores client-held key bundles in a protected local vault.

Example Path: User Input (browser) → Local Encryptor → AI service → Local Decryptor

Value: Transforms Private Layer into a user-level privacy shield. Unlocks B2B2C coverage for agents, brokers, and internal advisors.

⚙️ Unified Admin Panel

Goal: Centralized management and monitoring for all security modules.

Configuration pages for Voice, File, and Browser security modules. Live monitoring for uploads, audio streams, and policy enforcement. Dedicated API keys and quota controls per content type.

Ready to see your own workflow secured?

Share an example of your chat, CRM, or back-office flow — we will show you how AI Private Layer can protect it end-to-end in under 30 minutes.

Get in Touch

Have questions? Want to see a demo? Our team is here to help you secure your AI workflows.

Email: support@private-layer.ai