
AI that knows what each user is allowed to see — and proves it.

The most common reason enterprise AI projects stall is not model quality. It is that no one can prove the AI is honouring document permissions. TeamSync solves this by making permissions a property of the platform the AI runs on — not a policy the AI is asked to remember.

Talk to a security solutions engineer · See how TeamSync handles AI for the CISO


The problem most enterprise AI quietly has.

A user asks an AI assistant about a document. The assistant retrieves the document, summarises it, and returns the answer — including content the user does not have permission to read in the source system.

This happens because most AI integrations work by indexing content into a separate vector store, then querying that store. The original ACLs never travel with the chunks. The model becomes a blind spot in your access-control model — and the regulator notices.

The fix is not "ask the AI nicely". The fix is to scope every retrieval to the requesting user's actual permissions, at request time, every time.
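Request-time scoping can be sketched in a few lines. This is a minimal illustration with hypothetical names (Chunk, PermissionEngine, retrieve are inventions for this sketch, not TeamSync's API): the point is that the permission check runs against live ACLs inside every retrieval, never against a snapshot baked into the index.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str
    text: str

class PermissionEngine:
    """Stands in for the platform's live access-control evaluation."""
    def __init__(self, acls: dict[str, set[str]]):
        self._acls = acls  # doc_id -> set of users allowed to read it

    def can_read(self, user: str, doc_id: str) -> bool:
        return user in self._acls.get(doc_id, set())

def retrieve(user: str, query: str, index: list[Chunk],
             engine: PermissionEngine) -> list[Chunk]:
    # Naive keyword match stands in for vector search; the important
    # line is the permission filter, applied at request time, per user.
    hits = [c for c in index if query.lower() in c.text.lower()]
    return [c for c in hits if engine.can_read(user, c.doc_id)]

engine = PermissionEngine({"doc-1": {"alice"}, "doc-2": {"alice", "bob"}})
index = [Chunk("doc-1", "Q3 salary review"), Chunk("doc-2", "Q3 roadmap review")]

print([c.doc_id for c in retrieve("bob", "review", index, engine)])  # ['doc-2']
```

Both documents match the query, but the model only ever sees the chunk the requesting user is allowed to read.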


How TeamSync does it.

The AI calls the same permission engine that governs the document.

Whether the user opens the document directly, searches across the corpus, asks DocuTalk a natural-language question, or invokes an agentic workflow — the same access-control evaluation happens. The model never receives chunks the user is not allowed to see.

Permissions are evaluated per request, not per session.

A change to the user's group membership at 9:31 takes effect at 9:31. A document re-classification at 11:42 takes effect at 11:42. There is no AI cache to invalidate, no separate index to re-permission, no nightly job.
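The difference between per-request and per-session evaluation is easy to show. In this toy sketch (hypothetical data structures, not TeamSync's engine), the ACL lookup happens inside every call against live membership state, so a group change between two requests is reflected in the second answer with no cached verdict to expire:

```python
acls = {"doc-7": {"finance-group"}}
groups = {"alice": {"eng-group"}}

def can_read(user: str, doc_id: str) -> bool:
    # Evaluated fresh on every request, against live group membership.
    return bool(groups.get(user, set()) & acls.get(doc_id, set()))

print(can_read("alice", "doc-7"))     # False -- the 9:30 request

groups["alice"].add("finance-group")  # membership change at 9:31

print(can_read("alice", "doc-7"))     # True -- the 9:31 request, no cache to invalidate
```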

Every answer carries a citation and an evidence card.

The answer in front of the user is grounded — every claim is linked to the document chunk it came from, with a click-through. Behind the scenes, an evidence card records the model version, the prompt, the retrieved chunks, the reasoning trace, and the human-checkpoint outcome. Exportable.
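One plausible shape for such a record, sketched as a Python dataclass (the field names and values here are illustrative assumptions, not TeamSync's real schema):

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class EvidenceCard:
    model_version: str
    prompt: str
    retrieved_chunks: list[str]       # ids of the chunks the model saw
    reasoning_trace: list[str]
    checkpoint_outcome: str           # e.g. "approved-by:j.doe"
    citations: dict[str, str] = field(default_factory=dict)  # claim -> chunk id

    def export(self) -> str:
        # "Exportable": the whole card serialises to JSON for auditors.
        return json.dumps(asdict(self), indent=2)

card = EvidenceCard(
    model_version="model-2025-06",
    prompt="Summarise the retention policy",
    retrieved_chunks=["doc-2#c4"],
    reasoning_trace=["retrieved 1 chunk", "grounded summary emitted"],
    checkpoint_outcome="approved-by:j.doe",
    citations={"Retention is 7 years": "doc-2#c4"},
)
print(card.export())
```

Because every claim maps to a chunk id, the click-through citation in the UI and the auditor's export are views of the same record.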

Every AI event is anchored in the audit ledger.

A Merkle hash chain records the request, the retrieval, the answer, and the evidence card. The chain is cross-attested across regions and witness nodes. Tamper attempts break it visibly at root verification.
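The tamper-evidence property comes from chaining: each entry's hash covers its payload plus the previous hash, so editing any past entry changes every hash after it. A minimal sketch of that core idea (a plain hash chain; the Merkle-tree batching and cross-region attestation layers are omitted, and none of this is TeamSync's actual ledger format):

```python
import hashlib

def entry_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(events: list[str]) -> list[str]:
    hashes, prev = [], "genesis"
    for payload in events:
        prev = entry_hash(prev, payload)
        hashes.append(prev)
    return hashes

def verify(events: list[str], hashes: list[str]) -> bool:
    # Recompute the chain from the raw events and compare to the ledger.
    return build_chain(events) == hashes

events = ["request", "retrieval", "answer", "evidence-card"]
hashes = build_chain(events)
print(verify(events, hashes))     # True

events[1] = "retrieval-tampered"  # rewrite a past event
print(verify(events, hashes))     # False -- the break surfaces at verification
```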

Customer content is never used to train models.

The corpus stays in the customer tenancy. Models call the corpus at inference. There is no second use, no training corpus contribution, no model improvement at customer cost. Contractual and architectural.


What this means for the four people who have to approve AI.

  • CISO — wants proof that the AI cannot return content the user shouldn't see. TeamSync gives per-request permission evaluation, a cryptographic audit trail, and a per-AI-event evidence card.
  • Chief AI Officer — wants per-request evidence that the policy was honoured. TeamSync gives the evidence card: prompt, retrieval scope, reasoning trace, and human-checkpoint outcome.
  • Compliance Officer — wants coverage against the relevant regulatory framework. TeamSync gives continuously generated EU AI Act Articles 11-14 documentation.
  • General Counsel — wants defensibility of AI decisions years later. TeamSync makes every answer replayable per request from the audit ledger.

What it does not require.

  • No retrieval index outside the customer tenancy.
  • No vendor staff with standing access to customer content.
  • No model fine-tuning on customer documents.
  • No "trust the vendor" architectural assumption.

The capabilities that implement this.

  • DocuTalk — natural-language Q&A grounded in your corpus, scoped to the user's permissions, with click-through citations.
  • Semantic Search — platform-wide hybrid + entity-graph search that respects the same permissions.
  • Agentic AI Workflow — bounded-autonomy agents whose tool surface is defined by business rules and whose every action is audited.
  • Tamper-evident audit ledger — the Merkle chain that anchors every AI event.

Compliance frameworks served.

EU AI Act (Articles 11, 13, 14 for high-risk systems). NIST AI Risk Management Framework. ISO/IEC 42001 AI management system. FINRA Reg Notice 24-09 on AI in surveillance. FDA AI/ML SaMD framework. MAS Veritas. FCA AI Discussion Paper DP5/22. SOC 2 Type II. ISO/IEC 27001:2022. HIPAA + HITECH. GDPR Article 17.

See all 12 compliance overlays →


Talk to us

Bring the question that's on your desk this week.

A 30-minute conversation with a solutions engineer who already speaks your industry. No pitch deck.