Option A: Data Package
Receive the JSON evidence payload via API and feed it into your own model locally. You own model choice, prompt design, and output quality. Ideal for air-gapped or on-prem deployments.
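A minimal sketch of the Option A flow: turn the evidence payload into a grounded prompt for a locally hosted model. The payload shape here (a query plus claims with `text` and `doi` fields) is an assumption for illustration only, not the documented Evidentia schema.

```python
import json

# Assumed payload shape -- NOT the documented Evidentia schema.
EXAMPLE_PAYLOAD = json.dumps({
    "query": "Does drug X interact with drug Y?",
    "claims": [
        {"text": "X inhibits CYP3A4.", "doi": "10.1000/example.1"},
        {"text": "Y is a CYP3A4 substrate.", "doi": "10.1000/example.2"},
    ],
})

def build_grounded_prompt(payload_json: str) -> str:
    """Format the evidence payload as a prompt for your own local model."""
    payload = json.loads(payload_json)
    evidence = "\n".join(
        f"- {claim['text']} (DOI: {claim['doi']})"
        for claim in payload["claims"]
    )
    return (
        "Answer using ONLY the evidence below, and cite each DOI you rely on.\n"
        f"Question: {payload['query']}\n"
        f"Evidence:\n{evidence}"
    )

prompt = build_grounded_prompt(EXAMPLE_PAYLOAD)
print(prompt)
```

Because the prompt is built on your side, you keep full control over wording, model choice, and how strictly the model is constrained to the supplied evidence.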
Option B: Managed LLM Chat (Experimental)
Evidentia runs Gemini 3.1 Pro with a validated system prompt and anti-hallucination guardrails. You send the user query and Evidentia returns a grounded, cited response. Requires sending the query to the cloud.
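A sketch of the Option B round trip from the integrator's side: build the request body (only the query leaves your environment) and verify that every claim in the returned response carries a citation. The endpoint URL, field names, and response shape are assumptions for illustration; the real contract comes from the Evidentia API reference.

```python
import json

# Hypothetical endpoint -- placeholder, not a real URL.
CHAT_ENDPOINT = "https://api.evidentia.example/v1/chat"

def build_chat_request(user_query: str) -> str:
    """Body POSTed to the managed endpoint; only the query is sent."""
    return json.dumps({"query": user_query})

def all_claims_cited(response_json: str) -> bool:
    """True if every claim in the grounded response has a DOI or source link."""
    response = json.loads(response_json)
    return all(
        claim.get("doi") or claim.get("source_url")
        for claim in response["claims"]
    )

# Example response in the assumed shape:
resp = json.dumps({
    "answer": "X likely interacts with Y.",
    "claims": [{"text": "X inhibits CYP3A4.", "doi": "10.1000/example.1"}],
})
print(all_claims_cited(resp))
```

A check like `all_claims_cited` is a cheap client-side guardrail on top of Evidentia's own: responses whose claims lack a verifiable source can be rejected before they reach the end user.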
The two options are not mutually exclusive: use Option B for cloud clients and Option A for air-gapped or on-prem deployments.
Liability and hallucination risk: Evidentia provides cited evidence, not recommendations. Every claim includes a DOI or source link so the end user can verify it directly. Via the managed API (Option B), Evidentia applies anti-hallucination guardrails and grounded citations; for local deployments (Option A), the integrator owns model choice, prompt design, and therefore output quality.