Services (solo & founder-led)¶
You work directly with David Sanker—a lawyer-technologist who designs, builds, and governs AI systems for law, insurance, and finance. The goal: ship trustworthy, European-made AI with measurable ROI and audit-ready compliance.
Service catalog (what you can hire David for)¶
| Track | Core Deliverables | Typical Clients | Typical Results |
|---|---|---|---|
| AI Readiness Sprint (2 wks) | Use-case heat-map, ROI model, risk register, 12-month roadmap | Law firms, insurers, banks, public sector | C-suite clarity in 10–14 days; staged investment plan |
| Compliance Accelerator (1–2 wks) | EU AI Act classification, intended purpose, technical file, DPIA, staff workshop | Banks, fintech, med-tech, public bodies | “Audit-ready” documentation; smoother go/no-go gates |
| MVP → Pilot Builder (6–8 wks) | GraphRAG/RAG pilot with citations; connectors (DMS/CRM/case systems); evals & guardrails | Litigation boutiques, AML/KYC teams, claims | Pilot in sandbox; measurable quality/latency/cost |
| LexGraph Integration (4–6 wks) | Domain knowledge graph + hybrid search API (semantic + sparse + graph) | Global law firms, regulators, in-house knowledge teams | 50–80% faster knowledge retrieval (case-dependent) |
| AI Platform & MLOps (4 wks+) | Kubernetes inference, CI/CD, observability, SSO, mTLS, incident runbooks | Enterprise IT & risk | 99.9% SLOs, auditable traceability, drift alerts |
Founder-led delivery
David does the analysis, architecture, and core implementation himself, taking on one or two clients at a time. That limit keeps focus high and delivery predictable.
What “done” looks like (acceptance criteria)¶
- Value: a specific KPI moved (e.g., time-to-answer, first-pass accuracy, cost per answer).
- Explainability: answers carry citations & provenance; retrieval quality reported (precision@k, MRR).
- Compliance: EU AI Act technical file, DPIA, model/eval cards ready for internal/external review.
- Operate: runbooks, dashboards, SLOs; rollback path and incident handling documented.
- Handover: code ownership clarified; admins/operators trained.
Delivery phases (how we work together)¶
1) Discover — ~2 weeks¶
- Stakeholder interviews, data inventory, risk appetite.
- Use-case heat-map; cost/benefit model; prioritised backlog.
- Go/No-Go checkpoint with a partner-style write-up.
2) Build — ~6–8 weeks¶
- GraphRAG pilot: domain knowledge graph + hybrid retrieval (semantic + sparse + graph traversal).
- Connectors to DMS/CRM/case systems, policy portals, or search (Elasticsearch/OpenSearch).
- Guardrails (PII filters, policy checks) and evals (precision@k, MRR, groundedness, TP90 latency, cost per answer).
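One common way to fuse semantic, sparse, and graph rankings into a single result list is reciprocal rank fusion (RRF). The sketch below illustrates the general technique only; it is not the engagement's actual retrieval stack, and the document ids are invented for the example.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists (e.g., semantic, sparse, graph) into one.

    rankings: list of ranked document-id lists, best first.
    k: damping constant; 60 is a commonly used default.
    """
    scores = defaultdict(float)
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative per-retriever rankings for one query.
semantic = ["case_17", "case_03", "case_88"]
sparse   = ["case_03", "case_17", "case_42"]
graph    = ["case_03", "case_88", "case_17"]

fused = reciprocal_rank_fusion([semantic, sparse, graph])
```

Because RRF only uses ranks, not raw scores, it sidesteps the problem that semantic, sparse, and graph retrievers score on incomparable scales.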
3) Govern — ~1–2 weeks¶
- EU AI Act technical file: intended purpose, data governance, evaluations, risk controls, monitoring plan.
- DPIA, model/eval cards, security controls, post-market monitoring plan.
4) Scale (optional) — ~4 weeks¶
- Kubernetes inference (autoscaling/canaries), SSO/OIDC, mTLS, secrets, observability (metrics/traces/logs), drift detection.
- Cost dashboards; error budgets; alerting; SRE-style runbooks.
Security, privacy, and sovereignty (baseline)¶
- Network: zero-trust, mTLS, least privilege, IP allow-lists.
- Data: encryption in transit/at rest; retention policies; no shadow copies.
- Access: SSO/SAML/OIDC; role-based access; break-glass procedure.
- Secrets: vault-managed; never in code or CI logs.
- Logging: immutable audit logs; prompt/response trails with redaction where required.
- Change control: code review, CI checks, controlled releases, rollback plan.
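As one concrete illustration of redacting prompt/response trails before they reach the audit log, here is a minimal regex-based scrubber. The patterns are deliberately simplistic and purely illustrative; a production deployment would use vetted, locale-aware PII detectors.

```python
import re

# Illustrative patterns only; real systems need vetted, locale-aware detectors.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII spans with a typed placeholder before logging."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Keeping the placeholder typed (`[EMAIL]`, `[IBAN]`) preserves enough context for audit review without retaining the sensitive value itself.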
What you measure (and what David reports)¶
- Retrieval: precision@k, recall, MRR on labelled sets.
- Generation: factuality, faithfulness, citation coverage (human spot-checks included).
- Safety: PII/PHI leakage tests, jailbreak resistance, bias probes.
- Ops: TP90/TP95 latency, error rate, throughput, cost per answer.
- Business: time-to-answer, deflection rate, cycle-time reduction, adoption.
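The two retrieval metrics above have standard definitions; a minimal sketch of both (the query data is invented for illustration):

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved ids that are in the relevant set."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def mean_reciprocal_rank(queries):
    """queries: list of (retrieved_ids, relevant_set) pairs.

    Each query contributes 1/rank of its first relevant hit (0 if none).
    """
    total = 0.0
    for retrieved, relevant in queries:
        for rank, doc in enumerate(retrieved, start=1):
            if doc in relevant:
                total += 1.0 / rank
                break
    return total / len(queries)

# Two labelled queries: first relevant hit at rank 2, then at rank 1.
queries = [
    (["d3", "d1", "d7"], {"d1"}),
    (["d5", "d2", "d9"], {"d5"}),
]
```

Both metrics require labelled sets, which is why the Build phase includes assembling a small gold-standard query/answer corpus per client.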
Architecture & model choices (vendor-neutral)¶
- Hosting: EU cloud, on-prem/private cloud, or hybrid.
- Models: open-source (Llama, Mistral), European providers (e.g., Aleph Alpha), or hyperscaler models where justified.
- Abstraction layer so you can swap models and avoid lock-in.
- Data residency respected; sensitive data can remain on-prem.
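To make the abstraction-layer idea concrete, here is a minimal sketch assuming a simple `complete(prompt)` interface; the class and provider names are hypothetical stand-ins, not the actual implementation.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Provider-neutral interface: callers never import a vendor SDK directly."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in backend for illustration; a real adapter would wrap an API
    client or a locally hosted model behind the same method."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

def answer(model: ChatModel, question: str) -> str:
    # Application code depends only on the ChatModel protocol,
    # so swapping providers is a one-line change at the call site.
    return model.complete(question)

local = EchoModel("llama")
eu    = EchoModel("aleph-alpha")
```

Because `Protocol` uses structural typing, any backend exposing `complete(prompt)` satisfies the interface without inheriting from it, which is what keeps the swap cheap.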
Packages & pricing (indicative)¶
- Readiness Sprint (fixed/capped) — 2 weeks; roadmap, ROI, risk register, governance starter kit.
- Compliance Accelerator (fixed/capped) — 1–2 weeks; classification + technical file + DPIA + workshop.
- Pilot Build (T&M with cap) — 6–8 weeks; GraphRAG pilot with connectors, evals, guardrails.
- Scale-out (T&M) — 4+ weeks; platform hardening, SSO/mTLS, observability, SLOs.
Precise pricing depends on scope, data sources, and integration depth. You’ll get a written statement of work with milestones and exit criteria.
Integration targets (common systems)¶
- Legal: DMS, matter/case management, eDiscovery repositories.
- Insurance/Banking: claims systems, policy DBs, CRM, knowledge bases.
- Search & content: Elasticsearch/OpenSearch, SharePoint, Confluence.
- Data: Postgres, BigQuery, S3-compatible object stores.
Data stays in your environment. Transient processing copies are deleted after use.
Example 90-day plan¶
Days 1–14 — discovery, data inventory, risk & governance framing
Days 15–56 — GraphRAG pilot, connectors, evals, guardrails
Days 57–70 — technical file, DPIA, documentation
Days 71–90 (optional) — scale-out: SSO, autoscaling, drift monitoring, SLOs
Deliverables ship continuously; you don’t wait until the end.
Engagement models¶
- Fixed/capped fee — readiness & compliance accelerators.
- Time & materials — complex builds with evolving scope.
- Value-based — when outcomes are quantifiable.
Capacity: one major build + one advisory track concurrently for focus.
Start here¶
- Quick diagnostic with David → Book a 30-min call
- Or email mail@lawkraft.com — response within 1 business hour.