
AI Act Compliance

Regulation (EU) 2024/1689 — readiness programme

Kuumba (Pty) Ltd trading as Cavitech AI ("Cavitech," "we," "us," or "our") is subject to Regulation (EU) 2024/1689 (the "AI Act") in respect of its clinical decision support software placed on the European Union market. Our radiograph analysis pipeline is a high-risk AI system under Article 6(1) of the AI Act, as an AI system that is a medical device subject to third-party conformity assessment under Regulation (EU) 2017/745, which is listed in Annex I of the AI Act. This page summarises our active compliance programme across the articles that govern high-risk systems, and the public-facing artefacts that back it.

This document is the compliance-readiness companion to our AI Act Transparency disclosure. The Transparency page covers classification, intended purpose, human oversight (Art. 14), and known limitations. The page you are reading covers Risk Management (Art. 9), Data Governance (Art. 10), Technical Documentation (Art. 11), Record-keeping (Art. 12), Accuracy, Robustness & Cybersecurity (Art. 15), Quality Management (Art. 17), Conformity Assessment (Art. 43), and Post-Market Monitoring (Art. 72).

Full compliance with most Chapter III obligations becomes mandatory on 2 August 2027. Our internal programme is structured to be conformity-ready ahead of that date, and to run in parallel with CE-MDR certification under Regulation (EU) 2017/745.

Clinical-AI architecture. Every clinical-AI feature in the Cavitech platform (orthodontic, temporomandibular, sleep, soft-tissue, and the combined cross-domain assessment) runs through a strictly separated three-layer architecture: (1) structured clinical inputs, (2) a deterministic rules engine that encodes published clinical frameworks (DC/TMD, IOTN DHC + AC, STOP-BANG + Friedman staging, Finkelstein oral-lesion method, etc.) and produces the clinical finding, and (3) a large language model that writes the finding up as readable prose for the dentist. The LLM is gated by schema validation that rejects any diagnosis or recommendation not already present in the deterministic finding, and by a mandatory human approval step before any output reaches a patient record. This architecture is what places the deterministic rules engine — not the LLM — as the regulated medical-device component, and allows us to use third-party general-purpose language models without tainting the conformity assessment.
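The gating rule described above can be illustrated with a minimal sketch. All names here are hypothetical and simplified for exposition; they are not our production code. The point it demonstrates is only the subset check: the LLM draft may not mention any diagnosis or recommendation that the deterministic finding did not already contain.

```python
# Illustrative sketch of the schema-validation gate described above.
# All names and types are hypothetical, not the production implementation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    """Deterministic output of the rules engine (layer 2)."""
    diagnoses: frozenset[str]        # e.g. codes derived from DC/TMD, IOTN
    recommendations: frozenset[str]


def gate_llm_draft(finding: Finding,
                   draft_diagnoses: set[str],
                   draft_recommendations: set[str]) -> bool:
    """Accept the LLM prose (layer 3) only if every diagnosis and
    recommendation it mentions is already in the deterministic finding."""
    return (draft_diagnoses <= finding.diagnoses
            and draft_recommendations <= finding.recommendations)


finding = Finding(frozenset({"IOTN-DHC-4"}), frozenset({"refer-orthodontist"}))
gate_llm_draft(finding, {"IOTN-DHC-4"}, {"refer-orthodontist"})  # accepted
gate_llm_draft(finding, {"IOTN-DHC-4", "caries"}, set())         # rejected
```

A draft that passes the gate still does not reach the patient record directly; the mandatory human approval step follows in every case.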

Section 1

Risk Management System (Article 9)

We operate a continuous, documented Risk Management System covering every foreseeable harm associated with the AI components of our platform. The system is reviewed quarterly and after any serious incident.

Scope of risk analysis

Each high-risk AI component — radiograph analysis (pathology detection, anatomy segmentation, bone-loss measurement), cephalometric analysis, 3D CBCT guidance, and the TMJ assessment pipeline — has a dedicated risk file. Low-risk components (Scribe, scheduling, treatment-plan formatting, conversational assistants) are documented with a lightweight risk note referencing human-oversight controls.

Identified risks and controls

Risks we track and mitigate include: false negatives on clinically significant pathology; false positives triggering over-treatment; representation bias (training data skewed toward particular adult dentition profiles); input-distribution drift from older imaging equipment; confidence-score miscalibration; unintended use of the system for age groups or image types outside its validated scope; cybersecurity attacks against the inference pipeline; and confidentiality breaches of health data. Each risk carries a documented severity × likelihood score, a control (mandatory finding-approval workflow, confidence thresholds, input-type validation, rate-limiting, encryption in transit and at rest), a residual-risk rating, and an acceptability sign-off.

Living document

The risk register is reviewed at minimum every three months, and immediately upon receipt of any adverse event or serious incident reported through the in-app incident reporter. Changes are version-controlled and available to competent authorities, notified bodies, and our EU Authorised Representative on request.

Section 2

Data Governance (Article 10)

Our full data-governance statement is published on the AI Act Transparency page. The headline commitment, restated here for completeness:

We do not train on customer or patient data.

Patient data uploaded to Cavitech AI is processed solely to deliver the requested analysis and never flows into any training, fine-tuning, or evaluation pipeline. This separation is architectural, not policy-based — training datasets live in a distinct environment with no ingress from production. Our AI models are developed on purpose-curated, annotated datasets from established academic sources. We maintain complete documentation of training data provenance, annotator qualifications, quality-assurance methodology, known limitations, and bias assessments. This documentation is retained for the full operational life of each model plus ten years, in line with Art. 10(2) and Art. 16(d).

Section 3

Technical Documentation (Article 11 / Annex IV)

We maintain technical documentation for each high-risk AI system that mirrors the structure of Annex IV of the AI Act. Each system's technical file contains:

  • General description — intended purpose, provider identification, version history, interactions with hardware and other software, interfaces, and forms of distribution (SaaS only).
  • Design and development — development methodology, design choices and assumptions, system architecture, data-flow diagrams, computational resources, and third-party components (model providers, inference hosts, storage providers).
  • Data — cross-reference to our data-governance record (Section 2 above, and the published Transparency disclosure).
  • Human oversight measures — documented in full on the Transparency page and in our internal Risk Management file.
  • Performance characteristics — validated accuracy, sensitivity, specificity, and confidence calibration per pathology class, including known performance variation by image type.
  • Risk Management file — the Art. 9 risk register.
  • Changes through the lifecycle — model-version log, deployment log, rollback records, and retraining events.
  • Standards applied — ISO 14971 (risk management), IEC 62304 (medical device software lifecycle), ISO 13485 (QMS), IEC 81001-5-1 (health-software security), and ISO/IEC 42001 (AI management system).
  • EU Declaration of Conformity — to be issued following conformity assessment under Art. 43.
  • Post-Market Monitoring plan — see Section 6.

The full technical file is not public, but is made available to competent authorities, notified bodies, our EU Authorised Representative, and qualifying customer regulatory teams under confidentiality obligation.

Section 4

Record-keeping & Logs (Article 12)

Cavitech AI automatically logs events relevant to the operation of its high-risk AI systems for the purpose of traceability. Logs include: date and time of each inference; identity of the input (hashed reference — no raw image content in the log); the model version used; the output issued; the reviewing clinician (where captured); and the approval decision (approve / decline / override). Logs are retained for a minimum of six months, and longer where required by applicable medical-records retention laws in the practice's jurisdiction.

Logs are made available to competent authorities on reasoned request in accordance with Art. 12(3), and to the deploying dental practice under the terms of our Data Processing Addendum.

Section 5

Accuracy, Robustness & Cybersecurity (Article 15)

Art. 15 requires high-risk AI systems to achieve an appropriate level of accuracy, robustness, and cybersecurity for their intended purpose, and to perform consistently throughout their lifecycle.

Accuracy

We publish per-model performance characteristics (sensitivity, specificity, area-under-curve, calibration) in our technical file and in the product documentation supplied to each deploying practice. The published metrics are refreshed on every material model release and are not marketed as diagnostic certainty — they describe statistical behaviour under evaluation conditions. All clinical use is gated by the mandatory human-approval workflow documented on the Transparency page.

Robustness

Our models are evaluated against input-quality variation (resolution, contrast, projection angle, compression artefacts), adversarial perturbations, and distribution drift from older imaging equipment. Where model performance degrades materially on a given input class, the user interface surfaces a low-confidence warning and the model is configured to escalate rather than answer silently. We roll back any model release where post-deployment monitoring shows a regression against benchmarks.

Cybersecurity

Detailed controls are documented on the Security & Incident Response page. Highlights: TLS 1.2+ in transit, AES-256 at rest, least-privilege access control, signed infrastructure-as-code change management, model-artefact integrity verification, API rate-limiting and abuse detection, SSO for practice administrators, and a coordinated vulnerability disclosure policy reachable at security@kuumba.dev.

Section 6

Post-Market Monitoring (Article 72)

Cavitech operates an active Post-Market Monitoring System that collects real-world performance data across every deploying practice and feeds it back into the Risk Management System (Art. 9).

Data sources

Inputs to the monitoring system include: the in-app adverse-event reporter ("Report an incident"), the in-app quality-complaint reporter ("Raise a concern"), the practice-facing clinician approval / decline / override logs, periodic customer satisfaction surveys, support ticket metadata (clinical-category tickets only), usage telemetry, and scheduled performance re-evaluation against held-out evaluation sets (without retraining).

Trigger thresholds

A reported incident classified as "life-threatening" or "serious" is routed immediately to our Regulatory Correspondent and escalated to the competent authority in the practice's jurisdiction within the timeline prescribed by applicable law — 15 days for serious incidents, 10 days for death or serious deterioration in health, and 2 days for a serious public health threat under EU MDR Art. 87 — and within the equivalent timelines under SAHPRA, MHRA, and FDA post-market rules.

Corrective action

Where monitoring identifies a reproducible safety issue, we initiate a Field Safety Corrective Action (FSCA) — which may include issuing a Field Safety Notice, a model rollback, a software update, a configuration change, or, in extreme cases, withdrawal of a feature or model from affected markets. Affected practices are notified through the in-app notification system and by email to the listed compliance contact.

Periodic Safety Update Reports

A Periodic Safety Update Report (PSUR) is compiled annually and retained for competent authority and notified body review under Art. 86 MDR and Art. 72 AI Act.

Section 7

Quality Management System (Article 17)

We operate a Quality Management System aligned with ISO 13485 (medical device QMS) and ISO/IEC 42001 (AI management system). The QMS governs: the software development lifecycle (per IEC 62304); data and data governance processes; risk management; human oversight controls; monitoring, reporting and traceability; resource management; change-control procedures; and the obligations laid down in Art. 17(1) AI Act.

QMS documentation is controlled, version-tracked, and reviewed at least annually by the Information Officer (Ruan Baker) and the Deputy Information Officer (Dr Stefan Pretorius). Significant changes are reviewed by the Regulatory Correspondent before release.

Section 8

Conformity Assessment (Article 43)

Under Art. 43, high-risk AI systems that are also medical devices undergo a conformity assessment procedure involving a notified body appointed under the Medical Devices Regulation. Our planned procedure runs in parallel with CE-MDR certification:

  • EU Authorised Representative — appointment of an EU-based Authorised Representative under MDR Art. 11, acting as our point of contact for EU competent authorities and notified bodies.
  • Notified Body selection — engagement of a medical-device notified body with scope covering Class IIa radiological software.
  • Technical file submission — formal submission of the technical documentation described in Section 3.
  • Clinical evaluation — submission of our Clinical Evaluation Report (CER) aligned with MDR Annex XIV and Art. 61.
  • EU Declaration of Conformity — issued once the notified body has certified conformity with Art. 43 and the relevant MDR provisions.
  • EUDAMED registration — registration of the device and issuance of a Single Registration Number (SRN) under Art. 31 MDR.

Until conformity assessment is complete, our software is made available under the jurisdictional investigational / early-access pathways applicable in each market and carries the clear investigational-device disclosure published on every page of this site.

Section 9

Reporting to us

Regulators, customers, clinicians, and members of the public can reach our compliance team using the following routes:

  • Clinical adverse events — use the "Report an incident" button inside the app (Settings → Compliance), or email regulatory@kuumba.dev.
  • Quality or performance concerns — use the "Raise a concern" button inside the app (Settings → Compliance), or email regulatory@kuumba.dev.
  • Security vulnerabilities — email security@kuumba.dev under our coordinated disclosure policy published on the Security page.
  • Data protection or data subject requests — email privacy@kuumba.dev.
Accountability

Named officers & contacts

Registered entity: Kuumba (Pty) Ltd trading as Cavitech AI
Registered address: 2 Farrar Street, Comet, Boksburg, Gauteng, South Africa
Questions about our AI Act programme

For regulatory enquiries, notified body coordination, or to request controlled access to our technical documentation, contact us at regulatory@kuumba.dev or write to Kuumba (Pty) Ltd trading as Cavitech AI, 2 Farrar Street, Comet, Boksburg, Gauteng, South Africa.
