AI & Data Use Governance¶
Planned Feature
Dedicated AI governance features (model inventory, AI governance dashboard, policy configuration) are planned but not yet available in Dxtra. This page describes the regulatory landscape and what Dxtra supports today.
As AI and machine learning become central to business decisions, regulatory requirements around algorithmic transparency and accountability continue to expand. The EU AI Act, sector-specific AI regulations, and emerging algorithmic impact requirements create new governance obligations alongside traditional data protection compliance.
What is AI governance?¶
AI governance means systematically managing how your organization uses algorithms and automated decision-making. This includes documenting which systems process personal data, assessing their risks, implementing transparency measures, and monitoring them for bias and fairness. Effective AI governance demonstrates that your organization respects data subjects' rights and complies with regulations like GDPR Article 22 and the EU AI Act.
Algorithmic impact assessment¶
An algorithmic impact assessment (AIA) evaluates whether an AI system or automated decision-making process creates significant risks to individuals. Under GDPR, this is typically conducted as part of a Data Protection Impact Assessment (DPIA) when the system makes automated decisions with legal or similarly significant effects.
Dxtra's existing assessment system supports running DPIAs that include algorithmic impact considerations. When conducting a DPIA for an AI system, you can document:
- Decision scope — Who is affected and what decisions are automated?
- Data dependencies — What personal data does the system process?
- Risk factors — Does it involve sensitive data, vulnerable groups, or high-stakes decisions?
- Mitigation measures — What safeguards are in place?
Tip
Include an AIA whenever you deploy AI systems that determine eligibility, scoring, profiling, or other decisions affecting individuals' rights or interests. Use the DPIA workflow to structure this assessment.
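As an illustration, the four assessment fields above can be captured in a simple record. This is a minimal sketch; the class and field names are hypothetical and not part of Dxtra's API:

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative record of the AIA fields described above (hypothetical, not a Dxtra API)."""
    system_name: str
    decision_scope: str            # who is affected and what decisions are automated
    data_dependencies: list[str]   # personal data categories the system processes
    risk_factors: list[str]        # e.g. sensitive data, vulnerable groups, high stakes
    mitigation_measures: list[str] # safeguards in place

    def is_high_risk(self) -> bool:
        """Flag the system for a full DPIA when any risk factor is present."""
        return bool(self.risk_factors)

aia = AlgorithmicImpactAssessment(
    system_name="credit-scoring",
    decision_scope="Loan applicants; automated credit approval decisions",
    data_dependencies=["income", "payment history"],
    risk_factors=["high-stakes financial decision"],
    mitigation_measures=["human review of declines", "annual bias audit"],
)
print(aia.is_high_risk())  # -> True
```

A record like this maps directly onto the DPIA workflow fields, so the same information can be reused when the assessment is formalized.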
Transparency and disclosure obligations¶
Data subjects have rights to know when AI systems make decisions about them. Key transparency measures include:
- Notifying data subjects — Inform people when profiling or other automated decision-making affects them
- Providing explanations — Explain how the decision was made and what data influenced it
- Documenting decisions — Maintain records of algorithmic decisions when legally required
- Enabling human review — Provide a mechanism for individuals to contest automated decisions
Automated decision-making under GDPR Article 22¶
GDPR Article 22 restricts purely automated decisions with legal or similarly significant effects. Individuals have the right not to be subject to such decisions except in limited circumstances, and you must enable them to contest the decision and obtain human intervention.
Assess your systems:
- Is the decision made solely by an algorithm without human involvement?
- Does it produce legal effects (hiring, credit, insurance) or similarly significant effects?
If both are true, Article 22 applies. You must provide individuals with meaningful information about the decision logic and offer human review.
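The two-question test above can be sketched as a simple predicate. This is an illustrative helper for structuring your assessment, not legal advice; confirm the analysis with counsel:

```python
def article_22_applies(solely_automated: bool, legal_or_significant_effect: bool) -> bool:
    """Article 22 applies only when BOTH conditions from the assessment hold:
    the decision is made solely by an algorithm, AND it produces legal or
    similarly significant effects. Illustrative only, not legal advice.
    """
    return solely_automated and legal_or_significant_effect

# A fully automated credit decision: Article 22 applies
print(article_22_applies(True, True))   # True
# A recommendation reviewed by a human before any action: Article 22 does not apply
print(article_22_applies(False, True))  # False
```

Note that the "human involvement" in the first condition must be meaningful review with authority to change the outcome; rubber-stamping an algorithmic output does not take a system outside Article 22.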
Dxtra's own AI governance¶
Dxtra uses AI to power document generation, compliance recommendations, and assessments. Here is how we govern our own AI usage:
- Data protection — Customer data is never used to train our models
- Human-in-the-loop review — AI-generated documents can be routed for human review before publication (feature in progress)
- Quality assurance — A five-point quality gate ensures accuracy and relevance
- Performance tracking — AI model performance is monitored via Langfuse, capturing latency, token usage, and execution traces for all AI operations
Warning
Always review AI-generated content before relying on it for compliance decisions. AI can make mistakes or miss context specific to your organization.
AI regeneration quotas¶
Dxtra provides monthly allowances for AI-powered regeneration of assessments, recommendations, and documentation:
| Plan | Monthly regenerations |
|---|---|
| Start | 1 |
| Growth | 2 |
| Scale | 3 |
| Enterprise | 10 (custom) |
Regeneration is useful when you've updated data flows, added new systems, or want fresh risk assessments. Each regeneration triggers a new analysis and produces updated documentation.
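The quota logic implied by the table above can be sketched as follows. The plan names and limits come from the table, but the function itself is a hypothetical illustration, not Dxtra's API:

```python
# Monthly regeneration allowances from the plan table above.
MONTHLY_REGENERATIONS = {"Start": 1, "Growth": 2, "Scale": 3, "Enterprise": 10}

def can_regenerate(plan: str, used_this_month: int) -> bool:
    """Return True if the plan still has regenerations left this month.
    Illustrative sketch only; Enterprise limits are customizable in practice.
    """
    return used_this_month < MONTHLY_REGENERATIONS[plan]

print(can_regenerate("Growth", 1))  # True: 1 of 2 used
print(can_regenerate("Start", 1))   # False: monthly allowance exhausted
```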
Frequently Asked Questions¶
**Do we need an AIA for all AI systems?**
Not all AI systems require a full AIA, but those involving high-impact decisions, sensitive data, or vulnerable populations should have one. Conduct a DPIA with algorithmic impact considerations for high-risk systems.

**What happens if we can't comply with Article 22?**
Some systems may need to include human review in the decision process, offer opt-out mechanisms, or be redesigned. Document your approach and why you've chosen it.

**How often should we monitor AI systems for bias?**
Establish a monitoring schedule based on risk. High-stakes systems may require monthly or quarterly reviews; lower-risk systems can use annual audits.

**Can we use Dxtra's AI features for our own system development?**
Dxtra's AI capabilities help with governance documentation and assessment. For building your own AI systems, follow industry best practices for bias testing, validation, and monitoring.