Last updated: 2026-04-06

Cron Triggers

Dxtra uses Hasura cron triggers for recurring background tasks. These triggers fire on a schedule and send a POST request to dx-conduit webhook handlers. All triggers authenticate via the nhost-webhook-secret header.
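As a minimal sketch of the shared-secret authentication described above — the header name comes from this page, but the handler shape and the env-style constant are assumptions — a handler might compare the header value in constant time:

```python
import hmac

# Illustrative shared secret; in practice this would come from the environment.
WEBHOOK_SECRET = "example-secret"

def is_authorized(headers: dict) -> bool:
    """Constant-time check of the nhost-webhook-secret header (hypothetical helper)."""
    supplied = headers.get("nhost-webhook-secret", "")
    return hmac.compare_digest(supplied, WEBHOOK_SECRET)

print(is_authorized({"nhost-webhook-secret": "example-secret"}))  # True
print(is_authorized({"nhost-webhook-secret": "wrong"}))           # False
```

`hmac.compare_digest` avoids leaking the secret's length or prefix through timing differences, which a plain `==` comparison would.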

Overview

| Trigger | Schedule | Human-Readable | Retry | Timeout | Purpose |
| --- | --- | --- | --- | --- | --- |
| delete_expired_data_controllers | 0 0 * * * | Daily at midnight UTC | None | Default | Delete expired data controller accounts |
| monthly_usage_check | 0 0 1 * * | 1st of month at midnight UTC | 3 retries / 10min | 600s | Check data subject counts against subscription limits |
| process_scheduled_events | */10 * * * * | Every 10 minutes | None | 240s | Process the scheduled events queue |
| verify_domain_records | */15 * * * * | Every 15 minutes | None | Default | Verify pending domain DNS records |
| consolidate_data_subjects | 0 * * * * | Every hour (on the hour) | 2 retries / 2min | 300s | Merge duplicate data subject records |
| dsar_compliance_check | 0 8 * * * | Daily at 8:00 AM UTC | 2 retries / 10min | 120s | Check for overdue DSARs and unverified completions |

Trigger Details

delete_expired_data_controllers

Schedule: 0 0 * * * (daily at midnight UTC)
Handler: /hasura/cron_triggers/delete-expired-controllers
Retries: None

Deletes data controller accounts whose subscription has ended. Runs once daily to clean up expired trial accounts and cancelled subscriptions. This is a hard delete that removes the organization and associated data after the grace period.

Destructive Operation

This trigger permanently removes data controller records. The subscription end date and grace period should be carefully configured to avoid premature deletion.
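The deletion-eligibility rule above can be sketched as a pure function. The 30-day grace period here is illustrative — the page does not state the actual value — and the record shape is an assumption:

```python
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=30)  # illustrative; actual grace period not specified in this doc

def is_deletable(subscription_end: date, today: date, grace: timedelta = GRACE_PERIOD) -> bool:
    """Hard-delete only once the grace period after subscription end has fully elapsed."""
    return today >= subscription_end + grace

print(is_deletable(date(2026, 1, 1), date(2026, 2, 15)))  # True: grace period has passed
print(is_deletable(date(2026, 1, 1), date(2026, 1, 20)))  # False: still inside grace period
```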


monthly_usage_check

Schedule: 0 0 1 * * (1st of each month at midnight UTC)
Handler: /hasura/cron_triggers/monthly-usage-check
Retries: 3 (600s interval, 600s timeout, 6h tolerance)

Checks that each organization's data subject count is within its subscription limit. If an organization exceeds its allotted data subjects, this trigger can:

  • Send usage notifications to the DPO
  • Flag the organization for subscription upgrade
  • Apply rate limits to consent collection

The 6-hour tolerance window ensures the job completes even if temporarily delayed.
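The three escalation actions above can be sketched as a small decision function. The record shape and the 20% rate-limit threshold are assumptions for illustration, not documented behavior:

```python
def check_usage(org: dict) -> list[str]:
    """Return escalation actions for an org with 'subject_count' and 'limit' keys (hypothetical shape)."""
    actions = []
    if org["subject_count"] > org["limit"]:
        actions.append("notify_dpo")           # send usage notification to the DPO
        actions.append("flag_for_upgrade")     # flag the organization for subscription upgrade
        if org["subject_count"] > org["limit"] * 1.2:  # 20% over: illustrative threshold
            actions.append("rate_limit_consent_collection")
    return actions

print(check_usage({"subject_count": 1300, "limit": 1000}))
print(check_usage({"subject_count": 900, "limit": 1000}))  # [] when within limits
```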


process_scheduled_events

Schedule: */10 * * * * (every 10 minutes)
Handler: /hasura/cron_triggers/process_scheduled_events
Retries: None (240s timeout, 6h tolerance)

Processes events from the scheduled_events table. This is the main scheduler that drives recurring tasks such as:

  • Periodic compliance re-assessments
  • Data retention scans
  • Integration polling (e.g., NetSuite data sync)
  • Automated report generation

The handler receives a transformed payload containing only the scheduled event data. Events are processed in batches, with a 4-minute timeout per execution.
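A batch loop under the 240-second budget might look like the following sketch. The queue-draining strategy (stop early, leave the remainder for the next run) is an assumption consistent with the 10-minute cadence, not documented behavior:

```python
import time

def process_batch(events: list, handler, deadline_seconds: float = 240) -> list:
    """Process queued events until done or the 4-minute timeout approaches (sketch)."""
    start = time.monotonic()
    processed = []
    for event in events:
        if time.monotonic() - start > deadline_seconds:
            break  # leave the rest for the next 10-minute cron run
        handler(event)
        processed.append(event)
    return processed

print(process_batch(["reassess", "retention_scan"], handler=print))
```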


verify_domain_records

Schedule: */15 * * * * (every 15 minutes)
Handler: /hasura/cron_triggers/domain_verifications
Retries: None

Checks DNS records for all pending domain verifications. When a data controller adds a custom domain for their Transparency Center, they must add DNS records (TXT/CNAME) to prove ownership. This trigger periodically checks if those records are in place.

Once verified, the domain_verification_cors_sync event trigger fires to update the CORS allowlist.
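Once TXT records are fetched from a resolver, the ownership check itself is a simple comparison. The `dxtra-verification=` record format below is purely illustrative — the page does not specify the actual record contents:

```python
def txt_record_matches(fetched_records: list[str], expected_token: str) -> bool:
    """True if any fetched TXT record carries the expected ownership token.
    The record format (dxtra-verification=<token>) is a hypothetical example."""
    return any(r.strip() == f"dxtra-verification={expected_token}" for r in fetched_records)

print(txt_record_matches(["dxtra-verification=abc123"], "abc123"))   # True
print(txt_record_matches(["v=spf1 include:example.com"], "abc123"))  # False: unrelated record
```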

Manual Verification

Users can also trigger an immediate verification check via the triggerDomainVerificationRecheck Hasura action instead of waiting for the next cron cycle.


consolidate_data_subjects

Schedule: 0 * * * * (every hour on the hour)
Handler: /hasura/cron_triggers/consolidate-data-subjects?dryRun=false&batchLimit=50
Retries: 2 (120s interval, 300s timeout, 600s tolerance)

Merges duplicate data subject records where the same person has multiple data_subject_id values (detected via PPL hash matching). This is Phase 4 of the Identity Consolidation System.

How it works:

  1. Scans for data subjects with matching PPL hashes across different data_controller_users records
  2. Merges duplicate records by consolidating consent history, preference values, and rights requests onto the primary record
  3. Processes in batches of 50 to avoid long-running transactions

Immediate merges happen at login time; this cron catches any race condition duplicates that slip through.

Query parameters:

  • dryRun=false -- performs actual merges (set to true for testing)
  • batchLimit=50 -- maximum records to process per execution
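Step 1 of the process above — finding merge candidates by matching PPL hashes, capped at the batch limit — can be sketched as follows. The record shape is a hypothetical simplification of the actual tables:

```python
from collections import defaultdict

def find_duplicate_groups(subjects: list[dict], batch_limit: int = 50) -> list[list]:
    """Group records by PPL hash; groups with more than one member are merge candidates."""
    by_hash = defaultdict(list)
    for s in subjects:
        by_hash[s["ppl_hash"]].append(s["data_subject_id"])
    groups = [ids for ids in by_hash.values() if len(ids) > 1]
    return groups[:batch_limit]  # cap per run to avoid long-running transactions

subjects = [
    {"data_subject_id": 1, "ppl_hash": "h1"},
    {"data_subject_id": 2, "ppl_hash": "h1"},  # same person as id 1
    {"data_subject_id": 3, "ppl_hash": "h2"},
]
print(find_duplicate_groups(subjects))  # [[1, 2]]
```

Within each group, the actual trigger would then consolidate consent history, preference values, and rights requests onto the primary record.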

dsar_compliance_check

Schedule: 0 8 * * * (daily at 8:00 AM UTC)
Handler: /hasura/cron_triggers/dsar-compliance-check
Retries: 2 (600s interval, 120s timeout, 6h tolerance)

Daily compliance check for Data Subject Access Requests (DSARs). Identifies:

  • Overdue requests: DSARs that have exceeded the regulatory response deadline (one month under GDPR, commonly implemented as 30 days)
  • Unverified completions: DSARs marked as completed but not yet verified by the DPO

When issues are found, this trigger generates notification messages for the responsible DPO and can escalate to compliance issue records.
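The two issue types above can be sketched as a classifier over a DSAR record. The field names and the 30-day approximation of the one-month deadline are assumptions for illustration:

```python
from datetime import date, timedelta

RESPONSE_DEADLINE = timedelta(days=30)  # one month under GDPR Art. 12(3), approximated as 30 days

def classify(dsar: dict, today: date) -> list[str]:
    """Classify a DSAR record (hypothetical shape) into the issue types this trigger reports."""
    issues = []
    if dsar["status"] != "completed" and today > dsar["received"] + RESPONSE_DEADLINE:
        issues.append("overdue")
    if dsar["status"] == "completed" and not dsar.get("verified_by_dpo"):
        issues.append("unverified_completion")
    return issues

print(classify({"status": "open", "received": date(2026, 1, 1)}, date(2026, 2, 15)))
print(classify({"status": "completed", "received": date(2026, 1, 1)}, date(2026, 1, 10)))
```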

Regulatory Context

GDPR Article 12(3) requires data controllers to respond to DSARs within one month, extendable by two further months for complex requests. This trigger ensures organizations are alerted before deadlines are missed.


Scheduling Reference

Cron Expression Syntax

┌───────────── minute (0-59)
│ ┌───────────── hour (0-23)
│ │ ┌───────────── day of month (1-31)
│ │ │ ┌───────────── month (1-12)
│ │ │ │ ┌───────────── day of week (0-6, Sunday=0)
│ │ │ │ │
* * * * *
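To make the step syntax used by triggers like process_scheduled_events (*/10) concrete, here is a sketch of matching a single cron field against a value; it covers only the subset of cron syntax used on this page (`*`, `*/n`, and literal numbers):

```python
def field_matches(expr_field: str, value: int) -> bool:
    """Match one cron field: '*' (any), '*/n' (every n), or a literal number."""
    if expr_field == "*":
        return True
    if expr_field.startswith("*/"):
        return value % int(expr_field[2:]) == 0
    return int(expr_field) == value

# '*/10 * * * *' fires when the minute is a multiple of 10:
print(field_matches("*/10", 30))  # True
print(field_matches("*/10", 35))  # False
print(field_matches("0", 0))      # True: '0 8 * * *' matches minute 0, hour 8
```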

Retry Configuration Fields

| Field | Description |
| --- | --- |
| num_retries | Maximum number of retry attempts on failure |
| retry_interval_seconds | Wait time between retries |
| timeout_seconds | Maximum execution time per invocation |
| tolerance_seconds | How late a trigger can fire and still be executed (prevents backlog execution after downtime) |
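The tolerance_seconds semantics can be sketched as a single predicate: a missed invocation still runs only if it is no later than the tolerance window, which prevents a flood of backlogged executions after downtime.

```python
def should_execute(scheduled_ts: float, now_ts: float, tolerance_seconds: float) -> bool:
    """Run a (possibly late) invocation only within tolerance_seconds of its scheduled slot."""
    return 0 <= now_ts - scheduled_ts <= tolerance_seconds

# With monthly_usage_check's 6h tolerance: a run 2h late still executes, one 7h late is skipped.
print(should_execute(0, 2 * 3600, 6 * 3600))  # True
print(should_execute(0, 7 * 3600, 6 * 3600))  # False
```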