Architecture

This page outlines the structural principles governing Discriminology’s measurement and intelligence systems.

Meaning Before AI

Discriminology designs measurement systems in which meaning is formalized before artificial intelligence is invoked. Across our platforms — including Discriminology+ — structured data is processed through explicit, versioned measurement schemas that define how information is aggregated, scored, and interpreted. Artificial intelligence operates only downstream of this deterministic analytic core. It reflects on structured summaries; it does not compute scores, access raw responses, or redefine constructs.

Measurement remains authoritative. Intelligence is layered.
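To make the "meaning before AI" principle concrete, the sketch below shows what a versioned measurement schema and its deterministic classification rule might look like. All construct names, item keys, and thresholds here are hypothetical illustrations, not Discriminology's actual configuration.

```python
# Illustrative versioned measurement schema. Every name and number
# below is a hypothetical example, not real configuration.
SCHEMA = {
    "version": "1.2.0",
    "constructs": {
        "belonging": {
            "items": ["q1", "q2", "q3"],
            "reverse_scored": ["q3"],   # item keyed in the opposite direction
            "scale_max": 5,             # 1..5 Likert scale
            "thresholds": {"low": 2.5, "high": 4.0},
        }
    },
}

def classify(mean_score: float, thresholds: dict) -> str:
    """Rule-based classification: the same score and the same schema
    version always yield the same label. No model inference involved."""
    if mean_score < thresholds["low"]:
        return "needs_attention"
    if mean_score >= thresholds["high"]:
        return "strength"
    return "developing"
```

Because the schema is explicit data rather than learned behavior, any result can be traced back to the schema version that produced it.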

Layered System Design

Our architecture separates responsibility across three distinct layers:

1. Survey & Data Ingestion

Data is securely ingested from external systems (e.g., Qualtrics or other structured sources) and validated against the expected instrument structure. Personally identifiable information is not required or collected for analytic processing.
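A minimal sketch of this validation step, under the assumption that each ingested row is a mapping of item keys to responses: rows missing expected items are flagged, and unexpected fields (which could carry identifying information) are flagged rather than silently accepted. Field names are illustrative.

```python
def validate_rows(rows, expected_items):
    """Check each ingested row against the expected instrument
    structure; return a list of (row_index, problem) findings."""
    errors = []
    for i, row in enumerate(rows):
        missing = expected_items - row.keys()   # items the instrument requires
        extra = row.keys() - expected_items     # fields that should not be here
        if missing:
            errors.append((i, f"missing items: {sorted(missing)}"))
        if extra:
            errors.append((i, f"unexpected fields: {sorted(extra)}"))
    return errors
```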

2. Deterministic Measurement Core

All scoring and classification occur through rule-based analytic logic governed by formal measurement configuration. This includes construct definitions, item mappings, reverse scoring rules, performance thresholds, and demographic grouping logic.

No machine learning is used to compute scores. Results are reproducible and auditable.
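A pure-function sketch of what rule-based scoring with reverse keying can look like, assuming a 1..N Likert scale (the flip formula and parameter names are illustrative assumptions):

```python
def score_construct(responses, items, reverse_scored, scale_max):
    """Deterministic scoring: reverse-keyed items are flipped
    (scale_max + 1 - value on a 1..scale_max scale), then all item
    values are averaged. Identical inputs always give identical
    scores, so results are reproducible and auditable."""
    values = []
    for response in responses:
        for item in items:
            v = response[item]
            if item in reverse_scored:
                v = scale_max + 1 - v
            values.append(v)
    return sum(values) / len(values)
```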

3. Reflective Intelligence Layer

AI operates only on aggregated, pre-computed summaries. It does not receive raw response rows, individual-level identifiers, or database access. Its role is interpretive augmentation — supporting reflection on structured results within defined boundaries.

This separation ensures that analytic meaning is governed by explicit rules, not generative inference.

Data Minimization by Design

Discriminology systems are architected around strict data minimization principles:

  • No student names stored in analytic systems

  • No email addresses required for scoring

  • No device fingerprinting or behavioral tracking

  • No persistent individual profiling

Aggregation occurs before AI invocation. Individual responses are never transmitted to AI systems. There is no internal identity layer linking analytic results to named students.

The architecture enforces this constraint at the system level — not as a policy preference, but as a structural limitation.
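The "aggregation before AI invocation" constraint can be sketched as a boundary function: individual scores go in, and only group-level summaries come out to be passed downstream. The grouping key and summary fields here are illustrative assumptions.

```python
def aggregate_for_ai(scores_by_group):
    """Reduce per-response scores to group-level summaries. Only this
    aggregate payload -- never an individual response or identifier --
    crosses the boundary into the AI layer."""
    return {
        group: {
            "n": len(scores),
            "mean": round(sum(scores) / len(scores), 2),
        }
        for group, scores in scores_by_group.items()
    }
```

Because the AI layer is only ever handed the return value of a function like this, the minimization constraint is structural: there is no code path that forwards row-level data.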

Controlled AI Deployment

AI-assisted interpretation is deployed within Discriminology’s managed AWS environment using AWS Bedrock.

Key architectural safeguards include:

  • All processing occurs within Discriminology’s AWS account

  • No external AI APIs are called

  • Customer data is not used to train foundation models (per AWS Bedrock service terms)

  • AI receives only structured analytic summaries

  • Outputs are constrained by defined prompt templates and validated JSON schemas

  • No AI system has direct access to raw databases or identifiable information

AI augments interpretation. It does not generate new data, automate decisions about individual students, or override measurement definitions.
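A hand-rolled sketch of structured output validation at this boundary: the model's response is parsed and checked against an expected shape before anything is rendered, and anything off-schema is rejected rather than displayed. The field names and allowed values are hypothetical, not Discriminology's actual schema.

```python
import json

# Hypothetical expected shape for one AI-generated interpretation.
EXPECTED_KEYS = {"construct": str, "interpretation": str, "confidence": str}
ALLOWED_CONFIDENCE = {"low", "medium", "high"}

def validate_ai_output(raw: str):
    """Parse and structurally validate a model response before it is
    shown to users. Returns the parsed object, or None on any
    violation -- off-schema output is never rendered."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(obj, dict) or set(obj) != set(EXPECTED_KEYS):
        return None
    for key, expected_type in EXPECTED_KEYS.items():
        if not isinstance(obj[key], expected_type):
            return None
    if obj["confidence"] not in ALLOWED_CONFIDENCE:
        return None
    return obj
```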

Governance & Security

Security controls are embedded across the system:

  • Role-based authentication (AWS IAM + Cognito)

  • Per-survey authorization enforcement

  • TLS-enforced transport security

  • Encryption at rest (AWS-managed)

  • Structured output validation and XSS protections

  • Cache isolation scoped to survey ownership

Every analytic and AI request verifies user authorization before data is returned. No cross-survey or cross-tenant exposure is permitted.
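The fail-closed, per-survey check described above can be sketched as a gate in front of every data access. The grant representation and error type are illustrative assumptions.

```python
class AuthorizationError(Exception):
    """Raised when a caller lacks a grant for the requested survey."""

def fetch_summary(user_grants, survey_id, summaries):
    """Fail closed: data for a survey is returned only when the
    caller holds an explicit grant for that exact survey, so
    cross-survey access is rejected before any data is read."""
    if survey_id not in user_grants:
        raise AuthorizationError(f"no grant for survey {survey_id}")
    return summaries[survey_id]
```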

Measurement infrastructure must be durable. Our design decisions prioritize reproducibility, auditability, and long-term reliability over speed or novelty.

Architectural Position

Discriminology+ is not primarily a survey tool. It is a schema-governed measurement infrastructure layer that:

  • Ingests structured survey data from multiple sources

  • Applies formal measurement logic to produce deterministic analytics

  • Layers constrained AI-assisted interpretation on pre-computed summaries

  • Preserves privacy through architectural data minimization

  • Maintains auditability through versioned schema governance

As educational environments decentralize and AI systems proliferate, we believe shared measurement frameworks must remain interpretable, secure, and interoperable. Our architecture is designed to preserve shared reference across distributed systems without requiring uniform authority.

AI is not the foundation. Measurement is.

If you would like technical documentation, security review materials, or detailed implementation specifications, we welcome further inquiry.