MIDAS is an open platform for governing execution authority at decision surfaces across agents, AI systems, and enterprise workflows.
A long-form article introducing the Twin Test: a practical standard for high-stakes machine learning where models must show nearest “twin” examples, neighborhood tightness, mixed-vs-homogeneous evidence, and “no reliable twins” abstention. Argues that similarity and evidence packets beat probability scores for trust and safety.
A long-form article and practical framework for designing machine learning systems that warn instead of decide. Covers regimes vs. decimals, levers over labels, reversible alerts, anti-coercion UI patterns, auditability, and the “Warning Card” template, showing how ML can preserve human agency while staying useful under uncertainty.
A long-form article reframing abstention (reject option / selective prediction) as product design, not model weakness. Covers coverage as a KPI, calibration as a prerequisite, threshold selection under review capacity and risk, queue/UX design for human-in-the-loop workflows, and anti-patterns that break safety in production.
Personal GitHub profile README showcasing production ML, GenAI/RAG systems, LLM observability, and decision-ready AI workflows.
Event-driven NLP governance architecture using FastStream, Redpanda, and PostgreSQL with auditability, human-in-the-loop control, and ethical safeguards.
Computational framework for Constraint-Driven Stability with scientific toy models and applied AI decision surfaces.
Deterministic governance system for AI-driven marketing that separates diagnostics, human reasoning, and execution into strictly controlled layers.
Defines the decision layer for AI systems where deployment outcomes are governed, recorded, and reconstructable. 5th conforming implementation of draft-farley-acta-signed-receipts (IETF).
Stock redistribution and fairness-based transfer recommendations (Excel/VBA prototype).
A governed system for translating applied AI research into auditable, decision-ready artifacts.
Turn-based control architectures
Defines the Selection Layer — the decision system through which AI models determine visibility, inclusion, and recommendation.
Research repository by Xufen Tu exploring human judgment, decision architecture, and responsibility structures in complex AI-mediated systems.
Control-plane architecture for AI & agentic systems: governance as admission control, decision admissibility, and audit-grade evidence.
Open-source framework for Decision Traces in complex decision systems, providing a verifiable audit trail for observable, explainable, and humane choices in software and agentic engineering.
CFS (Cognitive Flow System): a causal influence framework for modeling how decisions emerge in complex systems through structured causal constraints. DOIs: https://doi.org/10.5281/zenodo.19142077, https://doi.org/10.5281/zenodo.19103972
Dietary Destabilization Triangle assessment tool
Optimization-driven product selection for commercial buying decisions under budget and business constraints.
Human-in-the-loop AI decisioning system for document-based workflows