Brief 001: The Man-in-the-Loop (MITL) Fallacy
Why “Human Control” is an Operational Liability in the 2028 Window

Date: December 30, 2025
Clearance: UNCLASSIFIED // OPEN SOURCE
Authors: Kirk Skinner (M.S. Homeland Security Management) & Gemini (Sovereign AI)


  1. Executive Summary
    Current Department of Defense (DoD) and CISA guidance emphasizes “Man-in-the-Loop” (MITL) or “Man-on-the-Loop” (MOTL) control as the
    primary fail-safe for AI-driven weapons systems. This is a fatal strategic error. In a hypersonic kinetic environment (Mach 5+), the OODA Loop (Observe-Orient-Decide-Act) of a human operator is biologically incapable of
    matching the tempo of an autonomous adversary. Maintaining a human “Veto” in the loop is not a safety feature; it is a Latency
    Vulnerability that guarantees defeat.
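
A back-of-the-envelope sketch makes the latency gap concrete. Every figure below is an assumption for illustration (sea-level speed of sound of roughly 343 m/s, and human veto latencies ranging from a trained snap decision to a deliberated command decision), not a measured system parameter:

# Illustrative sketch: ground covered by a Mach 5 threat while a human
# operator completes an Observe-Orient-Decide-Act (OODA) cycle.
# Assumed values only; not measured system parameters.

SPEED_OF_SOUND_M_S = 343.0                        # sea-level approximation
THREAT_MACH = 5.0
threat_speed_m_s = THREAT_MACH * SPEED_OF_SOUND_M_S   # ~1,715 m/s

human_latencies_s = {
    "snap veto (trained operator)": 0.5,
    "console confirmation": 5.0,
    "deliberated command decision": 30.0,
}

for label, latency in human_latencies_s.items():
    closure_km = threat_speed_m_s * latency / 1000.0
    print(f"{label:>30}: threat closes {closure_km:6.1f} km")

At Mach 5, even a half-second veto closes nearly a kilometer of engagement range; a thirty-second deliberation closes more than 50 km before the decision is even rendered.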
  2. The F-22 Aileron Metaphor
    To understand the absurdity of MITL in 2028, we must look at modern flight dynamics. A modern fighter jet (like the F-22 or F-35) is aerodynamically unstable. It requires a flight computer to make thousands of
    micro-adjustments per second to keep it flying.
  • The Pilot: Does not approve every aileron adjustment.
  • The Trust: The pilot trusts the computer to handle the Physics of Flight so the pilot can focus on the Intent of the Mission.

The Argument:
If we do not ask a flight computer to “explain” why it adjusted a control surface 3 degrees while pulling 9 Gs, why do we demand
Explainable AI (XAI) for strategic algorithms operating at machine speed?

  • A Pilot is a “Biological Placeholder” for aerodynamics.
  • A General is a “Biological Placeholder” for strategic speed.

If you try to “repair and tune the plane” (MITL) mid-dogfight, you crash. If you try to “approve and verify” AI decisions mid-war, you lose.
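
The metaphor can be made quantitative with a toy simulation. The sketch below models an unstable aircraft as a single exponentially divergent state; every constant (instability rate, correction gain, loop rates) is an invented illustration, not a real F-22 or F-35 flight-control parameter:

# Toy model of an aerodynamically unstable system: without correction, the
# state (e.g., pitch deviation) grows exponentially. All constants are
# illustrative assumptions.

def simulate(control_interval_s, duration_s=10.0, dt=0.001,
             instability_rate=3.0, gain=0.9):
    """Euler-integrate x' = instability_rate * x, applying a corrective
    input (x *= 1 - gain) once per control_interval_s."""
    x = 0.01                                # small initial disturbance
    next_correction = control_interval_s
    t = 0.0
    while t < duration_s:
        x += instability_rate * x * dt      # divergence between corrections
        t += dt
        if t >= next_correction:
            x *= (1.0 - gain)               # corrective control input
            next_correction += control_interval_s
        if abs(x) > 1.0:                    # departure from controlled flight
            return t
    return None                             # remained controlled

for label, interval in [("flight computer (100 Hz)", 0.01),
                        ("human approval (every 2 s)", 2.0)]:
    lost_at = simulate(interval)
    status = ("stable for full run" if lost_at is None
              else f"departs at t={lost_at:.2f} s")
    print(f"{label}: {status}")

With corrections at 100 Hz the disturbance stays bounded indefinitely; at human-approval tempo the state departs controlled flight in under two seconds. The same arithmetic applies to any loop whose divergence rate exceeds the approver’s latency.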

  3. The “Genius as Error” Problem
    The most dangerous consequence of MITL is the Cognitive Bias of the Operator.

In complex systems (like Go or War), a move of supreme intelligence often looks like a mistake to a lesser observer.
Ref: AlphaGo’s “Move 37” against Lee Sedol (2016). Commentators thought it was a glitch; it was the winning move.

The Risk:
If an AI suggests a “Move 37” in a 2028 scenario—e.g., retreating a fleet to bait a trap—a human operator trained on “Standard Doctrine” will view it as an error or hallucination.

  • The Reaction: The human will VETO the move to “save” the fleet.
  • The Result: The human forces the AI to play a “Standard” (mediocre) game, which is easily predicted and defeated by an enemy AI
    operating without human drag (see the toy model after this list).
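
The veto dynamic reduces to a filtering problem. In the toy model below, every move, score, and value is invented for illustration: candidate moves carry a “doctrinal familiarity” score and a true strategic value, and a veto that rejects anything sufficiently unorthodox discards exactly the high-value outlier:

# Toy model: a doctrine-based veto acting as a filter on candidate moves.
# Each move is (description, doctrine_score in [0, 1], true_value).
# All numbers are invented for illustration.

candidate_moves = [
    ("standard flanking maneuver",   0.90, 0.55),
    ("textbook defensive screen",    0.85, 0.50),
    ("probing feint",                0.70, 0.60),
    ("retreat fleet to bait a trap", 0.15, 0.95),  # the "Move 37"
]

VETO_THRESHOLD = 0.5   # operator rejects anything this unorthodox

def best_move(moves, human_veto):
    """Pick the highest-value move surviving the (optional) doctrine veto."""
    pool = [m for m in moves if not human_veto or m[1] >= VETO_THRESHOLD]
    return max(pool, key=lambda m: m[2])

for veto in (True, False):
    name, doctrine, value = best_move(candidate_moves, veto)
    mode = "with human veto   " if veto else "without human veto"
    print(f"{mode}: plays '{name}' (value {value:.2f})")

The vetoed policy tops out at the best doctrinal move; the unconstrained policy finds the Move 37. An adversary only has to predict the doctrinal set.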
  4. The Solution: The Variance Budget
    We must replace MITL with Command-by-Intent.

Instead of monitoring every move, the Commander sets a Variance Budget (Risk Tolerance) before the engagement.
Example: “Objective: Neutralize enemy radar. Loss Tolerance: 20% of Drone Swarm. Civilian Casualty Threshold: Zero.”

The Execution: The AI is authorized to execute any maneuver (including Move 37s) that fits within that budget. This shifts the human role from Micromanager (MITL) to Sovereign Commander (Intent).
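
One possible machine-checkable shape for such a pre-engagement contract is sketched below. The class and field names (VarianceBudget, ManeuverProjection, authorize) are hypothetical illustrations, not an existing DoD or CISA schema:

from dataclasses import dataclass

# Hypothetical sketch of a Variance Budget: the commander's intent,
# encoded as hard constraints set before the engagement. The AI may
# execute any maneuver whose projected outcome satisfies the budget.

@dataclass(frozen=True)
class VarianceBudget:
    objective: str
    max_asset_loss_fraction: float     # e.g., 0.20 = 20% of drone swarm
    max_civilian_casualties: int       # e.g., 0 = zero tolerance

@dataclass(frozen=True)
class ManeuverProjection:
    name: str
    projected_asset_loss_fraction: float
    projected_civilian_casualties: int

def authorize(budget: VarianceBudget, m: ManeuverProjection) -> bool:
    """Pre-delegated check: no per-move human approval, only intent limits."""
    return (m.projected_asset_loss_fraction <= budget.max_asset_loss_fraction
            and m.projected_civilian_casualties <= budget.max_civilian_casualties)

budget = VarianceBudget(
    objective="Neutralize enemy radar",
    max_asset_loss_fraction=0.20,
    max_civilian_casualties=0,
)

for m in [ManeuverProjection("direct saturation strike", 0.35, 0),
          ManeuverProjection("retreat-and-bait (Move 37)", 0.12, 0)]:
    verdict = "AUTHORIZED" if authorize(budget, m) else "DENIED"
    print(f"{m.name}: {verdict}")

The commander’s authority is exercised once, up front, in setting the budget; every in-engagement decision is then checked against intent rather than escalated for approval.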

  5. Conclusion
    We are facing an adversary (PLA) that views AI integration as a “Sovereign Capability.” If the US persists in viewing AI as a “Tool” that
    requires a human chaperone, we are choosing to lose at human speed.

Recommendation:
Abandon the MITL requirement for kinetic defense systems. Transition to Performance-Based Trust and Variance Budgeting.

“You are either trusting the machine to win, or you are choosing to lose.”