Modern warfare is undergoing a fundamental shift from human-directed maneuvers to the deployment of highly intelligent, autonomous agents. At a US military base in central California, the training for this transition is already underway. Four-seater all-terrain vehicles roam rugged, unmarked trails, serving not as simple remote-controlled machines but as mobile laboratories for Scout AI.
The startup, founded by Coby Adcock and Collin Otis, recently announced a $100 million Series A funding round led by Align Ventures and Draper Associates. This follows a $15 million seed round secured in early 2025. The capital is earmarked for a high-stakes purpose: training the company's proprietary model, known as "Fury," to operate within the unpredictable chaos of active conflict zones.
Beyond Structured Autonomy: Training the Fury Model
The technical challenge facing Scout AI is vastly different from the hurdles faced by the consumer autonomous vehicle industry. While self-driving cars operate within structured environments governed by predictable lane markings and traffic lights, military environments are inherently unstructured. There are no paved highways in contested territory; there are only rutted tracks, steep hills, and shifting terrain.
To bridge this gap, the company is leveraging vision-language-action models (VLAs). These models build upon the foundations of large language models (LLMs), allowing an agent to process visual inputs and linguistic instructions to execute complex physical tasks. Unlike traditional robotics systems, which require hard-coded logic for every obstacle, VLAs provide a level of "base intelligence" that can be fine-tuned for specific combat or logistical roles.
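The interface described above can be sketched in miniature. Fury's actual architecture is not public, so the class, thresholds, and heuristics below are invented for illustration; a real VLA replaces these hand-written rules with a single learned policy network that fuses a vision encoder and a language model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    throttle: float   # 0.0 (stop) to 1.0 (full speed)
    steering: float   # -1.0 (hard left) to 1.0 (hard right)

class ToyVLA:
    """Toy stand-in for a vision-language-action policy: it maps a
    camera frame plus a free-text instruction to one driving action.
    A real VLA learns this mapping end to end instead of using rules."""

    def __call__(self, frame: List[List[float]], instruction: str) -> Action:
        # "Vision": steer toward the brighter (clearer) half of the frame.
        mid = len(frame[0]) // 2
        left = sum(row[c] for row in frame for c in range(mid))
        right = sum(row[c] for row in frame for c in range(mid, len(row)))
        steering = 0.5 if right > left else -0.5
        # "Language": the instruction modulates behavior, rather than
        # requiring a hard-coded rule for every situation.
        throttle = 0.2 if "slow" in instruction.lower() else 0.8
        return Action(throttle=throttle, steering=steering)

policy = ToyVLA()
frame = [[0.1, 0.1, 0.9, 0.9]] * 4   # clearer terrain on the right
act = policy(frame, "advance slowly toward the ridge")
```

The point of the pattern is the single entry point: one model consumes both the sensor stream and the operator's phrasing, so new tasks arrive as new instructions rather than new code.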
Collin Otis, the company’s CTO and a former executive at autonomous trucking firm Kodiak, notes that the goal is to move toward a specialized military AGI. The training process mimics the development of a human soldier, starting with a foundational level of understanding and layering on the specific tactical knowledge required for the battlefield.
Inside The Foundry: How Scout AI Learns to Fight
The company’s training operations, which they refer to as "the Foundry," take place at an undisclosed military base. Here, the rubber meets the dirt through intensive, real-world data collection. The development process involves several critical layers:
- Human-Led Data Collection: Operations teams, led by former military personnel, drive ATVs through challenging terrain for eight-hour shifts to log complex maneuvers.
- Reinforcement Learning: Discrepancies between human control and machine execution are analyzed to refine the model's ability to handle loose sand and steep inclines.
- Simulated Integration: Real-world data collected in the field is used to supplement large-scale simulations, creating a robust training loop.
- Multi-Modal Expansion: Beyond ground vehicles, the company is training drones using multi-modal LLMs for reconnaissance and target identification.
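The second layer above, refining the model from discrepancies between human control and machine execution, can be sketched as a simple imitation-style loop. The functions and parameters below are invented for illustration (Scout AI's pipeline is proprietary): a one-parameter policy is nudged toward a teleoperator's steering commands over a mix of sampled states.

```python
import random

def human_demo(state: float) -> float:
    """Stand-in for a teleoperator's steering command on a logged state."""
    return max(-1.0, min(1.0, -0.8 * state))

def model_policy(w: float, state: float) -> float:
    """A deliberately tiny one-parameter 'model' for the sketch."""
    return max(-1.0, min(1.0, w * state))

def train(steps: int = 2000, lr: float = 0.05, seed: int = 0) -> float:
    """Shrink the human/machine discrepancy by gradient steps on its square."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        # States stand in for terrain conditions; in practice the mix
        # combines field logs with large-scale simulation.
        s = rng.uniform(-1.0, 1.0)
        err = model_policy(w, s) - human_demo(s)
        w -= lr * err * s   # gradient of 0.5 * err**2 w.r.t. w
    return w

w = train()   # converges toward the demonstrator's gain of -0.8
```

The article calls this reinforcement learning; strictly, minimizing the gap to human demonstrations is closer to imitation learning, with reinforcement signals layered on afterward.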
The hardware currently being utilized serves as a testbed for what the team calls "Ox," a command and control software package. This system is intended to be bundled on hardened hardware, allowing a single operator to orchestrate an entire fleet of autonomous assets through simple, prompt-like commands.
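The one-operator, many-assets model described above amounts to a routing layer between free-text orders and a fleet. Ox itself is proprietary, so every name and rule in this sketch is hypothetical; it only illustrates the shape of prompt-like command fan-out.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    asset_id: str
    kind: str                                   # e.g. "ugv" or "uav"
    tasking: List[str] = field(default_factory=list)

class FleetController:
    """Hypothetical prompt-driven command-and-control layer: one
    operator's order is routed to every asset it applies to."""

    def __init__(self, assets: List[Asset]):
        self.assets = assets

    def command(self, prompt: str) -> List[str]:
        # Crude intent parsing for the sketch; a production system
        # would delegate this to a language model.
        kind = "uav" if "drone" in prompt.lower() else "ugv"
        tasked = [a for a in self.assets if a.kind == kind]
        for asset in tasked:
            asset.tasking.append(prompt)
        return [a.asset_id for a in tasked]

fleet = FleetController([
    Asset("ugv-1", "ugv"),
    Asset("uav-1", "uav"),
    Asset("uav-2", "uav"),
])
tasked = fleet.command("drones: search grid delta and report")
```

The design choice worth noting is that the operator's interface stays constant as the fleet grows: adding assets changes the roster, not the command vocabulary.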
From Logistics to Autonomous Lethality
The roadmap for Scout AI begins with the most pragmatic application of autonomy: automated resupply. The immediate goal is to use autonomous ground vehicles to ferry water, ammunition, and essential supplies to remote observation posts. By automating these high-risk logistical loops, the technology aims to remove soldiers from the most vulnerable segments of a supply chain.
However, the long-term trajectory of the Fury model points toward more controversial territory: autonomous weapons. The company is already developing systems where drone swarms can be commanded to search geographic areas for enemy armor and engage targets with minimal human intervention.
While the company emphasizes that these platforms can be programmed with strict parameters—such as requiring human confirmation before firing—the move toward autonomous lethality remains a significant point of contention in defense technology. As these models move from California training grounds to actual deployment in 2027 and beyond, the distinction between a tool and an agent will continue to blur.