The second lieutenant moves his platoon up a densely wooded hill. He is grimly aware that he may be ambushed from above, but his orders are to secure the ridge with minimal casualties. He orders one squad to advance and make contact with the smallest element possible, keeping two rifle squads and a weapons squad out of direct fire contact to preserve his options. Suddenly, the platoon leader hears the crack of machine gun and small arms fire from above. His Integrated Visual Augmentation System (IVAS) sensors reveal enemy forces concealed in the darkness, while also triangulating their locations from the aimpoints of the soldiers in contact, whose weapons are fully instrumented. Through an interactive map symbology display linked to his IVAS, he notifies his chain of command that his element has been engaged and from which direction. It’s time for a decision: direct his outlying squads to outflank the ambush, withdraw, or call for supporting fires. Knowing fire support is available, he decides to combine the latter two courses. He gives the order to pull back and issues a call for fire, automatically transmitting the enemy’s grid coordinates to the fire support team. The company’s unmanned aerial vehicle indicates the enemy is retreating. He orders a renewed advance.
Is this scenario real or science fiction? Neither. It’s a training simulation of what the U.S. believes the next war will look like. The Department of Defense is exploring how new technologies will dramatically improve the way the U.S. military prepares for tomorrow’s battlefields. The result will be a synthetic training environment that leverages next-generation artificial intelligence, augmented reality, and virtual reality to heighten situational awareness, command and control, and combat power.