Shadow Warriors Pursuing Next-Gen Surveillance Tech
The U.S. Special Operations Command and the Intelligence Advanced Research Projects Activity are pursuing new technologies to identify and track threats.
Commandos rely on this type of capability when attacking terrorist groups and performing other critical missions.
“Intel drives operations,” SOCOM Commander Richard Clark said during a recent Senate Armed Services Committee hearing. “In order to compete more effectively in the future, we need to modernize both our precision strike and our ISR … so [special operators] can quickly see and sense the battlefield where they may have to fight.”
Encrypted communications and electronic warfare capabilities are also crucial to protecting forces, he noted.
SOCOM’s Program Executive Office for Special Reconnaissance is responsible for pursuing these types of technologies.
“The mission of the office is to direct the rapid and focused acquisition of state-of-the-art sensors and related command-and-control, deployment, recovery and specialized communication systems across all domains, to enable full situational awareness for special operations forces,” PEO David Breed said in an email to National Defense.
The technology portfolio spans technical collection and communications, including hostile forces tagging, tracking and locating; friendly force tracking; tactical video systems for reconnaissance, surveillance and target acquisition; and remote advise-and-assist kits.
It also includes integrated air, maritime and ground sensor systems; processing, exploitation and dissemination of signals intelligence data; sensitive site exploitation with biometric, forensic and intelligence analysis capabilities; and the leveraging of national space capabilities.
“We’re really looking at operations in close-proximity and non-permissive environments,” Breed told last year’s virtual Special Operations Forces Industry Conference, run by the National Defense Industrial Association on behalf of SOCOM.
Breed’s top three technology development priorities are unattended ground sensors, flexible tactical RF systems and collaborative autonomous platforms, he said in the email.
“While reducing the size, weight and power requirements of unattended ground sensors will always be a focus, the key to modernization will be increasing on-board processing power, integrating alternative communication paths and improving interoperability with different sensor networks,” he said.
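The onboard-processing and alternative-communications ideas can be illustrated with a toy sketch. Everything below (the node logic, threshold values and link names) is hypothetical for illustration, not drawn from any fielded system.

```python
# Toy sketch of an unattended ground sensor node: process detections on
# board so only meaningful events are transmitted, and fall back across
# communication paths when the preferred link is down. (Illustrative only.)

def transmit(report, links):
    """Try each communication path in priority order; return the one used."""
    for name, is_up in links:
        if is_up:
            return name  # the report would go out over this link
    return None  # hold the report until a link recovers

def on_detection(raw_magnitude, threshold, links):
    """Filter events on board; weak signals are discarded to save power."""
    if raw_magnitude < threshold:
        return None  # processed and dropped locally, nothing transmitted
    return transmit({"magnitude": raw_magnitude}, links)

# Hypothetical link table: primary satcom is down, mesh radio is available.
links = [("satcom", False), ("mesh_radio", True)]
print(on_detection(7.2, threshold=5.0, links=links))  # falls back to mesh_radio
print(on_detection(2.0, threshold=5.0, links=links))  # filtered on board: None
```

The point of the sketch is that filtering happens at the node, so the scarce communication links carry only events worth an operator's attention.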
Such technology can help commandos gather critical information without having to put “boots on the ground” in dangerous or remote locations. It can also make it easier to advise foreign partners without SOF personnel standing side by side with them on the front lines, Breed said.
To enhance communications, small tactical RF systems need to become more flexible, with not only software-defined radios but also frequency-flexible antennas and modularity across platforms and domains, he noted. SOCOM uses its radios to transmit imagery as well as voice and text communications.
PEO Special Reconnaissance is also looking beyond today’s remotely operated intelligence-gathering systems toward collaborative autonomous platforms.
“Autonomy is crucial to the ability to operate in contested environments where traditional communication and navigation solutions may be challenged,” Breed said. “Collaborative autonomy allows unmanned platforms to operate on the basis of a shared understanding of the environment, without active operator control, in these contested environments.”
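The “shared understanding of the environment” idea can be sketched in a few lines: several platforms merge their local observations into one common picture and task themselves against it without an operator in the loop. The grid, platform names and logic here are entirely made up for illustration.

```python
# Conceptual sketch of collaborative autonomy: unmanned platforms pool their
# observations into a shared world model and pick uncovered areas themselves.
# (Illustrative only; not based on any SOCOM system.)

def merge_observations(platform_reports):
    """Union each platform's observed grid cells into a shared world model."""
    shared = {}
    for platform, cells in platform_reports.items():
        for cell, status in cells.items():
            shared[cell] = status  # later reports overwrite earlier ones
    return shared

def next_unexplored(shared, area):
    """Return the cells no platform has covered yet, in search order."""
    return [cell for cell in area if cell not in shared]

# Two hypothetical platforms report what they have seen so far:
reports = {
    "uav_1": {(0, 0): "clear", (0, 1): "clear"},
    "ugv_2": {(1, 0): "obstacle"},
}
area = [(0, 0), (0, 1), (1, 0), (1, 1)]
shared = merge_observations(reports)
print(next_unexplored(shared, area))  # only (1, 1) remains unexplored
```

Because each platform works from the same merged model, none of them wastes effort re-covering a cell a teammate already observed, and no operator has to assign the gaps.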
The Special Operations Command has high hopes that artificial intelligence and machine learning capabilities will help reduce manpower requirements for deploying robotic platforms.
“Today you have an operator who is integrated one-on-one with an unmanned aerial system, and it takes him completely out of the fight while he maneuvers it,” said James Smith, SOCOM’s acquisition executive.
“We are improving ISR from … unattended ground sensors [to] unmanned aerial systems,” he added. “The problem is that each of these sensors takes an operator offline. So how do we use artificial intelligence and machine learning to make these sensors interact autonomously and provide feedback to an operator, to enable that force to maneuver on the target?”
Autonomous unmanned aerial vehicles or ground robots equipped with AI could be used to clear areas such as buildings or tunnels, freeing SOF maneuver forces to be much more efficient and effective on the battlefield while pursuing their missions, he noted.
The Special Operations Command also wants to build on the machine learning capabilities demonstrated by Project Maven, which uses the technology to sort through the flood of video collected by drones in war zones such as Afghanistan and identify items of interest, Clark said. The technology helped separate the wheat from the chaff and greatly eased intelligence processing, exploitation and dissemination.
“We can now collect … terabytes’ worth” of data, he said during a panel discussion hosted by the Hudson Institute. “A human cannot sort and sift through this in sufficient detail, or fast enough, to get to the relevant information. So I think that’s an important part of what … Project Maven has invested in, with object detection, so that people only do those things that people have to do, and we try to get machines to do all these other things.”
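The “let machines do the sorting” pattern behind Project Maven can be sketched simply: run a detector over every frame and surface only those worth an analyst’s time. The detector output below is simulated, since a real system would run a trained model over each frame; the labels and thresholds are illustrative.

```python
# Minimal sketch of ML-assisted video triage in the spirit of Project Maven.
# A real detector (e.g., a trained CNN) would produce the per-frame
# detections; here they are hard-coded for illustration.

from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    detections: list  # (label, confidence) pairs from the detector

def triage(frames, labels_of_interest, min_confidence=0.8):
    """Keep only frames where the detector saw something worth human review."""
    flagged = []
    for frame in frames:
        hits = [(label, conf) for label, conf in frame.detections
                if label in labels_of_interest and conf >= min_confidence]
        if hits:
            flagged.append((frame.frame_id, hits))
    return flagged

# Simulated detector output for a short clip:
frames = [
    Frame(0, [("tree", 0.95)]),
    Frame(1, [("vehicle", 0.91), ("person", 0.55)]),
    Frame(2, []),
    Frame(3, [("person", 0.88)]),
]

flagged = triage(frames, labels_of_interest={"vehicle", "person"})
print(flagged)  # only frames 1 and 3 reach an analyst
```

Even in this toy form, half the clip never reaches a human, which is the core of the manpower savings Clark describes.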
The ability to speed up SOCOM’s targeting cycle without requiring hundreds of analysts to pore over intelligence is crucial, he added.
Meanwhile, the Intelligence Advanced Research Projects Activity, also known as IARPA, has a new program to develop next-generation surveillance capabilities for the national security community.
The organization, which falls under the Office of the Director of National Intelligence, invests in high-risk research efforts that seek to overcome some of the most difficult technical challenges facing U.S. spy agencies.
The Biometric Recognition and Identification at Altitude and Range program, or BRIAR, aims to cultivate new software-based algorithm systems capable of performing “whole body” biometric identification from drones and other platforms.
“Many intelligence community and Department of Defense agencies require the ability to identify or recognize individuals in challenging scenarios, such as at long range, … through atmospheric turbulence, or from elevated and/or aerial sensor platforms,” according to the IARPA program description. “Expanding the range of conditions in which accurate and reliable biometric identification can be performed would greatly improve the number of missions that can be addressed, the types of platforms and sensors from which biometric data can be reliably used, and the quality of outcomes and decisions.”
The technology’s missions could include counterterrorism, force protection, critical infrastructure protection and border security, said program manager Lars Erickson.
The quality of imagery collected by drones and other elevated surveillance platforms is often hampered by a number of factors that make biometric recognition difficult, he said during a presentation to industry.
Atmospheric turbulence is a major problem the agency hopes to overcome through the BRIAR program. “It introduces blurring, distortion and intensity fluctuations due to dynamic changes in the air along … the optical path between the target and the sensor,” Erickson explained.
The use of “probe” video imagery, footage of an unknown subject that must be matched against enrolled references, is another obstacle, he noted.
“In that case, you have different problems present in the imagery,” he said. “You have a very brief look at the subject of interest. There are severe viewing angles, high pitch angles, and of course there are motion and resolution challenges. All of that makes it difficult to produce an accurate and reliable match.”
While face recognition, including long-range and “unconstrained” face recognition, is a key capability of interest, the intelligence community needs “whole body” biometrics, he noted.
“We rely on face recognition now,” Erickson said. “That’s not surprising. Face recognition has made significant progress in the last few years, but there has been a benefit, or a desire, to be able to use additional biometric signatures or information in a scene that can augment or inform or fuse with face [recognition] to improve the reliability and accuracy of these matches.”
That could include detecting and analyzing body shape, movement, measurements or other aspects of the human form for recognition, identification or verification purposes.
For example, drones could observe a group of people walking through an area while analysts try to pick out individuals of interest using various indicators.
“Movement, gait, maybe body shape or anthropometric information … if you can use or extract that, it promises to improve biometric matching,” he said.
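One simple way such signatures can fuse with face recognition is score-level fusion: each modality produces its own match score, and the scores are combined. The modalities, weights and numbers below are illustrative assumptions, not BRIAR’s actual approach.

```python
# Illustrative score-level fusion of biometric modalities. All modality
# names and weights here are hypothetical.

def fuse_scores(modality_scores, weights):
    """Combine per-modality match scores (0..1) into one weighted score,
    skipping modalities that produced no score (e.g., face not visible)."""
    total, weight_sum = 0.0, 0.0
    for modality, score in modality_scores.items():
        if score is None:
            continue  # modality unavailable in this observation
        w = weights.get(modality, 0.0)
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

weights = {"face": 0.6, "gait": 0.25, "body_shape": 0.15}

# At long range the face may be unusable, so gait and body shape carry
# the match on their own:
scores = {"face": None, "gait": 0.82, "body_shape": 0.74}
print(round(fuse_scores(scores, weights), 3))  # 0.79
```

Renormalizing over the available modalities is what lets the system still produce a usable score when one signature, like the face, is degraded or missing.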
A capability known as re-identification, or ReID, is also on the wish list. This involves systems that can identify the color and shape of an individual’s clothing, as well as their gender, age, hairstyle and items they may be carrying, such as backpacks.
“ReID is the problem of trying to match other observations of a person across different camera networks. Where else have you seen this person?” Erickson explained. “This work is a very hot topic in computer vision. There is a lot of activity here, driven mainly by use cases around smart-city technology and public safety.”
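At its core, ReID compares appearance embeddings of person detections across cameras. This toy sketch uses made-up vectors in place of a trained network’s output, and the camera names and threshold are illustrative.

```python
# Toy sketch of re-identification (ReID): compare appearance embeddings
# across camera networks via cosine similarity. Real systems obtain the
# embeddings from a trained network; these vectors are invented.

import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def reidentify(query, gallery, threshold=0.9):
    """Return the cameras whose stored embedding matches the query person."""
    return [cam for cam, emb in gallery.items()
            if cosine_similarity(query, emb) >= threshold]

query = [0.9, 0.1, 0.4]              # embedding of the person of interest
gallery = {
    "camera_A": [0.88, 0.12, 0.41],  # nearly identical appearance
    "camera_B": [0.1, 0.9, 0.2],     # clearly different appearance
}
print(reidentify(query, gallery))  # ['camera_A']
```

The “where else have you seen this person” question then reduces to a nearest-neighbor search over embeddings from every camera in the network.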
To be successful, BRIAR needs to refine fused multimodal biometric signatures such as whole-body identification, build on unconstrained face recognition capabilities, and collect large amounts of relevant data, Erickson said.
The program’s desired “outcomes” include: matching imagery at long range (100 to 1,000 meters); matching at challenging view angles (20 to 50 degrees of pitch); atmospheric turbulence mitigation; templates built from multiple video frames; body and face localization in motion video; cross-view, whole-body matching both indoors and outdoors; robustness to incomplete or occluded views; and multimodal fusion, according to Erickson’s slides.
Solutions must be agnostic to sensor platforms and optics; adaptable to edge processing and real-time streaming; accurate across diverse demographics and body shapes; invariant to pose, illumination, expression and clothing changes; and adaptable or transferable for use in different platform-specific environments.
“The [technology] assessment will be performed on aggregated evaluation sets that contain imagery of subjects across a wide range of sensors and platforms,” Erickson said. “That is fundamentally how we will evaluate the statistical performance of these algorithms. So they have to be agnostic, or at least robust, to the types of sensor platforms and optics that will be used during testing.”
The four-year program is expected to begin in the third or fourth quarter of fiscal year 2021. IARPA hopes to transition the technology to other government agencies once the project is completed. Its customers include the CIA and other intelligence agencies, the U.S. military and the Department of Homeland Security.
Historically, about 70 percent of completed IARPA research programs have successfully transitioned to government partners, according to the agency.