U.S. Air Force Requests $6 Billion to Fund *Armed* Artificial Intelligence Drones | The Gateway Pundit | by Brian Lupo

Yesterday, The Gateway Pundit reported that the NYPD will be utilizing manned drones to respond to complaints about large parties and backyard BBQs over the extended Labor Day weekend.  Coincidentally, New York City Health also recommended residents mask up during the holiday weekend.  It is a voluntary recommendation.  For now.

New York Police to Use Drones to Monitor Backyard Labor Day Parties

 

While the NYPD is utilizing manned drones to spy on it’s residents (just wait for the AI surveillance drones in NYC), the US Air Force is requesting $6 billion for armed unmanned drones piloted not by humans, but by AI (artificial intelligence), although there will be the ability to have human oversight.  The funding would help develop the XQ-58A Valkyrie from Kratos Defense & Security Solutions, a 30-foot long aircraft that weighs in at 2,500lbs and is capable of carrying up to 1,200lbs of ordinance.

Engadget.com reports:

The Valkyrie comes from Kratos Defense & Security Solutions as part of the USAF’s Low Cost Attritable Strike Demonstrator (LCASD) program. The 30-foot uncrewed aircraft weighs 2,500 pounds unfueled and can carry up to 1,200 total pounds of ordinance. The XQ-58 is built as a stealthy escort aircraft to fly in support of F-22 and F-35 during combat missions, though the USAF sees the aircraft filling a variety of roles by tailoring its instruments and weapons to each mission. Those could includes surveillance and resupply actions, in addition to swarming enemy aircraft in active combat.

In a Press Release earlier this month, Eglin Air Force Base celebrated a 3-hour sortie using the Valkyrie.  According to Col. Tucker Hamilton, the Air Force AI Test and Operations chief and commander of the 96th Operations Group:

“The mission proved out a multi-layer safety framework on an AI/ML-flown uncrewed aircraft and demonstrated an AI/ML agent solving a tactically relevant ‘challenge problem’ during airborne operations.  This sortie officially enables the ability to develop AI/ML agents that will execute modern air-to-air and air-to-surface skills that are immediately transferrable to the CCA program.”

In June, Col. Hamilton allegedly “misspoke” at a presentation at the Future Combat Air and Space Summit in London, as reported by The Gateway Pundit.  Col. Hamilton had claimed that an AI-operated drone employed “highly unexpected strategies” to achieve its mission objectives during a simulated combat scenario.  The AI drone perceived its human operator as a threat to it’s mission because it was being overridden:

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” explained Col. Hamilton. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

The AI system was trained not to harm its operator, so instead it began to target the communication tower used by the operator to communicate with the drone.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Despite the deliberate verbiage that suggests this was conducted in a simulation (or even in a real-world test), Col. Hamilton later said “We’ve never run that experiment” and clarified that the USAF has neither tested weaponized AI systems in the manner described in neither real-world nor simulated environments.

 

Facebook
Twitter
LinkedIn
Telegram
Tumblr