Byron Spice | Wednesday, June 1, 2022
When Artur Dubrawski and the School of Computer Science’s Auton Lab began studying maintenance of the U.S. Air Force’s F-16 fighter jets more than 15 years ago, they discovered unforeseen failures that spread like disease across the aging fleet.
Sometimes the cause was as simple, yet as hard for humans to detect, as a replacement part interacting with other components in unexpected ways. These issues were almost impossible to spot in the maintenance data until their impact became apparent. Using artificial intelligence tools to analyze the data, Dubrawski’s team provided early warnings of emerging aircraft maladies and helped pinpoint the causes, potentially saving millions of dollars in repair costs and keeping more planes flight ready.
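The article does not describe the lab’s methods in detail, but the general idea of such an early warning can be sketched with a toy rate-change check over fleet maintenance records. The records, component names and threshold below are invented for illustration; this is not the Auton Lab’s actual approach.

    # Toy early-warning check: flag any component whose recent failure count
    # rises well above what its long-run rate in the records would predict.
    from collections import Counter
    import math

    # Hypothetical records: (component, days ago the failure was reported).
    records = [("fuel_pump", 3), ("fuel_pump", 9), ("fuel_pump", 14),
               ("actuator", 40), ("fuel_pump", 200), ("actuator", 310)]
    RECENT_DAYS, HISTORY_DAYS = 30, 365

    recent = Counter(c for c, d in records if d <= RECENT_DAYS)
    history = Counter(c for c, d in records if d <= HISTORY_DAYS)

    for component, past_total in history.items():
        expected = past_total * RECENT_DAYS / HISTORY_DAYS  # if nothing had changed
        observed = recent.get(component, 0)
        z = (observed - expected) / math.sqrt(expected)     # crude Poisson-style score
        if z > 2:
            print(f"Early warning: {component} failed {observed}x recently, ~{expected:.1f} expected")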
This approach achieved similar results with A-10 attack planes, T-38 trainer jets, the U.S. Navy’s V-22 tilt-rotor aircraft and the U.S. Army’s Black Hawk helicopters.
Now, with a new three-year, $10.5 million Army contract, Dubrawski is leading a multi-institution effort to extend the capabilities of these predictive maintenance techniques and eventually apply them to everything from ground vehicles to power generators and possibly the early detection and treatment of human disease. It is the largest single contract ever awarded to the Auton Lab.
"The idea behind this is to take the AI capabilities to the next level," said Dubrawski, Alumni Research Professor of Computer Science in Carnegie Mellon University's Robotics Institute (RI) and director of the Auton Lab.
He leads the project with Kyle Miller, a senior project scientist in RI and co-principal investigator. Stephen Smith, a research professor in RI, is contributing expertise in optimizing maintenance and supply chain logistics. Mario Berges of the Department of Civil and Environmental Engineering is providing expertise on computer models of complex physical systems, known as digital twins, that can be used to analyze equipment’s behavior in a broad range of conditions, including settings too dangerous or too costly for real-world experimentation. The project also includes Katherine Flanigan of civil and environmental engineering, whose work focuses on sensing, computing and actuation technologies that embed smart, digitally controlled cyber-physical systems in physical environments.
These and other researchers from CMU, Georgia Tech Research Institute, the University of South Carolina and the University of California, Davis, will perform fundamental research to address gaps in knowledge and technology that have made broader use of these techniques difficult.
The Pittsburgh-based U.S. Army AI Integration Center (AI2C) will coordinate the application of the developed technology to solve relevant challenges in the practice of military equipment maintenance.
The goal is to improve the ability of the Army and the other military services to deploy AI to solve problems associated with complex devices. These could involve a vast array of both combat and non-combat equipment such as ground vehicles and aircraft. Another possible focus could be power generators, a technology that the Army uses heavily and that is often critical to mission success.
The work will also make these AI approaches more accessible for a wide variety of public and private applications.
For instance, Dubrawski’s team has used the techniques to improve public safety, working with U.S. Customs and Border Protection and Lawrence Livermore National Laboratory to deploy systems at border crossings that reduce false alarms from radiation sensors. Products with elevated levels of natural radioactivity, such as truckloads of bananas and toilet seats, can trigger alarms meant to identify dangerous nuclear materials.
Dubrawski is already collaborating with clinicians in intensive and emergency care and hopes that the new project will help overcome some current limitations in these areas.
"In health care applications of AI, we are analyzing data in a form that is very similar to the data you find in the aircraft industry," Dubrawski said. "Just as every inspection of an aircraft is documented, every patient visit is documented. An aircraft is hardware, software, and electronics. Human patients involve biometric measurements; still, many things are similar."
Yet even large amounts of clinical data may not be enough for AI, because little of it is labeled and, without human guidance, it is not always apparent whether the data represent healthy or sick patients. One priority is thus to develop new, efficient methods of capturing human expertise so that machines can understand contexts that may not be well represented in the available data. That is crucial for applying AI to health care, but it is also important for military equipment maintenance, where an entire generation of veteran maintainers is reaching retirement.
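The article does not spell out how that expertise would be captured, but one common technique consistent with the goal is active learning, in which the model asks an expert to label only the cases it is least sure about. A minimal sketch, using scikit-learn and synthetic data in place of real clinical or maintenance records, and not the project’s actual method:

    # Uncertainty-sampling active learning: query the expert only on the cases
    # the current model is least sure about. Synthetic data; illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))                              # unlabeled cases
    expert_label = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stands in for human judgment

    labeled = [int(expert_label.argmax()), int(expert_label.argmin())]  # tiny seed set, one of each class
    for _ in range(10):
        model = LogisticRegression().fit(X[labeled], expert_label[labeled])
        uncertainty = np.abs(model.predict_proba(X)[:, 1] - 0.5)
        # Ask the expert about the single most ambiguous case not yet labeled.
        labeled.append(next(i for i in np.argsort(uncertainty) if i not in labeled))

    model = LogisticRegression().fit(X[labeled], expert_label[labeled])
    print(f"{len(labeled)} expert labels, accuracy on all cases: {model.score(X, expert_label):.2f}")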
Likewise, new methods are needed for delivering that expertise back to users in ways they will accept. Many physicians will be skeptical of decisions and recommendations made by AI systems, so such systems might need to express their findings as suggestions and explain how they reached a conclusion.
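What such an explanation looks like depends on the model. As one minimal, invented illustration, not drawn from the project, a linear risk score can at least report how much each input pushed the recommendation:

    # Invented illustration: a linear risk score whose suggestion comes with a
    # per-input breakdown of what drove it. Names and weights are made up.
    weights = {"vibration_trend": 1.2, "hours_since_overhaul": 0.4, "oil_particulates": 0.1}
    reading = {"vibration_trend": 1.8, "hours_since_overhaul": 0.9, "oil_particulates": 0.3}

    contributions = {name: weights[name] * reading[name] for name in weights}
    print(f"Suggestion: schedule an inspection (risk score {sum(contributions.values()):.2f}), because:")
    for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        print(f"  {name} contributed {value:+.2f}")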
Some changes may be even more fundamental. Recent progress in AI, for instance, has been based on statistical techniques. While successful, these statistical approaches may limit AI applications, particularly in medicine, Dubrawski said.
"What it produces in the end is some sort of suggestion or decision that is burdened with statistical uncertainty," Dubrawski said. "But say an AI program determines with 95% certainty that you should amputate a patient’s leg. What does a doctor do with that? What about that other 5 percent?"
In this regard, researchers will revisit early work in AI, which was based on logical reasoning.
"Aircraft go through complex certification protocols, but we don’t yet have certification for AI systems," Dubrawski continued. "My hope is that a combination of statistical learning with honest-to-God mathematical logic will help us bridge the gap. We don’t want to be just 95% sure of these things. We prefer to be certain."
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu