AI Co-pilot Enhances Flight Safety with MIT’s Air-Guardian

Scientists at MIT have developed Air-Guardian, a deep learning system that works alongside human pilots to improve flight safety. This AI co-pilot, built on liquid neural networks (LNNs), intervenes when human pilots overlook critical situations, helping to prevent incidents.

Enhancing Flight Safety with Air-Guardian

Air-Guardian utilizes a unique approach to enhance flight safety. It constantly monitors both the attention of the human pilot and the AI’s focus, identifying instances where the two are not aligned. When the human pilot fails to recognize a critical aspect, the AI system steps in and takes control of that specific flight element, ensuring safety while maintaining the pilot’s control.

“The idea is to design systems that can collaborate with humans. In cases when humans face challenges in order to take control of something, the AI can help. And for things that humans are good at, the humans can keep doing it,” said Ramin Hasani, AI scientist at MIT CSAIL and co-author of the Air-Guardian paper.

For example, when an airplane is flying close to the ground, the AI can take over if sudden g-forces threaten to cause the pilot to lose consciousness. In situations where the human pilot is overwhelmed with excessive information on the screens, the AI can sift through the data and highlight critical information the pilot may have missed.

Air-Guardian employs eye-tracking technology to monitor human attention, while heatmaps indicate where the AI system’s attention is directed. When a divergence between the two is detected, Air-Guardian evaluates whether the AI has identified an issue that requires immediate attention.
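The paper does not spell out how the divergence check is computed, but the idea can be sketched as comparing the pilot's gaze map with the AI's attention heatmap over the same visual field. The sketch below (assumed, not the authors' implementation) normalizes both maps into distributions and uses total variation distance as the divergence score; the function names and the intervention threshold are illustrative.

```python
import numpy as np

def attention_divergence(pilot_gaze: np.ndarray, ai_heatmap: np.ndarray) -> float:
    """Divergence in [0, 1] between two attention maps over the same visual field.

    Both maps are normalized to sum to 1 so they can be compared as
    probability distributions; 0 means fully overlapping attention,
    1 means completely disjoint attention.
    """
    p = pilot_gaze / pilot_gaze.sum()
    q = ai_heatmap / ai_heatmap.sum()
    # Total variation distance between the two distributions.
    return 0.5 * np.abs(p - q).sum()

def guardian_should_intervene(pilot_gaze, ai_heatmap, threshold=0.6):
    """Flag a candidate intervention when attention diverges past a threshold."""
    return attention_divergence(pilot_gaze, ai_heatmap) > threshold
```

In practice such a score would only trigger a closer evaluation, as the article describes: the system still checks whether the AI's focus corresponds to a genuinely safety-relevant cue before taking control.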

The Power of Liquid Neural Networks (LNN)

Air-Guardian’s effectiveness stems from its use of Liquid Neural Networks (LNNs), developed by MIT CSAIL. LNNs have proven to be highly effective in different domains, particularly in applications that require efficient and explainable AI systems.

Unlike traditional deep learning models that are often seen as “black boxes” due to their lack of transparency, LNNs offer explainability. Engineers can delve into the decision-making process of the model, making it more suitable for safety-critical applications. LNNs can also learn causal relationships within their data, making them more robust in real-world settings.

“For safety-critical applications, you can’t use normal black boxes because you need to understand the system before you can use it. You want to have a degree of explainability for your system,” Hasani explained.

Furthermore, LNNs are highly compact, requiring fewer computational units or “neurons” compared to traditional deep learning networks. This compactness enables LNNs to operate on devices with limited processing power and memory, making them suitable for edge computing applications such as self-driving cars, drones, robots, and aviation.

In a previous study, the MIT CSAIL team demonstrated that an LNN with only 19 neurons could perform a task that typically required 100,000 neurons in a classic deep neural network. This compactness is particularly important for edge devices, where compute resources are limited.
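The "liquid" in LNNs refers to neurons whose time constants change with their inputs. A minimal sketch of one explicit-Euler step of a simplified liquid time-constant (LTC) layer is shown below; the update follows the general LTC form dx/dt = -(1/tau + f)·x + f·A, where the input-dependent gate f modulates both the decay rate and the drive toward a bias vector A. All weight names and sizes here are illustrative, not taken from the Air-Guardian system.

```python
import numpy as np

def ltc_step(x, inputs, W_in, W_rec, A, tau=1.0, dt=0.02):
    """One explicit-Euler step of a simplified liquid time-constant layer.

    x      : current hidden state, shape (n,)
    inputs : external input vector, shape (m,)
    W_in   : input weights, shape (n, m)
    W_rec  : recurrent weights, shape (n, n)
    A      : per-neuron bias vector the gated dynamics drive toward, shape (n,)
    """
    # Input-dependent gate in (0, 1): this is what makes the
    # effective time constant "liquid" rather than fixed.
    f = 1.0 / (1.0 + np.exp(-(W_in @ inputs + W_rec @ x)))
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt
```

Because each neuron's dynamics carry this extra expressive power, far fewer units are needed than in a conventional network with static activations, which is what enables results like the 19-neuron controller above.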

Future Applications and Potential

The development of Air-Guardian and the insights gained from using LNNs can be applied to other scenarios where AI assistants collaborate with humans, from routine software assistance to high-stakes tasks like automated surgery and autonomous driving, where human-AI interaction plays a crucial role.

LNNs also have the potential to contribute to the growing trend of autonomous agents, such as virtual CEOs capable of decision-making and explanation. Their universal signal processing capabilities make them versatile in handling various types of input data, ranging from video and audio to text and user behavior.

“Liquid neural networks are universal signal processing systems. It doesn’t matter what kind of input data you’re serving, whether it’s video, audio, text, financial time series, medical time series, user behavior… Anything that has some notion of sequentiality can go inside the liquid neural network,” Hasani explained.

The current state of LNNs can be compared to the transformative impact of the "transformer" paper in 2017, which laid the foundation for large language models like ChatGPT. LNNs have the potential to bring powerful AI systems to edge devices like smartphones and personal computers, ushering in a new wave of AI advancements.

“This is a new foundation model… A new wave of AI systems can be built on top of it,” Hasani highlighted.
