High-tech applications in intelligent cockpit human-machine interfaces (HMIs).

As the core interaction system connecting driver and vehicle, the intelligent cockpit human-machine interface (HMI) is continually absorbing high-tech applications that greatly improve the driving experience and safety. From personalized services powered by artificial intelligence and deep learning, to immersive augmented- and virtual-reality experiences, to the centralized management of multi-robot operation control units (MOCUs), these technologies not only optimize the driving process but also lay the foundation for future autonomous driving and intelligent transportation. This article delves into the high-tech applications of smart cockpit HMIs, analyzes their key role in improving driving efficiency, reducing cognitive burden, and enhancing driving safety, and looks ahead to the future development trends of smart cockpits.

Today, smart cockpits have become a focus of the automotive field thanks to their striking "smart" characteristics. The core technology behind this intelligence can be traced back to early military fighter jets. Now these advanced technologies derived from fighter aviation shine in the intelligent cockpit's human-machine interface (HMI), mainly in the following areas:

1. Artificial intelligence and deep learning

Artificial intelligence (AI) and deep learning are being widely integrated into smart cockpit HMIs, greatly improving drivers' decision-making and task execution. For example, cognition-based features such as adaptive automation enable efficient multitasking, significantly improving task completion in complex environments. By learning from large amounts of data, these technologies can intelligently predict drivers' needs and provide personalized services and suggestions, delivering a more convenient and efficient driving experience.

1. Personalization

Automatic seat and mirror adjustment: After identifying the driver, deep learning algorithms analyze the driver's previous seat position, mirror angles, and other setting preferences, and automatically adjust them to the most comfortable state when the driver enters the vehicle. For example, when driver A gets into the car, the system automatically adjusts the seat to a specific height, tilt, and fore-aft position based on that driver's history, while setting the mirrors to the appropriate angle, providing a personalized driving environment.

(Pictured: After the driver enters the vehicle, the seats and mirrors automatically adjust. Source: Subaru official website)
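
To make the idea concrete, here is a minimal Python sketch of profile-based adjustment, assuming a hypothetical profile store keyed by driver identity; the field names and values are illustrative, not any vendor's actual interface:

```python
# Hypothetical sketch: applying stored cabin preferences once a driver is
# identified. Profile fields and ranges are invented for illustration.

DRIVER_PROFILES = {
    "driver_a": {"seat_height_mm": 310, "seat_tilt_deg": 12, "mirror_angle_deg": -4},
    "driver_b": {"seat_height_mm": 285, "seat_tilt_deg": 9,  "mirror_angle_deg": -2},
}

DEFAULT_PROFILE = {"seat_height_mm": 300, "seat_tilt_deg": 10, "mirror_angle_deg": 0}

def apply_preferences(driver_id: str) -> dict:
    """Return the cabin settings to apply for a recognized driver,
    falling back to neutral defaults for unknown drivers."""
    return DRIVER_PROFILES.get(driver_id, DEFAULT_PROFILE)
```

In a real system the profiles would be learned from adjustment history rather than stored as fixed values, but the lookup-and-apply pattern is the same.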


Music and entertainment recommendations: The system analyzes the driver's playback history, listening habits, and current emotional state (which can be judged by technologies such as facial expression recognition) to recommend suitable music, radio programs, or audiobooks. For example, if it detects that a driver typically listens to news broadcasts during the weekday morning rush hour, it automatically plays the news channel during that period; if facial expression recognition indicates that the driver is relaxed, it may recommend soothing music.

2. Intelligent voice assistant

  1. Natural language understanding: Using deep learning, intelligent voice assistants can accurately understand the driver's natural-language instructions, including complex questions and requests. For example, a driver can say, “Find the highest-rated coffee shop nearby and navigate there,” and the assistant understands the intent, finds a matching coffee shop, and plans the best route by connecting the in-car navigation system with an online review platform.
  2. Multi-turn dialogue: The assistant can hold multi-turn conversations, understanding the driver's needs and providing coherent service based on context. For example, the driver first asks, “What's the weather like today?” After the assistant answers, the driver asks, “What about tomorrow?” The assistant understands this as a follow-up about the weather and gives tomorrow's forecast.
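
The context carry-over described above can be sketched as follows; the keyword-based intent matching is a deliberate simplification standing in for a trained language-understanding model, and the canned answers are invented:

```python
# Minimal sketch of context carry-over in a multi-turn weather dialogue.
# A production assistant would use a trained NLU model, not keyword matching.

class DialogueContext:
    def __init__(self):
        self.last_topic = None

    def answer(self, utterance: str) -> str:
        text = utterance.lower()
        if "weather" in text:
            self.last_topic = "weather"
            return "Sunny today, 22°C."
        if "tomorrow" in text and self.last_topic == "weather":
            # Follow-up question: reuse the previous topic instead of
            # asking the driver to repeat it.
            return "Tomorrow: light rain, 18°C."
        return "Sorry, could you rephrase that?"
```

The key point is that the second question is only answerable because the assistant remembers the topic of the first one.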

3. Driving behavior analysis and safety warning

  1. Driving style recognition: By analyzing the driver's acceleration, braking, steering, and other operating data, deep learning algorithms can identify the driver's driving style, such as aggressive, smooth, or cautious. The system can then adjust the vehicle's safety settings and reminders accordingly. For example, for aggressive drivers, the system may issue speeding warnings earlier and more frequently, and provide stronger brake assistance in an emergency.
  2. Fatigue and distraction detection: Using cameras to monitor the driver's facial expressions, eye state, and head movements, deep learning models can determine whether the driver is tired or distracted. If signs of fatigue are detected, such as frequent blinking, yawning, or head drooping, the system emits audible and visual cues reminding the driver to rest. If the driver is distracted, for example looking at a phone or talking with passengers excessively, the system also alerts them in time, improving driving safety.

(Pictured: When the driver is tired, the system sends out a reminder. Source: Subaru official website)
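
As a rough illustration of driving-style recognition, the rule-based sketch below classifies a style from two aggregate statistics and scales the warning lead time accordingly; real systems would train models on raw CAN-bus time series, and these thresholds and timings are invented:

```python
# Illustrative rule-based stand-in for driving-style classification.
# Thresholds are invented for the sketch, not calibrated values.

def classify_style(mean_accel_ms2: float, hard_brakes_per_100km: float) -> str:
    if mean_accel_ms2 > 2.5 or hard_brakes_per_100km > 8:
        return "aggressive"
    if mean_accel_ms2 < 1.0 and hard_brakes_per_100km < 2:
        return "cautious"
    return "smooth"

def warning_lead_time_s(style: str) -> float:
    """Aggressive drivers get earlier speeding warnings."""
    return {"aggressive": 3.0, "smooth": 2.0, "cautious": 1.5}[style]
```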

4. Intelligent navigation and route planning

  1. Real-time traffic forecasting: Combined with deep learning algorithms and big data analysis, the in-vehicle navigation system can predict traffic flow and road conditions changes in different road sections, providing drivers with optimal route planning. For example, during rush hour, the system can avoid congested road sections and choose faster routes based on historical traffic data and real-time traffic information.
  2. Destination forecasting: By analyzing drivers’ daily travel habits and historical destinations, deep learning models can predict drivers’ possible destinations. When the driver starts the vehicle, the system can automatically provide several possible destination options, making it easy for the driver to quickly select and start navigation.
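
Destination forecasting of this kind can be approximated with simple frequency counting over trip history, as in this sketch; the history tuple format is hypothetical, standing in for whatever trip log a real system would keep:

```python
# Sketch: predict likely destinations from trip history by matching
# day-type and hour of departure.
from collections import Counter

def predict_destinations(history, weekday: bool, hour: int, top_n: int = 3):
    """history: list of (weekday_flag, hour, destination) trips.
    Returns the most frequent destinations for similar times, most common first."""
    matches = [dest for (wd, h, dest) in history
               if wd == weekday and abs(h - hour) <= 1]
    return [dest for dest, _ in Counter(matches).most_common(top_n)]
```

A deep-learning version would replace the frequency count with a learned model, but the interface, past trips in, ranked candidate destinations out, is the same.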

5. Vehicle fault diagnosis and prediction

  1. Fault detection: Using sensor data and deep learning algorithms, the intelligent cockpit system can monitor the operating status of vehicle components in real time and detect potential faults. For example, by analyzing engine sound, vibration, and temperature data, the system can detect engine abnormalities early and remind the driver to have them inspected in time.
  2. Failure prediction: By analyzing the vehicle's historical fault data and current operating status, deep learning models can predict likely future failures so maintenance can be arranged in advance, avoiding dangerous situations caused by breakdowns while driving. For example, the system can predict when tires will need replacing based on tread wear, air-pressure changes, mileage, and other data, reminding the driver to prepare early.
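
The tire-replacement example can be illustrated with a linear extrapolation of tread depth over mileage; the 1.6 mm default is the common legal minimum tread depth, and linear wear is a simplifying assumption:

```python
# Sketch: linear extrapolation of tire tread depth to estimate replacement
# mileage. Assumes wear stays roughly proportional to distance driven.

def tire_replacement_km(depth_new_mm: float, depth_now_mm: float,
                        km_driven: float, depth_min_mm: float = 1.6) -> float:
    """Estimate how many more km until tread reaches the legal minimum."""
    worn = depth_new_mm - depth_now_mm
    if worn <= 0:
        raise ValueError("no measurable wear yet")
    wear_per_km = worn / km_driven
    return max(0.0, (depth_now_mm - depth_min_mm) / wear_per_km)
```

A production system would fuse this with pressure history and driving-style data, but even this crude estimate shows how a reminder can be scheduled before the tire becomes unsafe.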

2. Cognitive human-machine interface (CHMI)

CHMI (Cognitive Human-Machine Interface) development aims to monitor the driver's cognitive load in real time through automated systems and dynamically adjust the HMI's format and functions according to the driver's cognitive state. This adaptive automation, originally developed to reduce pilot workload, not only effectively lightens the driver's workload but also provides driving solutions and operating concepts more tailored to the driver's needs. By accurately assessing the driver's cognitive state, CHMI can intelligently adjust the interface's displayed content and interaction methods, ensuring the driver receives the most effective information support at each stage of a journey.

1. Cognitive load monitoring and adjustment

When the driver has been on the highway for a long time, the CHMI system judges that cognitive load is gradually increasing by monitoring eye activity, heart-rate changes, and other physiological indicators. The system then automatically lowers the in-car music volume and reduces the display of non-critical information, letting the driver focus more on the road. If cognitive load rises further, the system may issue a voice prompt advising the driver to rest, or turn on driving-assistance functions to reduce the driver's stress.

For example, the MBUX intelligent human-machine interaction system in the Mercedes-Benz E-Class can judge the driver's cognitive load from multi-dimensional data during long-distance driving. If the load increases, information on the central control screen is streamlined and the volume of voice prompts is reduced, a case of "simplifying the complex" that lets the driver focus on driving and stay safe.

For another example, the DiLink intelligent connectivity system in the BYD Tang can adjust its interaction mode under complex road conditions. In congested urban traffic, drivers must frequently deal with surrounding vehicles and pedestrians, and cognitive load is high. The system then automatically reduces prompts from entertainment functions, for example pausing the lyric display on the music interface and hiding recommended video content, while highlighting key navigation information, helping the driver "simplify the complex," reduce cognitive load, and better cope with complex conditions.
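
The "simplify the complex" behavior amounts to mapping an estimated load level to an HMI configuration, roughly like this sketch; the level names and settings are invented:

```python
# Sketch: map an estimated cognitive-load level to an HMI configuration.
# Levels and setting values are illustrative, not any vendor's behavior.

def hmi_config(load_level: str) -> dict:
    configs = {
        "low":    {"music_volume": 60, "show_entertainment": True,  "voice_prompts": "full"},
        "medium": {"music_volume": 40, "show_entertainment": True,  "voice_prompts": "brief"},
        "high":   {"music_volume": 20, "show_entertainment": False, "voice_prompts": "critical_only"},
    }
    return configs[load_level]
```

The hard part in practice is estimating the load level reliably from physiological signals; once it exists, applying it is a simple lookup like this.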

2. Fatigue and attention monitoring

When the CHMI system detects signs of fatigue in the driver (frequent blinking, a drooping head, etc.), it acts quickly: for example, it may turn on seat vibration, play fast-paced music, or open the windows for ventilation, stimulating the driver's senses and restoring concentration. If fatigue is severe, the system may also suggest that the driver find a safe place to stop and rest, and provide information about nearby rest areas.

For attention monitoring, if the driver becomes distracted while driving, for example gazing at scenery for a long time or talking excessively with passengers, the system detects this by tracking gaze direction and head movements. It then issues a voice warning reminding the driver to pay attention and displays a warning icon on the instrument cluster. The system may also shift the interior ambient lighting to a striking color to draw the driver's attention.

Take the Volvo XC90, whose intelligent system excels at fatigue monitoring. When it detects signs of fatigue, such as frequent yawning or eyes closed for extended periods, it immediately reminds the driver to rest through seat vibration and voice prompts, and displays a prominent fatigue warning icon on the dashboard. The vehicle also automatically opens the windows for ventilation and adjusts the cabin temperature to a more comfortable level to help the driver stay alert.

(Photo: Attention Assist: Mercedes’ system to alert drowsy drivers – EPFL)

The Tesla Model 3 takes a distinctive approach to attention monitoring. It is equipped with advanced cameras and sensors: if the driver looks away from the road for too long while driving, for example to check a phone or talk at length with passengers, the system issues a strong voice warning and the in-car display flashes a red border, recapturing the driver's attention and ensuring safety.

3. Personalized interaction adjustments

Different drivers have different driving habits and preferences. By learning from driver behavior patterns, CHMI systems can provide a personalized interactive experience for each driver.

For example, for drivers who prefer aggressive driving, the system may display more performance parameters on the dashboard, such as engine speed, acceleration, etc., and provide more responsive handling feedback. For drivers who drive cautiously, the system will emphasize safety tips, such as recommendations for maintaining a safe distance, monitoring the surrounding environment when driving at low speeds, etc.

Based on the driver's daily usage habits, the CHMI system can automatically adjust settings such as seat position, mirror angle, and air-conditioning temperature. For example, if a driver frequently sets the air conditioning to a low temperature when driving in the morning, the system will automatically apply that preferred setting the next time the driver gets in. The system can also automatically play the driver's favorite music genres or radio programs according to their preferences.

For example, NIO ET7’s voice assistant Nomi can provide personalized interactive services based on the voice command habits and preferences of different drivers. For drivers who frequently use voice control to adjust the temperature, Nomi will proactively ask if they need to adjust the temperature inside the car to the usual temperature value after getting in the car. At the same time, Nomi can also plan navigation routes in advance according to the driver’s daily itinerary, and actively provide recommendations for nearby points of interest, such as nearby restaurants, coffee shops, etc., to meet the driver’s individual needs.

Through its intelligent cockpit system, the Xpeng G9 can automatically adjust the seat position, mirror angle, and air-conditioning temperature for different drivers, switch playback modes according to their music preferences, and create an exclusive in-car music atmosphere.

3. Augmented reality and virtual reality

In the design of the human-machine interface of modern intelligent cockpits, the integration of augmented reality (AR) and virtual reality (VR) technologies has brought unprecedented ways for drivers to display and interact with information.

Augmented reality technology can integrate virtual information with the real world to provide drivers with a more intuitive driving experience. For example, with AR technology, drivers can see important content such as navigation information and traffic alerts directly on the windshield without distracting from the dashboard or central control screen.

When the vehicle is driving at a complex intersection, AR navigation can clearly indicate the correct direction of travel, and the driver can see it at a glance through dynamic arrows and signage. In addition, AR can also be used to display information about obstacles around the vehicle, and when the vehicle is close to other objects, a warning sign will appear on the windshield to remind the driver to pay attention to safety.

BMW’s intelligent driver assistance system integrates AR navigation functions, which project navigation information onto the windshield through the head-up display system, providing drivers with a more convenient navigation experience.

VR technology is also being used in smart cockpits to enhance the in-car entertainment experience. Passengers can watch movies, play games, etc. through VR devices in the car, adding to the fun of the journey. Combined with the in-car audio system, it brings a more immersive music experience to passengers.

For example, Audi's in-car VR offering, "Experience rides," detects the vehicle's motion in real time and feeds it into the VR content, giving passengers an experience in the virtual world that matches the real ride. In addition, the VR headset developed by GAC Group offers binocular display at up to 5K resolution, letting users enjoy multi-dimensional immersive entertainment combined with the physical feel of the cockpit.

4. Multi-robot operation control unit (MOCU)

The MOCU is a modular, flexible, and intuitive system primarily designed to support the control of a wide range of unmanned or robotic systems. By consolidating control into a simplified, unified graphical user interface, it greatly improves the operator's situational awareness.

1. Feasibility of centralized control

From a technical point of view, the design concept of a multi-robot operation control unit (MOCU) is to enable centralized control of multiple different functional systems. In the intelligent cockpit, different assisted driving systems such as autonomous driving systems, automatic parking systems, lane keeping systems, etc. have their own functional emphasis, but they all involve vehicle control and monitoring. Through a unified control unit, the control logic and data interfaces of these systems can be integrated to achieve centralized control.

For example, in some high-end smart cars, there is already a similar embodiment of integrated control concepts. Through a central display or control interface, the driver can set and adjust multiple driver assistance functions, which is somewhat similar to the centralized control function of MOCU.
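
A MOCU-style façade can be sketched as a single object that registers and supervises subsystems; the subsystem names and methods here are illustrative, not a real vehicle API:

```python
# Sketch of a MOCU-style façade: one control unit exposing a uniform
# interface over several assisted-driving subsystems.

class Mocu:
    def __init__(self):
        self._systems = {}   # subsystem name -> enabled flag

    def register(self, name: str):
        self._systems[name] = False

    def enable(self, name: str):
        if name not in self._systems:
            raise KeyError(f"unknown subsystem: {name}")
        self._systems[name] = True

    def status(self) -> dict:
        """Single consolidated view over all registered subsystems."""
        return dict(self._systems)
```

The value of the pattern is that the driver-facing interface talks only to the façade, so adding or swapping a subsystem does not change how the driver interacts with the cockpit.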

2. Convenient management and coordination

The unified operating interface does make it easier for drivers to manage and coordinate these smart devices. Different driver assistance systems can increase the driver’s operational complexity and cognitive burden if they operate independently. Through a unified interface, drivers can intuitively understand the status of each system and quickly switch and adjust.

For example, when the driver needs to switch between autonomous driving mode and manual driving mode, the operation can be completed more conveniently and quickly through a unified operation interface, without the need to set up different systems separately. At the same time, in some complex driving scenarios, such as highway driving, urban congestion road conditions, etc., drivers can coordinate different assisted driving systems through a unified interface to improve driving safety and efficiency. MOCUs can display information about the environment around the vehicle, the position and speed of other vehicles, traffic signs and signals, etc., to help drivers make more accurate decisions.

3. Support multimodal interaction

MOCUs typically support multimodal interactions, such as voice control, gesture control, touch screen control, and more. This allows drivers to choose the most convenient interaction method according to different situations, improving the convenience and safety of operation. For example, during driving, the driver can control the MOCU through voice commands, enabling the operation of driver assistance systems without the need for distraction to operate the touch screen or keyboard.
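
Multimodal input handling boils down to normalizing events from each modality into one command stream, as in this sketch with invented command identifiers:

```python
# Sketch: normalize voice, gesture, and touch events into one command
# stream so the rest of the HMI is modality-agnostic. Mappings are invented.

VOICE_COMMANDS = {"next song": "media.next", "volume up": "media.volume_up"}
GESTURES = {"swipe_right": "media.next", "swipe_up": "media.volume_up"}

def to_command(modality: str, payload: str) -> str:
    if modality == "voice":
        return VOICE_COMMANDS.get(payload.lower(), "noop")
    if modality == "gesture":
        return GESTURES.get(payload, "noop")
    if modality == "touch":
        return payload  # touch events already carry the command id
    return "noop"
```

Because every modality produces the same command identifiers, downstream systems never need to know whether the driver spoke, gestured, or tapped.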

4. Improve safety and efficiency

Centralized control of multiple driver assistance systems can improve driving safety and efficiency. Through a unified control unit, it is possible to work together and share information between systems. For example, when the autonomous driving system detects an obstacle ahead, it can notify the lane keeping system to adjust in time to avoid the vehicle deviating from the lane. At the same time, centralized control can also avoid conflicts and misoperations between systems and improve driving safety.
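
The obstacle-to-lane-keeping notification described above is essentially publish/subscribe; the in-process event bus below is a minimal stand-in for a real vehicle bus such as CAN or automotive Ethernet:

```python
# Sketch of cross-system information sharing: the obstacle detector
# publishes an event, and subscribed systems (e.g. lane keeping) react.

class EventBus:
    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, handler):
        self._subs.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._subs.get(topic, []):
            handler(payload)

bus = EventBus()
lane_keeping_log = []
# Lane keeping reacts whenever the obstacle detector publishes an event.
bus.subscribe("obstacle_ahead", lambda d: lane_keeping_log.append(f"adjust:{d}"))
bus.publish("obstacle_ahead", "left_lane")
```

Decoupling the systems through topics, rather than having them call each other directly, is what lets the central unit add or remove subsystems without rewiring everything.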

In addition, centralized control can also improve the efficiency of driving. For example, in the process of automatic parking, by working together with other driver assistance systems, parking operations can be completed more quickly and accurately, saving the driver’s time and energy.

5. Voice recognition and gesture input

To further reduce the cognitive burden on drivers, modern HMI systems integrate voice recognition and gesture input capabilities. These advanced technologies allow occupants to operate complex systems with simple voice commands or gestures, bringing great convenience to the driver.

Voice recognition technology can accurately recognize the driver's instructions to control navigation, music playback, phone answering, and other functions. For example, BMW's intelligent cockpit system lets the driver set up navigation with clear voice commands such as "Navigate to the nearest gas station," after which the system quickly plans a route and provides voice guidance. For music playback, commands such as "Play next song" or "Turn up the volume" make control effortless.

Gesture input offers drivers a more natural and convenient way to interact. For example, in Mercedes-Benz's smart cockpit, the driver can switch music tracks with a wave of the hand and adjust the volume with an upward or downward gesture. This interaction is more natural and fluid, reducing the distraction risk of operating traditional buttons.

6. Adaptive automation and neurophysiological parameter measurement

In the smart cockpit, adaptive automation by measuring the driver’s neurophysiological parameters is of great significance.

For example, some Cadillac models use in-car sensors to monitor driver heart rate changes. When the heart rate rises abnormally, the system judges that the driver may be in a state of tension, and automatically adjusts the vehicle’s suspension system to make it smoother to improve ride comfort. At the same time, the ambient lighting in the car will also switch to a more soothing color to help the driver relax.

Audi’s smart cockpit monitors the driver’s brain waves to determine his attention state. If the driver’s distraction is detected, the system will sound an alarm sound and display a reminder message on the dashboard, while automatically lowering the volume of the music in the car to allow the driver to focus more on driving. If the driver is judged to be in a fatigued state, the vehicle will automatically turn on the adaptive cruise function and adjust the seat angle appropriately to remind the driver to rest.
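
A minimal sketch of this kind of adaptive response, using an invented heart-rate threshold (25% above baseline) rather than any clinical or vendor-specified value:

```python
# Sketch: map a simple neurophysiological reading to comfort adjustments.
# The 1.25x-baseline threshold is illustrative, not a clinical figure.

def cabin_response(heart_rate_bpm: int, baseline_bpm: int) -> dict:
    elevated = heart_rate_bpm > baseline_bpm * 1.25
    return {
        "suspension": "comfort" if elevated else "normal",
        "ambient_light": "soothing" if elevated else "default",
    }
```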

This adaptive automation technology based on the measurement of neurophysiological parameters can effectively assist the vehicle in decision-making, reduce the burden on the driver, and ensure the safety and comfort of driving.

7. Integrated information display

In the intelligent cockpit, integrated information display plays a key role.

For example, Tesla's smart cockpit has an oversized central control display that presents important driving information such as speed, range, energy consumption, navigation, and vehicle status in a simple and intuitive way. Drivers can see the vehicle's operation at a glance without being distracted by multiple gauges.

(Pictured: Tesla's central control display showing various information)

Volvo’s intelligent cockpit also adopts an integrated information display design. Its digital instrument cluster can clearly display vehicle speed, navigation guidance, currently playing music information, etc., while also popping up vehicle safety warning information when needed, allowing the driver to react quickly.

This integrated information display method greatly reduces the driver’s decision-making time in the task and improves driving efficiency. At the same time, the driver can understand the status of the vehicle more accurately, providing a strong guarantee for safe driving.

Summary

The application of emerging technologies in intelligent cockpit human-machine interfaces (HMIs) has greatly improved the efficiency and safety of drivers’ task execution. At the same time, through various innovative methods, the cognitive burden of drivers is effectively reduced, allowing them to focus more on driving. With the continuous development and innovation of technology, the high-tech applications in intelligent cockpit HMI will continue to expand and deepen, bringing more surprises and convenience to the future driving experience.
