Radar outputs of detected objects are sometimes ignored by the vehicle’s software to deal with “false positives,” said Raj Rajkumar, an electrical and computer engineering professor at Carnegie Mellon University. Without such filtering, the radar would “see” an overpass and report it as an obstacle, causing the vehicle to slam on the brakes.
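To make that trade-off concrete, here is a minimal sketch in Python of the kind of clutter filter described above. The `RadarDetection` fields, speeds, and thresholds are illustrative assumptions rather than details of any actual automotive stack; the point is only that discarding stationary returns suppresses overpass false positives while also discarding a stopped or crossing vehicle.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical radar detection record; field names and units are illustrative only.
@dataclass
class RadarDetection:
    range_m: float            # distance to the return, meters
    ground_speed_mps: float   # object speed after ego-motion compensation, m/s
    elevation_deg: float      # vertical angle of the return, degrees

def filter_stationary_returns(detections: List[RadarDetection]) -> List[RadarDetection]:
    """Naive clutter filter: drop returns that are stationary in the world frame.

    This suppresses phantom braking under overpasses and gantries, but it also
    drops a stopped or slowly crossing vehicle, which is exactly the failure
    mode Rajkumar describes.
    """
    return [d for d in detections if abs(d.ground_speed_mps) > 1.0]

if __name__ == "__main__":
    overpass = RadarDetection(range_m=150.0, ground_speed_mps=0.0, elevation_deg=5.0)
    crossing_truck = RadarDetection(range_m=90.0, ground_speed_mps=0.5, elevation_deg=0.0)
    moving_car = RadarDetection(range_m=60.0, ground_speed_mps=25.0, elevation_deg=0.0)

    kept = filter_stationary_returns([overpass, crossing_truck, moving_car])
    # Only the moving car survives; the nearly stationary crossing truck is
    # discarded along with the overpass.
    print([d.range_m for d in kept])  # -> [60.0]
```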
On the computer vision side, the algorithms processing the camera output need to be trained to detect trucks that are perpendicular to the vehicle’s direction of travel, he added. In most road situations there are vehicles ahead, behind, and to the sides, but a perpendicular vehicle is much less common.
“Essentially, the same incident repeats after three years,” Rajkumar said. “This seems to indicate that these two problems have still not been addressed.” Machine learning and artificial intelligence have inherent limitations: if the system “sees” something it has never or seldom seen before, it does not know how to handle the situation. “Tesla is not handling the well-known limitations of AI,” he added.
These limitations, or unstable behaviors, of AI and machine learning (ML)-based software are the main motivation for my research and work on (formal) software verification. For traditional software that does not involve AI or ML, formal verification techniques such as static analysis, model checking, and theorem proving play an important role in the design and development of safety-critical systems like fly-by-wire autopilot software and subway control systems. My objective is to make formal verification techniques (programming language techniques such as type checking, static analysis, model checking, ...) work well for AI- and ML-based safety-critical software systems such as those found in self-driving cars.
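As a deliberately tiny illustration of what one such technique looks like when applied to an ML component, the sketch below performs interval bound propagation, a simple form of static analysis, over a two-layer network. The network weights, input ranges, and braking threshold are all made-up assumptions for illustration; a real verifier would analyze the deployed model and a specification derived from the system's safety requirements.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] exactly through x -> W @ x + b."""
    W_pos = np.clip(W, 0, None)
    W_neg = np.clip(W, None, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """Propagate an interval through ReLU (monotone, so bounds map directly)."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer "obstacle score" network with illustrative weights.
W1 = np.array([[1.5, -0.5], [0.3, 0.8]])
b1 = np.array([0.0, -0.1])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([-0.2])

# Input box: the range of normalized (distance, closing-speed) readings the
# sensors could plausibly produce in a given scenario (assumed values).
lo, hi = np.array([0.4, 0.2]), np.array([0.6, 0.5])

lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)

# If the lower bound of the obstacle score exceeds the braking threshold for
# every input in the box, the property "the system brakes in this scenario"
# is proved over the whole region, not just for sampled test inputs.
THRESHOLD = 0.3
print(f"score in [{lo[0]:.3f}, {hi[0]:.3f}];",
      "property proved" if lo[0] > THRESHOLD else "property not proved")
```

The key design point is that the analysis reasons about sets of inputs at once, which is what distinguishes verification from testing; the same idea underlies more sophisticated neural-network verifiers that use abstract interpretation or SMT solving.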