If a Pikachu jumps into the middle of the road, can autopilot recognize it?

The history of autonomous driving includes one strange accident. In 2016, on a Florida highway, driver Joshua Brown switched on Tesla’s Autopilot mode and took his hands off the steering wheel.

At an intersection, a white truck turned left and crossed paths with Brown’s Tesla.

A white truck under the sky | Pixabay

According to eyewitnesses, Brown’s Tesla was still traveling at full speed when the white truck appeared in front of it, and it slammed into the underside of the trailer. The investigation found that the Tesla was going 119 kilometers per hour at the time.

Brown died on the spot.

But the Tesla kept driving, gradually veering off the road, until it hit two fences and a power pole and finally stopped.

Schematic diagram of the accident | Wikimedia Commons by Jzh2074

The official explanation for the accident: “Neither Autopilot nor the driver noticed the white side of the trailer against the brightly lit sky, so the brake was not applied.” In other words, the truck had been recognized as sky.

And in the five years since, at least three more Teslas have charged at full speed into white trucks; it seems the autopilot still cannot tell a white giant from the glowing sky.

In the eyes of autonomous driving, the world never looks quite the way it does to us.

How does autonomous driving see the world?

The eyes of an autonomous car are its sensors: mainly millimeter-wave radar, lidar, and cameras.

Millimeter-wave radar uses the reflection of electromagnetic waves to measure physical information such as an object’s distance and speed. It resists interference well, has a long detection range, works in bad weather such as rain, snow, and sandstorms, and can be mounted out of sight without affecting the vehicle’s appearance. But it cannot measure height; in its eyes, the world is flat (a toy calculation of what it does measure follows the image below).

Millimeter wave radar on the shopping platform | screenshot of Taobao
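
To make the principle concrete, here is a minimal sketch in Python, with invented numbers, of how a radar turns an echo’s round-trip delay into distance and its Doppler shift into radial speed. The 77 GHz carrier is simply the common automotive radar band, not the spec of any particular product.

```python
# Minimal sketch of how a radar echo becomes distance and speed.
# All numbers are illustrative, not taken from any real radar.

C = 3.0e8  # speed of light, m/s

def range_and_speed(echo_delay_s: float, doppler_shift_hz: float,
                    carrier_hz: float = 77e9) -> tuple[float, float]:
    """Convert a round-trip echo delay and Doppler shift into
    target distance (m) and radial speed (m/s)."""
    distance = C * echo_delay_s / 2                   # wave travels out and back
    radial_speed = doppler_shift_hz * C / (2 * carrier_hz)
    return distance, radial_speed

# A truck 60 m ahead reflects the wave after ~0.4 microseconds.
dist, speed = range_and_speed(echo_delay_s=4.0e-7, doppler_shift_hz=5133)
print(f"distance = {dist:.1f} m, closing speed = {speed:.1f} m/s")
```

Note what is missing: nothing in the echo says how high the reflector sits, which is why the radar’s world is flat.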

Lidar sees the world in 3D. Like millimeter-wave radar, lidar calculates an object’s distance, speed, and other properties by sending out pulses and receiving their reflections, but its light waves are far shorter than radar’s radio waves, so its measurements are much finer. A lidar emits thousands of pulses per second; by timing the reflections arriving from different angles, it reconstructs a three-dimensional image of the target (a toy version of that reconstruction follows the image below).

Lidar scan schematic | Wikimedia Commons
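
A lidar sweep can be pictured one pulse at a time: each return gives a range at a known pair of beam angles, and converting those spherical coordinates to Cartesian ones accumulates the 3D point cloud. A minimal sketch, with assumed geometry rather than any vendor’s data format:

```python
import math

C = 3.0e8  # speed of light, m/s

def pulse_to_point(echo_delay_s, azimuth_deg, elevation_deg):
    """Turn one lidar pulse (round-trip delay plus beam angles)
    into an (x, y, z) point in the sensor's frame."""
    r = C * echo_delay_s / 2
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)  # forward
    y = r * math.cos(el) * math.sin(az)  # left
    z = r * math.sin(el)                 # up
    return x, y, z

# Thousands of such pulses per second, swept across angles,
# build up a 3D image of the scene.
cloud = [pulse_to_point(2.0e-7, az, el)
         for az in range(-60, 61, 10) for el in (-5, 0, 5)]
print(len(cloud), "points; first:", cloud[0])
```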

But if you want to know *what* you are looking at, that is the camera’s job. With the help of recognition algorithms, a camera can tell whether the thing ahead is a person, an animal, or a building; the catch is that it sees in 2D, like looking at a photograph.

The current mainstream recognition approach is multi-sensor fusion. Some representative Chinese models, for example, carry 5 high-precision millimeter-wave radars, 13 exterior cameras, and 1 interior camera (a sketch of the fusion idea follows).
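
Fusion can take many forms. One simple idea, sketched below with invented confidence scores rather than any production stack’s logic, is “late fusion”: let each sensor contribute what it measures best, with the camera supplying the label and the radar supplying range and closing speed.

```python
from dataclasses import dataclass

@dataclass
class FusedTrack:
    label: str         # what the camera thinks the object is
    distance_m: float  # from radar: more reliable than camera depth
    speed_mps: float   # from radar Doppler (negative = approaching)
    confidence: float  # combined belief in the track

def fuse(camera_det: dict, radar_det: dict) -> FusedTrack:
    """Naive late fusion: camera provides semantics, radar provides
    geometry; confidence drops if either sensor is unsure."""
    return FusedTrack(
        label=camera_det["label"],
        distance_m=radar_det["distance_m"],
        speed_mps=radar_det["speed_mps"],
        confidence=camera_det["score"] * radar_det["score"],
    )

track = fuse({"label": "truck", "score": 0.9},
             {"distance_m": 58.0, "speed_mps": -10.0, "score": 0.95})
print(track)
```

The multiplied confidence also hints at the problem Tesla cites below: when the two sensors disagree, it is not obvious whose answer the fused track should keep.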

Tesla, on the other hand, insists on cameras alone; in its latest update it even dropped the basic ultrasonic sensors, making the car genuinely radar-free and pure-vision. Its stated reason is that when radar and camera perception disagree, the car does not know which to trust. The more fundamental reasons are likely that a pure-camera setup costs less, and that rapid progress in algorithms really can make up for the camera’s disadvantages relative to radar.

Tesla interior display | Unsplash

But whichever route wins out, sensors can only see; making sense of what they see falls to the perception algorithms. Combining information from the different sensors, today’s algorithms can work out where the car is and what the road looks like, interpret traffic lights, track the obstacles that keep appearing on the road, and read the text on road signs.

Honestly, autonomous driving has, to a certain extent, already outperformed novice drivers.

When a Pikachu pops out in the middle of the road

But when a Pikachu jumps into the middle of the road, the autopilot must be dumbfounded.

When a Pikachu jumps into the middle of the road | Giphy

In the autonomous-driving industry, this kind of rare, sudden, unexpected situation is called a corner case (CC). The term comes from system testing, where it originally described situations in which several parameters take extreme values at once. The accident at the beginning of this article is a typical CC.

Corner cases are usually divided into five levels, corresponding to increasingly severe surprises for the car’s perception: a few pixels are anomalous; the scene shifts from a familiar setting to an unfamiliar one; an unknown object appears (though its location and surroundings remain clear); the location becomes unclear as well; and, at the extreme, the object, the scene, and the environment all go wrong at once.

But even a small CC can have serious consequences.

A cute yellow electric mouse appearing on the road is a corner case at the “unknown object” level. Fail to recognize it, and you either take a 100,000-volt shock or get hammered by its Pokémon trainer. What can happen in the real world is worse: in 2017, Volvo’s technical manager in Australia revealed that the company’s self-driving cars could not recognize kangaroos.

Kangaroo: “You don’t know me?” | Giphy

“Cars detect animals by using the ground as a reference point to judge an object’s distance,” the manager explained. But when a kangaroo hops across the road and hangs in the air, the autopilot sees an object in the sky and judges it to be farther away than it really is; when the kangaroo lands, the autopilot judges it to be closer than it really is (the geometry behind this is sketched after the image below).

Self-driving cars have trouble recognizing kangaroo jumping | Giphy
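
The manager’s explanation maps onto a well-known bit of monocular geometry: if you assume an object touches the ground, a camera at height h can estimate its distance as d = f·h/y, where y is how far below the horizon the object’s footpoint appears in the image. A minimal sketch, with invented camera parameters, shows how a hop breaks that assumption:

```python
def ground_plane_distance(foot_px_below_horizon: float,
                          cam_height_m: float = 1.5,
                          focal_px: float = 1000.0) -> float:
    """Monocular ranging under the flat-ground assumption:
    distance = focal_length * camera_height / pixel_offset."""
    return focal_px * cam_height_m / foot_px_below_horizon

# Kangaroo standing 15 m away: its feet project 100 px below the horizon.
print(ground_plane_distance(100.0))  # 15.0 m -- correct

# Mid-hop, its feet are ~0.5 m off the ground, so the visible
# "footpoint" shifts up to ~67 px below the horizon...
print(ground_plane_distance(67.0))   # ~22.4 m -- looks farther than it is
```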

Picture an Australian coastal road: beside a few cheerfully bounding, muscular kangaroos, an autonomous car is overwhelmed, confronting an unknown object that is suddenly big, then small, now far, now near. It can neither stop nor go.

Just thinking about it is terrifying.

A laser pointer can fool autonomous driving

Today’s mainstream sensors all have shortcomings, and each can lead to inaccurate judgments of road conditions.

Given these sensor limitations, the algorithms become very easy to “cheat”.

Researchers have tried it: shine an ordinary laser beam on a tram and the recognition algorithm sees a frog; shine it on a turtle and the turtle becomes a jellyfish; change the laser’s color and a snake can be identified as a sock, a microphone, a jackfruit, corn, or a hot dog.

When the laser changes color, the snake changes shape | Reference [1]

Autonomous-driving safety experts suggest this happens because the algorithm treats the color of the laser beam as part of the object itself. Blend a hedgehog with a purple beam, for instance, and the algorithm ends up recognizing a “bud”.

Under the laser, hedgehogs turn into buds | Reference [1]

The attack succeeds alarmingly often: 100% of the time in indoor tests and 77.43% outdoors [1]. And a suitable laser pointer can be bought for twenty or thirty dollars.

Another important source of corner cases is that no image-recognition dataset can exhaust the real world. Image recognition is trained mainly by humans stuffing in data for the machine to memorize and classify. The algorithm is like a child with an excellent memory: it can recall any information and scene it has been fed, but it is not a mature, flexible master. A leaf stuck to a traffic sign, or an insect sitting on the camera lens, can make the algorithm output wildly different results.

Why is CC so important?

The autonomous-driving field sums up the importance of the corner-case problem with the “80/20 rule”: 80% of the development time goes into solving these seemingly rare 20% of situations. Solving them, practitioners say, is a prerequisite for truly autonomous driving.

Accurately identifying and correctly handling corner cases is really a matter of balancing safety against efficiency (or driving experience). Everyone hopes autonomous driving will be absolutely safe, but no one would accept a ride that crawls at five kilometers per hour and brakes at random (besides, low speed and sudden braking create safety problems of their own).

No one wants to accept the self-driving experience of frequent braking | Giphy

The dangers of autonomous driving, and the barriers to its adoption, come not only from “something is there but goes unrecognized” but also from “nothing is there but something is detected”. Tesla’s “ghost braking” incidents are a typical example.

A Tesla owner was driving in San Francisco when a plastic bag floated across the road. “Suddenly, the car seemed to seize up,” the owner recalled; the Tesla abruptly decelerated from about 40 kilometers per hour to 24. “But it let go right away, because the plastic bag had moved off.”

Common plastic bags on the road also affect autonomous driving | Wikimedia Commons by Ivan Radic

This owner was lucky; at least his car did not slam on the brakes in the middle of a San Francisco street. From February to June 2022, the National Highway Traffic Safety Administration received 404 complaints about Teslas braking for no apparent reason. Owners call it “ghost braking”: a Tesla with Autopilot assistance engaged suddenly brakes or slows down when nothing dangerous is present, raising the risk of a rear-end collision.

According to Phil Koopman, a Carnegie Mellon University professor who studies self-driving safety, the likely cause is that Tesla’s developers have not set the right decision thresholds, especially since a Tesla’s perception relies almost entirely on cameras, without radar or other sensors to cross-check (the trade-off is sketched below).
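
“Setting the right decision threshold” is easy to say and hard to do, because one dial trades ghost braking against missed obstacles. A toy illustration of the trade-off, with invented confidence scores rather than Tesla’s actual logic:

```python
def should_brake(obstacle_confidence: float, threshold: float) -> bool:
    """Brake only if the perception stack's belief in a real
    obstacle exceeds the threshold."""
    return obstacle_confidence > threshold

# A drifting plastic bag might score 0.4; a real truck, 0.95.
for threshold in (0.3, 0.6, 0.9):
    bag = should_brake(0.4, threshold)
    truck = should_brake(0.95, threshold)
    print(f"threshold={threshold}: brake for bag={bag}, for truck={truck}")

# Too low a threshold -> ghost braking for plastic bags;
# too high -> the white truck at the start of this article.
```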

Road traffic is one of the most complex systems we have ever created | Giphy

The good news is that Tesla has since solved the ghost-braking problem through algorithm optimization. But on the way to “true self-driving”, stranger cases are still waiting to be solved in road traffic, one of the most complex systems we have ever created.

References

[1] Duan R, Mao X, Qin AK, et al. Adversarial laser beam: Effective physical-world attack to dnns in a blink[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 16062-16071.

[2] http://jst.tsinghuajournals.com/CN/rhhtml/20180417.htm#outline_anchor_8

[3] https://new.qq.com/rain/a/20220818A00WJL00

[4] https://www.youtube.com/watch?v=PRg5RNU_JLk

[5] https://equalocean.com/analysis/2022091918917

[6] https://thedriven.io/2021/07/13/it-can-see-dogs-the-big-reveal-from-teslas-radar-free-version-of-full-self-driving/

[7] https://www.jianshu.com/p/5ab134804d4c

[8] https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk

[9] https://blog.csdn.net/maopig/article/details/107961922

Author: Rui Yue

Editor: Lying Worm, Shen Zhihan

This article is from Nutshell and may not be reproduced without authorization.