A thought experiment: what if we fitted out every single car on the planet, right now, with autonomous driving capabilities, LIDAR, cameras and so on? All one billion of them. And then, tomorrow, we just let the machines drive.

I think it’s fair to say that tomorrow would be a challenging day on the roads, and we’d witness carnage, jams like never before and probably a fair amount of death (assuming anyone was foolish enough to get in a car).


But what would the roads look like the day after, or the month after, or a year later? Suppose we insisted on every car being autonomous and simply waited for them to sort out a new, better way of travelling efficiently without collisions, letting the machine learning models work at massive scale with huge amounts of data to learn from.

How long do you think it would take until we have smooth operation again on our roads?

Thomas: The current thinking in machine learning is that the more data you have, the better, and this is certainly true. It's being demonstrated with things like OpenAI's GPT-2, a new text-generating model that can produce content I thought we would not see a neural network producing for decades to come.
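For a sense of how accessible this kind of model has since become, here's a minimal sketch of sampling text from GPT-2, assuming the Hugging Face transformers library; the prompt is arbitrary:

```python
# A minimal sketch of sampling from GPT-2, assuming the Hugging Face
# `transformers` library is installed. The prompt is an arbitrary example.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("Self-driving cars will", max_new_tokens=40, num_return_sequences=1)
print(out[0]["generated_text"])  # the prompt plus the model's continuation
```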


There don't seem to be many developments needed beyond raw data acquisition and processing. In that sense, if you had lots and lots of self-driving cars driving around trying not to hit each other, they would all learn incredibly rapidly.


But with autonomous vehicles (AVs), there are hardware limitations, challenges around the quality of the data you would be getting through, and complex aspects of the car's decision making. These are the reasons people are currently reluctant to see AVs deployed, and it doesn't help to change minds when you see news stories like the recent one in which a jaywalker in Tempe, Arizona was hit and killed by an autonomous vehicle. This fatal accident happened because the automated Uber did not have "the capability to classify an object as a pedestrian unless that object was near a crosswalk". In other words, because the person was crossing away from a crosswalk, the car never registered them as a pedestrian at all.



How do you tell an AV not to hit pedestrians? You feed it thousands upon thousands of images of pedestrians crossing and tell the model, "this is when you should slow to a stop and not hit the pedestrian". But in this case, they hadn't fed it any images of jaywalkers, because they simply hadn't thought of it as something that could happen. It goes to show that there can be things people overlook, and fundamental architectural flaws in how these machines work.
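To make the failure mode concrete, here's a minimal Python sketch with entirely hypothetical names; this is an illustration of the blind spot described in the report, not Uber's actual code:

```python
# A hypothetical sketch of the flaw described above: pedestrian
# classification gated on crosswalk proximity, so a jaywalker never
# triggers braking at all.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str              # raw detector output, e.g. "person", "bicycle"
    distance_m: float      # distance from the vehicle in metres
    near_crosswalk: bool   # whether the object is near a mapped crosswalk

def should_brake(obj: DetectedObject) -> bool:
    # The flaw: an object only counts as a pedestrian when it is near a
    # crosswalk, so anyone crossing elsewhere falls through this check.
    is_pedestrian = obj.kind == "person" and obj.near_crosswalk
    return is_pedestrian and obj.distance_m < 30.0

jaywalker = DetectedObject(kind="person", distance_m=15.0, near_crosswalk=False)
print(should_brake(jaywalker))  # False -- the blind spot in action
```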


In answer to your question: if you did have this massive, sudden rollout, I'm sure, as you say, that in the first week afterward we would see an awful lot of those incidents and need a lot of time-consuming human intervention to start making improvements. But yes, it would speed things up.


One thing that does concern me is that these machine vision algorithms can be fooled. Through spoofing, you can create images that will trick the algorithm into thinking it's looking at something other than what it is. And this isn't a case of someone dressing up as a piece of tarmac; you can do it just by changing a few pixels here and there. The way these algorithms perceive images is very different from how humans perceive them.
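The interview doesn't name a specific attack, but a standard way to produce this kind of few-pixel spoof is the fast gradient sign method (FGSM). Here's a minimal sketch assuming PyTorch and a pretrained classifier; the model choice and the random stand-in image are my assumptions, and on a real photo the prediction flip is far more likely:

```python
# A minimal FGSM sketch, assuming PyTorch/torchvision. The model and the
# random input are placeholders, not anything from the interview.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm(image: torch.Tensor, label: torch.Tensor, eps: float = 0.01) -> torch.Tensor:
    """Return a perturbed copy of `image` nudged away from `label`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Shift each pixel a tiny step in the direction that increases the loss:
    # imperceptible to a human, but often enough to flip the prediction.
    return (image + eps * image.grad.sign()).detach().clamp(0, 1)

image = torch.rand(1, 3, 224, 224)      # stand-in for a real photo
label = model(image).argmax(dim=1)      # the model's original answer
adversarial = fgsm(image, label)
print(model(adversarial).argmax(dim=1) == label)  # frequently False on real images
```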


There was a study showing that if you were asked to identify polar bears versus, say, dogs or cows or any other species, you would probably do just as good a job as a machine learning model. But if the bear was in silhouette, you'd still be able to identify it, and that's not true for machine learning algorithms, because the way they identify images is based on how each pixel relates to the adjacent pixels. We identify things by their outline, which is why you can still more or less navigate your room in the dark. These algorithms instead judge things by how each pixel relates to its neighbours, which means that if you put something in shadow, suddenly the model can't recognise it any more, because it has been trained on full-colour photos. In a similar way, how do you know that your training data, even if you think it's very good, is going to generalise to every individual situation this autonomous vehicle will face?


Actually, in some ways the algorithms can be better. If you scramble the polar bear image by dividing it into 20 squares and rearranging them randomly, the algorithm will still recognise that it's a polar bear. This is because most of the pixels that were adjacent before are still adjacent: the set of differences between them, the texture the ML algorithm recognises, hasn't changed much.
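Here's a minimal sketch of that scrambling test, assuming NumPy and a square image; the grid size is arbitrary (a 4x4 grid of 16 patches here, standing in for the study's 20 squares):

```python
# A minimal sketch of the patch-scrambling test described above. The texture
# inside each patch survives, so a texture-biased model can still score well,
# while the global outline a human relies on is destroyed.

import random
import numpy as np

def scramble_patches(image: np.ndarray, grid: int = 4, seed: int = 0) -> np.ndarray:
    """Cut `image` into grid x grid squares and rearrange them at random."""
    h, w = image.shape[0] // grid, image.shape[1] // grid
    patches = [image[i*h:(i+1)*h, j*w:(j+1)*w]
               for i in range(grid) for j in range(grid)]
    random.Random(seed).shuffle(patches)
    rows = [np.concatenate(patches[r*grid:(r+1)*grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)

bear = np.random.rand(224, 224, 3)   # stand-in for a polar bear photo
scrambled = scramble_patches(bear)
print(scrambled.shape)               # same size, shuffled layout: (224, 224, 3)
```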

In summary, it's important to have a sense of the "psychology" of these algorithms, so that you can understand in advance how AVs will fail and stop it before it happens on the roads. If you DID instantaneously replace all cars with autonomous vehicles, you would also need to tell me how networked they are. Are the AVs communicating with each other in real time, or are they single entities working independently to solve their individual problems? I can't say for sure, but if the vehicles were communicating with each other, it might actually end up being safer than a scenario with some human and some autonomous drivers mixed together.
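As a toy illustration of the difference (entirely hypothetical, not any real vehicle-to-vehicle protocol), here's a sketch in which networked vehicles broadcast the road cell they intend to occupy next, so conflicts are settled before anyone moves; independent vehicles would each discover the same conflict only through their own sensors, much later:

```python
# A hypothetical sketch of the networked scenario: every vehicle broadcasts
# the road cell it intends to occupy next, and conflicts are resolved
# before anyone moves (here: first claimant wins, others yield).

from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    planned_cell: tuple[int, int]  # the road cell this vehicle wants next

def resolve_conflicts(fleet: list[Vehicle]) -> dict[str, bool]:
    """First claimant wins each cell; everyone else yields instead of colliding."""
    claimed: dict[tuple[int, int], str] = {}
    may_proceed: dict[str, bool] = {}
    for v in fleet:
        if v.planned_cell in claimed:
            may_proceed[v.name] = False   # yield: another vehicle claimed it first
        else:
            claimed[v.planned_cell] = v.name
            may_proceed[v.name] = True
    return may_proceed

fleet = [Vehicle("A", (3, 4)), Vehicle("B", (3, 4)), Vehicle("C", (5, 1))]
print(resolve_conflicts(fleet))  # {'A': True, 'B': False, 'C': True}
```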



Find the full interview on the Boundless Podcast here. Thomas and I talked about how we can understand and mitigate the risks around powerful AI, and he compared these to the risks of rogue gene editing and nuclear war. Thomas explained why we model neural networks on the human brain, and our tendency to anthropomorphise machines. He gave us a startling update on fusion power and what he thinks of mining the moon for natural resources.

N.B. For further reading, I recommend Thomas' article on a big "trolley problem" survey that was done for autonomous vehicles. It's an interesting read on how different parts of the world have different ethical priorities: some value elderly people more, some value businessmen more, some value children more, and so on.


Thomas Hornigold is currently a PhD student in physics at the University of Oxford. He hosts a podcast, Physical Attraction (at www.physicspodcast.com), which deals with issues in science and technology, including artificial intelligence and the future of humanity, and he writes on similar topics for the website Singularity Hub.
