Smart Cars – The rise of Artificial Intelligence and the Autonomous Vehicle Revolution

In recent times, we have heard a lot about self-driving cars and how they will transport us smoothly and safely from place to place, gliding through our cities and making our lives that much simpler and more enjoyable. Their arrival, we are told, is imminent. And indeed, if things go to plan – the manufacturers’ plan that is – then that arrival may be just a year or two away.

Every big player in the industry is researching and developing the various technologies needed for a true autonomous vehicle (AV) and there are plenty of others who have joined in the race to be among the first to bring them to the public. Amongst these are technology titans such as Google, ridesharing behemoths such as Uber and Lyft, and innumerable start-ups, all of whom are keen to grab a slice of a market that will, it is claimed, be worth trillions of dollars.

Around the world, research, development and trialling of AVs is gathering pace. In Australia too, there is some movement – in June, the NSW government announced it was to invest $10 million in self-driving car trials and the governments of all other states and territories have some sort of investment and research underway or are preparing to do so. In Queensland, the Department of Transport and Main Roads is investing in the Ipswich-based Cooperative and Automated Vehicle Initiative (CAVI) Cooperative Intelligent Transport System (C-ITS) pilot project that will see some 500 private and fleet vehicles retrofitted with C-ITS devices that enable vehicles to ‘talk’ to other vehicles, infrastructure, road operations systems and cloud-based data sharing systems.

We know a fair bit about a lot of the technologies that are key to cracking the self-driving car conundrum. Lidar (Light Detection and Ranging), Radar, GPS, and cameras are all critical for cars to be able to navigate our roads without the need for human intervention.

What may be a little more difficult to get your head around is the system that reads and interprets the data that these other technologies gather. Radar, Lidar and camera images, as well as GPS data on speed and location, are all well and good, but somewhere deep in the heart of a car’s onboard computers, something must ‘understand’ what all that information means and adjust the movements of the vehicle in response.

That system, the ‘brain’ of the car if you will, is Artificial Intelligence (AI).

For the layman, the term Artificial Intelligence will often conjure up images of the ‘technology’ we have seen in movies – computers and machines that think like us, reason like us, have a consciousness.

Sophisticated AI like that, however, has not yet been developed and what we see instead is a sort of AI-lite – software and code that seem fiendishly clever but are really just deciphering and making sense of data sent their way and performing functions based on that information.

In fact, the term Artificial Intelligence may be better thought of as an umbrella term for various systems that analyse data – systems such as Machine Learning, which uses mathematical algorithms, and Artificial Neural Networks, which replicate the workings of the human brain and become capable of ‘Deep Learning’ when fed the stupendous amounts of data that can be input into them.

Grasping these terms, and AI in general, is a bit tricky, so it’s worth turning to someone who really understands what all this means, has an insider’s view of this area of automotive research, and knows where it could be headed.

That person is Michael Milford, a Professor at the Queensland University of Technology and a world-renowned researcher in robotics, autonomous vehicles and artificial intelligence. His team works with Fortune 100 companies and governments developing AI and autonomous vehicle systems, as well as researching how these systems will impact society, infrastructure and investment decisions. His accomplishments are many, and among them are his positions as Chief Investigator at the Australian Centre for Robotic Vision and board member of the MTA Institute. He has collaborated with organisations including the Australian Research Council, the US Air Force, Harvard University, Oxford University, MIT . . . you get the picture: he knows what he’s talking about.

“Machine learning is a decades-old field of research of creating mathematical algorithms that are deployed in software and that do some sort of useful task,” says Professor Milford. “Deep Learning is a more specific term that concerns artificial neural networks.

“Human brains have a big network of about 100 billion neurons, and artificial neural networks basically replicate parts of that natural neural network in software. Deep learning is about really big artificial neural networks, with millions of ‘cells’ in the software, being stuffed with data to train it to be smart. An example of that would be if you are training a network to do facial recognition surveillance at an airport. You input millions of images of faces and it gradually learns how to recognise individuals reliably.
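Professor Milford’s description can be shrunk to a toy scale. The sketch below, in plain Python, trains a single artificial ‘neuron’ – the basic cell of a neural network – by repeatedly feeding it examples until it reliably reproduces the pattern, the same ‘stuff it with data to train it’ idea in miniature. The data, learning rate and iteration count are invented for illustration; a real deep network has millions of such cells.

```python
import math

# A single artificial 'neuron': weighted inputs squashed through a sigmoid.
def neuron(weights, bias, inputs):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy training examples: learn the OR pattern from input/answer pairs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(2000):                      # repeated exposure to the data
    for inputs, target in data:
        out = neuron(weights, bias, inputs)
        err = out - target                 # how wrong was the guess?
        for i in range(2):                 # nudge each weight to shrink the error
            weights[i] -= lr * err * inputs[i]
        bias -= lr * err

# After training, the neuron 'recognises' the pattern in the examples.
predictions = [round(neuron(weights, bias, x)) for x, _ in data]
```

A facial-recognition network works on the same principle, only with millions of neurons, millions of images and far richer patterns to learn.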

“A lot of the control systems and behaviour systems under development for autonomous cars use aspects of deep learning,” he adds. “But there is a huge spectrum. Some car companies are using traditional, what we call ‘rules-based, if-this-do-that’, approaches while others are just driving a car around and feeding the raw camera data into a deep neural network and getting it to try and learn how to control the car. Then there are combinations in the middle.”
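The ‘rules-based, if-this-do-that’ end of that spectrum is easy to caricature in code. The function below is a hypothetical illustration only – the sensor readings, thresholds and actions are all made up – but it shows how hand-written rules map inputs to driving decisions, in contrast to a deep network that must learn such a mapping from raw camera data.

```python
# A caricature of a 'rules-based' controller: every behaviour is an
# explicit, hand-written rule. All names and thresholds are invented.
def rules_based_controller(distance_ahead_m, speed_kmh, light_is_red):
    if light_is_red:
        return "brake"                     # rule 1: stop at red lights
    if distance_ahead_m < 10:
        return "brake"                     # rule 2: emergency braking
    if distance_ahead_m < 30 and speed_kmh > 60:
        return "slow"                      # rule 3: closing fast on traffic
    return "maintain"                      # default: carry on

actions = [
    rules_based_controller(50, 60, False),  # clear road ahead
    rules_based_controller(25, 80, False),  # closing fast on traffic
    rules_based_controller(40, 40, True),   # red light
]
```

The weakness of this approach, and the reason many teams blend it with learning, is that someone has to anticipate and write a rule for every situation the car might meet.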

While AI of this type – that analyses data and appears smart – is being used to develop AVs, in a less sophisticated form it is already everywhere in our lives.

You may not be aware of it, but within a few feet of where you are sitting right now, AI is churning away, analysing information and recognising patterns in both your behaviour and the behaviour of others to perform a function designed to make your life easier. Digital ‘assistants’ on your phone or computer; online purchasing security and fraud detection; music and movie apps that recommend new songs or films based on your listening and viewing habits (think Netflix); online advertising recommendations – ‘primitive’ AI is being put to work everywhere.
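As a toy illustration of how such recommendation features can work, the sketch below suggests a title based on overlap with other users’ viewing histories. The users, titles and scoring scheme are invented for this example; real services use far more sophisticated models, but the core idea – recognising patterns in behaviour – is the same.

```python
# Toy recommender: suggest the unseen item favoured by users whose
# tastes overlap yours most. All histories here are invented.
history = {
    "you":   {"Inception", "The Matrix"},
    "user2": {"Inception", "The Matrix", "Interstellar"},
    "user3": {"Frozen", "Moana"},
}

def recommend(user, history):
    watched = history[user]
    scores = {}
    for other, items in history.items():
        if other == user:
            continue
        overlap = len(watched & items)     # how similar are your tastes?
        for item in items - watched:       # their favourites you haven't seen
            scores[item] = scores.get(item, 0) + overlap
    return max(scores, key=scores.get) if scores else None

suggestion = recommend("you", history)
```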

And while the system that is ultimately developed to control an autonomous car will be much more powerful and ‘smart’, we can already see the technology seeping into new car models. The most up-to-date, whizz-bang safety features offer the clearest examples of this, with Adaptive Cruise Control, Traffic Sign Recognition, Parking Assistance, Collision Avoidance, Automatic Braking and more all becoming the norm – and all made possible through AI.

So, how is it that AI is making such a dramatic difference to our lives, and becoming such a disruptor to industry, at this point in history? It has been a well-known concept for decades (just think of how many films have been made in which AI plays a major role) and researchers, tucked away in universities and scientific organisations around the world, have been poking away at it for many years, exploring its feasibility and potential in sectors as diverse as robotics and health as well as automotive.

The answer, says Professor Milford, lies both in the increase in capability of various related technologies – computing power for AI, and battery power for the related field of EVs – and the financial windfall that many a far-sighted investor could sense was at their fingertips should the tech be made to work.

“Both EVs and AVs have been around as projects in universities for many decades,” says Professor Milford. “There were autonomous vehicles on roads in the 1980s and though they weren’t very fast, the core concept was there.

“But a few things have happened to make it suddenly come to the forefront. Key players recognised the potential worth of the market – Intel has claimed there will be a $7 trillion market for autonomous vehicles in the next 20-30 years – and once you get a few people saying that, there is, inevitably, the fear of missing out, so everybody gets on board.

“There have also been steady improvements in technology. In the case of EVs, the cost per energy storage for batteries has continued to improve and at some point you start crossing those critical thresholds where you don’t buy an electric vehicle or electric storage unit because it is fashionable or because you want to save the environment, you buy it because it is the best financial choice.

“Those are pivotal turning points and we will continue to see them.”

So, how ‘smart’ will vehicles become? If the desire, and the money, is there, will a Knight Rider-style KITT car – a true, thinking, autonomous vehicle – be just a few years away? And if not KITT, is AI far enough along that manufacturers will be able to meet some of the ambitious targets they have set themselves for the introduction of self-driving cars?

“For a car driving around and interacting with humans and other drivers, the debate is that if we get to a viable technology that is widespread, will that car have some level of actual intelligence or will it be a highly engineered rules-based system,” says Professor Milford. “The answer is probably somewhere in the middle, but we just don’t know yet.

“What we do know is that, when it comes to technology, we are very bad at predicting things. Some things will roll out very fast while some will, frustratingly, be in the same place in 30 years.

“Acknowledging there is great uncertainty is important.

“So, those targets are possible, but if you look at the predictions made over the last five years, none of them have really come true. And that is because getting this right is hard.”

While predictions can be tough, plenty of people in investment funds, governments and corporations are hanging their reputations, and shareholder and taxpayer wallets, on the success of the grand AI/AV project.

And the money involved is staggering.

A $7 trillion market is not to be sniffed at, and vast sums continue to pour into R&D. Last year, Ford invested $1 billion in AI start-up Argo AI, and earlier this year the investment business Softbank Vision Fund announced it is to sink $2.25 billion into GM’s Cruise AV program. The Softbank Group, along with a consortium of others, also made a monster investment in Uber at the beginning of this year, ploughing nearly $10 billion into a deal that left Softbank as the company’s biggest individual shareholder with a 15 per cent stake. And there are myriad other tales of heavy investments being made, and of big companies snapping up AI and AV start-ups.

The seemingly sedate world of academia is not immune to these moves, and there are plenty of stories of companies, keen to employ the sharpest minds in the field, enticing researchers to move from the university lab to the corporate workplace where money and resources seem endless.

“This is an issue on everyone’s mind at the moment,” says Professor Milford. “Many of our top students and staff are now at self-driving car companies in America and there are some incredible tales of people being aggressively and repeatedly targeted.

“It is tempting though,” he adds with a smile. “Anyone in this field who is visible internationally is approached regularly, and a lot of people have gone not just for money reasons, but for fulfilment reasons too. You get access to near unlimited resources, and being able to deploy technology in a fleet of cars and maybe save lives is incredibly appealing for researchers who might otherwise be frustrated in academia.”

This talent drain is, perhaps, indicative of Australia’s place in the AV/AI space – it’s the home of immensely talented people who could help the country be a leader in tech industries but who are hampered by slow government action and a lack of support for innovation and R&D. The problem has been recognised and many are working to correct it – the AV trials underway across the country are indicative of that – but there is no doubt, says Professor Milford, that the epicentre of many of these ambitious technological adventures is the U.S.

“These projects assessing the vehicles and what they might be able to do – that’s all very good, but one of the problems is that almost none of the R&D is happening in Australia, it is all happening overseas,” he says.

“We have the talent here and there is no reason that Australia shouldn’t play a role in at least some niche areas of the technology and develop some tech that is in millions of cars worldwide. That would be awesome.

“To do that, I think government needs to entertain more flexible attitudes to how they invest in projects, and universities need to entertain more flexible attitudes to how academics and students move in and out of academia into companies and start-ups and back again.

“My dream, which is not yet realised, is that these big companies, who are already spending so much money and know that we have an amazing resource of talent in Australia, will set up some sort of R&D operation here.

“And that is one thing I have been working on with colleagues. It’s tough because everything is very American-centric, and they say, ‘Why don’t we just buy your talent?’ So, it’s a work in progress.”

Safety, and the possibility of saving lives (reports have suggested that self-driving cars could reduce traffic fatalities by up to 90 per cent), is, Professor Milford says, one reason why academics are moving to the corporate world, and it is perhaps the most important word in any conversation about AI and autonomous vehicles. It may well be true that AVs will prevent the deaths of tens of thousands of people every year, but there is still some nervousness about the technology, and there is no denying that accidents, and indeed fatalities, attributable to its failure have occurred in testing.

“You encounter two extreme views of AVs quite often, and they are both wrong in my opinion, and unhelpful,” says Professor Milford. “One is that we have to deploy them now because they’ll save lives and there is a moral imperative. But that is not true because they are not good enough yet. Then there is the hostile reaction to autonomous cars that says they should never happen. And that also is not correct.

“Because there is so much money and so much concentration of talent and pressure in this area, it is a volatile situation and some people are probably pushing too quickly to deploy and test some of the systems on roads.

“At the same time, there really is something of a moral imperative because if we can bring down the road toll, with limited deployments, even by just a few per cent, there is strong pressure to do that.”

Over the years, decades in fact, Artificial Intelligence has not necessarily received good PR. While it may be whirring away in the background of devices and products that are changing the way we live, it’s usually the devices and products that get the glory, not the geeky software that makes them work.

Instead, AI must contend with the way it is portrayed in movies and pop culture. And that is almost always very bad indeed.

From The Matrix to The Terminator, I, Robot to 2001: A Space Odyssey, from Alien to Westworld to WarGames (all excellent movies by the way), AI seems to always be the ‘bad guy’.

Often the scenario goes something like this – humans develop learning software so advanced that it develops consciousness. It becomes self-aware and realises that humans are either (a) an imminent threat to its own existence, (b) a plague destroying the planet, or (c) getting in the way of a predetermined mission. Once it’s worked out which, the AI proceeds to work diligently on either wiping out all of humanity, subjugating humanity into some sort of often unnecessary slavery, or knocking off the few humans in its way so it can perform its mission.

While these may be just sci-fi stories made to entertain, the idea that AI could be something of a danger is not one that is laughed away. In fact, some significant figures in science and technology have raised concerns.

A few years ago, the late theoretical physicist Professor Stephen Hawking – who, incidentally, used a basic form of AI to help him communicate – said in an interview with the BBC, “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence that would take off on its own and redesign itself at an ever-increasing rate, humans, who are limited by slow, biological evolution, couldn’t compete and would be superseded.”

Elon Musk, founder of Tesla, also had some surprising words for the technology – surprising, perhaps, because AI plays a big role in the autonomous features of his company’s cars. Speaking at a symposium at U.S. research university MIT, Musk said, “I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it is probably that . . .”.

He then added, “With artificial intelligence, we are summoning the demon.”

On top of these comments, there was news last month that more than 160 AI-related companies and organisations, plus 2400 individuals, signed a pledge that stated in part that they would, ‘. . . neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.’

The pledge revolved around the use of Lethal Autonomous Weapons Systems (LAWS) – weapons that can identify, target, and kill a person without a human ‘in-the-loop’.

Does that not sound terrifying? Are we on the first step to the world of The Terminator and Arnold strolling about growling ‘I’ll be back’?

“You do get very strong opinions on this!” says Professor Milford, with a smile. “My own opinion is that in the long-term, general intelligence – that is, intelligence like a human – is a concern. A lot of people are working on how we can address and control that development path, but it’s worth remembering that it is very hard to get to that level of AI. It’s unlikely we’ll get there in the near future.

“The second thing to remember is that hardware is difficult too. While the core algorithms that you see Google and other companies use are growing in capability quite quickly, the physical hardware into which that would be deployed is not advancing as fast. So, one of the safeguards is that we don’t actually have millions of highly physically capable robots around us that could do a whole heap of damage!

“Overall, technology is a fantastic thing for humanity. It has issues that we have to control, and it can have unintended negative consequences that we have to anticipate and remedy, but if we are interested in furthering humanity and improving the overall average quality of life for everyone, then AI, robotics and all of these technologies, are critical.”

Source: Motor Trader Aug Edition

14 Aug 2018