Tootling along a local lane a while back, we found ourselves squeezing past an oncoming car bedecked front, back, top and sides with a veritable smörgåsbord of sensors and scanners.
Somewhere deep inside my skull my lizard brain started hissing: ‘They’re going to steal your soul!’
‘Shut up, Smaug,’ I told it. ‘It’s just a giant corporation, with more money than sense. They’re trying to data-capture this little-travelled bucolic byway in glorious 3D for future generations, who are begging to traverse it in driverless cars.’
‘Seems a lot of effort to go to for a bit of tatty tarmac that connects nowhere to nowhere else in particular,’ observed Smaug. And with that he went back to incubating a clutch of jewel-encrusted gold chalices.
Indeed it does seem like a lot of trouble. Especially when you’d have to 3D-map every road around here twice – winter and summer – to get a full-enough picture of where the edges and ditches are for an autonomous car to reliably navigate them year round.
It would appear that far bigger brains than mine are also grappling with this conundrum: for cars to do what human drivers do almost unthinkingly, AV tycoons may first have to build themselves a multi-billion-dollar, bazillion-petabyte 3D model of every road and street in the world upon which the yearning masses may sometime wish to be set free from holding steering wheels.
Even then, the AVs themselves will still have to cart around their own set of cameras, sensors and scanners to compare what’s outside with what the model says should be there. Some of those bits of kit currently cost more than the car they’re fitted to. Moreover, AV enthusiasts have started suggesting that it’s going to be so difficult that unpredictable factors like pedestrians and cyclists ought to be ‘re-behavioured’ (shoved out of the way, in English) to give their AVs a decent shot at achieving deathless perfection.
Fear not though. When the going gets tough, the AV engineers have another trick up their sleeve. They simply redefine the going into something more manageable.
A team in Cambridge called Wayve say they have used an artificial intelligence technique called Deep Reinforcement Learning to train a vehicle to perform a simple task, using only a single front-facing video camera, in half an hour.
In the land of blind faith, can the one-eyed car be king?
Simple is the operative word, however. They got a Renault Twizy electric quadricycle to teach itself to traverse a 250m access road with a bend on it. To be fair to them, that’s really not a bad start considering the whole Cambridge project might have cost less than a single LIDAR set-up on one of Tesla’s AVs (which Tesla seems to have shelved, by the way).
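For a sense of what ‘reinforcement learning’ means at its barest, here is a deliberately tiny sketch – emphatically not Wayve’s system, which trains a deep network on real camera images, but the same trial-and-error principle boiled down to one number. A simulated car observes only its lateral offset from the lane centre, a single logistic unit picks left or right, and plain REINFORCE nudges the parameters towards whichever choices beat the running average reward. Every dynamic, step size and reward in it is invented for illustration:

```python
import numpy as np

# Toy REINFORCE sketch of learning to steer. NOT Wayve's system: the
# whole "camera" here is one number (lateral offset from lane centre)
# and the "deep network" is a single logistic unit. All parameters are
# invented for illustration.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 0.0, 0.0   # policy parameters: P(steer right) = sigmoid(w*offset + b)
baseline = 0.0    # running estimate of average reward

for step in range(5000):
    offset = rng.normal(0.0, 1.0)           # observed lateral offset
    p_right = sigmoid(w * offset + b)
    action = 1 if rng.random() < p_right else 0
    steer = 0.5 if action == 1 else -0.5    # steer right or left
    reward = -abs(offset + steer)           # closer to the centre = better
    baseline = 0.99 * baseline + 0.01 * reward
    advantage = reward - baseline
    # REINFORCE: move parameters along grad log P(action), scaled by advantage
    w += 0.1 * advantage * (action - p_right) * offset
    b += 0.1 * advantage * (action - p_right)

# After training, the policy should steer left when right of centre
# and right when left of centre.
print(sigmoid(w * 1.0 + b), sigmoid(w * -1.0 + b))
```

Scale the one-number observation up to a full camera frame and the logistic unit up to a deep network, and you have the shape – though emphatically not the difficulty – of what the Cambridge team managed in half an hour.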
You have to wonder whether those other engineers working on billion-buck AV projects, on seeing Wayve, smote their foreheads, crying ‘Why didn’t we think of this?’ or whether they smirked and said: ‘Oh yes, the single camera and deep reinforcement approach. We remember that. Wait until you try to turn out on to the A14.’
And Wayve do admit “A fault that may be identified is that the agent may choose to avoid more difficult manoeuvres, e.g. turning right in the UK (left in US).”
This Twizy was great until we fitted it with AI. Now it’s as stubborn as a mule. Who’da thunk?
Nevertheless, Uber is putting money into Wayve – although whether solely because of the concept or also for PR purposes remains to be seen.
Certainly, a story on the project in The Cambridge News on 16 August ran under a bracingly Uber-friendly headline: ‘Meet the Cambridge duo behind safe driverless car technology backed by Uber’.
Well maybe not quite as friendly as Uber might have hoped. So there’s an unsafe driverless car technology backed by Uber somewhere? Why, yes there is.
More interestingly, Wayve’s vision of driverless transportation is very different in scale and scope from the one Uber has been promoting – the one where driverless Ubers whisk you anywhere you want; even up my local byway from nowhere to nowhere else in particular (and try not to hit the fibre broadband cabinet hidden in the cow parsley).
In Wayve’s World, the goal of making “autonomous vehicles a day-to-day reality for everyone” will be achieved by restricting that part of reality to the last mile.
Wayve’s technology is being developed with safety in mind.
Co-founder Alex [Kendall] told Cambridgeshire Live: “In the future we imagine a world where mass transit systems move people and goods around – trains, buses, aircraft or hyperloops – between major hubs.
“We are targeting the last mile transportation, to get from that hub to the final destination through safe and intelligent autonomy.”
Two things to note here. First, they don’t see AVs as a technology for the actual, real world that we actually, really live in. Rather, AVs won’t happen until an imagined world of mass transit hubs conveniently turns up to do the heavy lifting.
Second, there’s the safe word again. Twice. Uber must mainly be into Wayve for the artificial driving intelligence technology. But they also clearly believe that people see AVs as unsafe, and AV promoters as hubristic. Else why the Twizys and the deprecatory last mile stuff?
Investors are not going to keep bankrolling Uber while it loses billions a year on expensive white-elephant AV programmes, either. If Wayve are right that their approach to AV driving intelligence – ‘capable of working on most cars’ – will be much cheaper in the long run, then that would make more sense to Uber. Especially if the cost of constructing the imagineered wonder world of mass transit systems and major hubs can be passed on to other people while Uber get to dominate the most profitable ‘last mile’ travel sector.
Call me a doubter but I can’t take Wayve’s talk of building a brain that will have artificial driving intelligence terribly seriously.
In his highly recommended book, Neuropolis: A Brain Science Survival Guide, Robert Newman relates how, when shown a photo or picture, a human breaks up the visual information and processes it in 30 different parts of the brain. If the visual stimulus requires a physical response – say, when driving a car – the responses are bewilderingly more complex. There is more that we don’t yet understand about how the brain and body work together – even in what order – than there is that we do.
If Uber ends up ditching much of the prohibitively costly, unwieldy and demonstrably unsafe LIDAR-and-maps approach to AV design in favour of autodidactic cars, shouldn’t we first try to learn more about how our own brains and intelligence work before training comparatively simple artificial ones to drive safely on real roads?