Waymo van prang, self-driving cars still suck, AI research jobs, and more

Little good news to increase your trust in machines here, to be honest

Uh oh, not another self-driving car crash

It’s Waymo’s turn to be involved in a car crash. Local news reports from Arizona showed a beaten-up Waymo van and a trashed Honda sedan amid piles of debris on a road in Chandler on Friday.

Check out the impact in a TV report from ABC 15.

The white Waymo van was in autonomous mode when the crash happened, according to the police. It’s not entirely clear who was at fault, though. Police say the Honda sedan smashed into the Waymo motor after trying to avoid hitting another car on Chandler Boulevard in Chandler, Arizona.

And, by the way, speaking of semi-autonomous cars: Tesla’s Elon Musk hung up on the chief of America’s transportation watchdog when the regulator called with concerns about Tesla corporate blog posts blaming Autopilot deaths on drivers.

The Register has contacted Waymo for comment.

Self-driving cars still suck

On the subject of self-driving cars, a recent report from California’s Department of Motor Vehicles reveals that autonomous vehicles still make simple mistakes.

The DMV asked eight companies – Baidu, Delphi Automotive/Aptiv PLC, Drive.ai, GM Cruise, Nissan, Telenav, Waymo, and Zoox – to identify common failures and say how often human drivers had to take over during mishaps.

The rate of disengagements varies, and not every company disclosed this information. Waymo had the fewest disengagements per mile driven.

Interestingly, the companies had similar problems. Many, like Nissan, Drive.ai, and Telenav, experienced “localization errors”, where the GPS or maps failed and the vehicle was unable to work out its position relative to its environment, so it sometimes braked suddenly or swerved in and out of lanes.

A few had sensor errors. Cruise said some of the data coming in from the car’s many sensors did not quite match, giving conflicting information and causing the car to behave erratically. Errors included failing to give way to another vehicle trying to enter a lane, or not braking hard enough for a stop sign.

More worryingly, some could not recognise vital objects like traffic lights and signs. Waymo cars have been known to ignore a “no right turn on red” signal. Baidu and Delphi Automotive/Aptiv also reported similar issues. Zoox had a range of planning and hardware discrepancies that led to poor driving and localization issues.

Read the reports in full here.

Trust in Facebook’s AI. Yeah, right.

Facebook is hiring a “Trust in AI” research scientist familiar with machine learning and its ethical impacts to join its empire in Silicon Valley.

The posting comes as the social media giant is desperate to clean up its public image and scrub away the stink of the Cambridge Analytica (now Emerdata) data leaks. CEO Mark Zuckerberg has repeatedly pointed to some magical, hand-wavey AI to fix all its problems. We’ll see if it works.

Facebook wants AI to autonomously detect and eradicate hate speech, pornography, fake news, and terrorist propaganda. In reality, however, that job still falls to an army of unfortunate human peons who must deal with all that horrible content. Zuckerberg said he hoped to expand the team to 20,000 moderators.

Now, he also hopes good old algorithms will detect biases and improve fairness, safety, privacy, transparency – the whole shebang – in the company’s data and products.

On Wednesday, Isabel Kloumann, a Facebook research scientist, spoke at F8 and introduced a tool called “Fairness Flow”. It’s an all-knowing algorithm that helps sniff out biases, apparently.

Kloumann did not reveal many technical details about how the algorithm works. What data was it trained on? What features does it look for? Which products is it used in? Perhaps the new Trust in AI research scientist will have a better idea.
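Facebook hasn’t said what Fairness Flow actually computes, but tools in this space typically compare a model’s behaviour across groups of users. For the curious, here’s a minimal, purely illustrative TypeScript sketch of one common check, demographic parity, which asks whether a classifier’s positive-prediction rate differs between groups. Everything in it – the Prediction shape, the function names, the threshold – is our assumption, not Facebook’s API.

```typescript
// Illustrative only: a demographic-parity check, NOT Facebook's Fairness Flow.
// The Prediction shape and all names here are hypothetical.
interface Prediction {
  group: string;      // e.g. a demographic attribute of the user
  predicted: boolean; // the classifier's decision for that user
}

// Positive-prediction rate for each group.
function positiveRates(preds: Prediction[]): Map<string, number> {
  const tallies = new Map<string, { pos: number; total: number }>();
  for (const p of preds) {
    const t = tallies.get(p.group) ?? { pos: 0, total: 0 };
    t.total += 1;
    if (p.predicted) t.pos += 1;
    tallies.set(p.group, t);
  }
  const rates = new Map<string, number>();
  for (const [group, t] of tallies) rates.set(group, t.pos / t.total);
  return rates;
}

// Gap between the best- and worst-treated groups; a large gap is a red flag.
function parityGap(preds: Prediction[]): number {
  const rates = [...positiveRates(preds).values()];
  return Math.max(...rates) - Math.min(...rates);
}

// Usage: flag the model if the gap exceeds a (hypothetical) tolerance.
// if (parityGap(predictions) > 0.1) console.warn('possible bias detected');
```

A real system would have to slice by many attributes and products at once, and demographic parity is only one of several competing fairness definitions, which is partly why the “what does it actually measure?” question matters.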

If you missed F8, we wrote about some of its announcements, including a horrendous new dating service, an updated PyTorch framework, and a VR headset.

“The ideal candidate will also be a thought-leader in AI ethics, law and policy and engage with teams around the company to help develop their own technologies, practices and processes in the space,” according to the job posting.

All you need is a PhD in machine learning, AI, AI ethics, law, or policy, and some good research experience.

If you fancy yourself as an ethical AI thought leader then apply here.

Free Go-playing AI agent

Here’s more Facebook news: the Facebook AI Research (FAIR) team has released the model and code for ELF OpenGo, a bot that has beaten a few top-ranking human and machine Go players.

The code for DeepMind’s AlphaGo, AlphaGo Zero, and AlphaZero is a closely guarded secret, and other companies, like Tencent and Facebook, have been trying to replicate its results.

“Inspired by DeepMind’s work, we kicked off an effort earlier this year to reproduce their recent AlphaGoZero results using FAIR’s Extensible, Lightweight Framework (ELF) for reinforcement learning research,” it said in a blog post.

“The goal was to create an open source implementation of a system that would teach itself how to play Go at the level of a professional human player or better. By releasing our code and models we hoped to inspire others to think about new applications and research directions for this technology.”

ELF OpenGo isn’t too shabby. It has beaten LeelaZero, the strongest publicly available bot, winning 198 games and losing just two, and went 14-0 against four of the top 30 human Go players. The trained model requires only a single GPU and makes a move in 50 seconds.

But to train it to that level you’ll need up to 2,000 GPUs. If you have that many spare lying around, you can play with ELF OpenGo here. If not, just carry on reading.

New FB AI labs

Okay, last bit of news from Facebook. It is opening two new research labs in Seattle and Pittsburgh after hiring three academics from the University of Washington and Carnegie Mellon University.

Expertise in AI is rare. Many researchers, lured by sky-high salaries and large amounts of data and compute, choose to work for tech conglomerates. Others, driven by academic freedom and the allure of teaching, prefer to stay at universities. But now companies are increasingly happy to offer a happy medium, letting researchers split their time between industry and academia. You get the advantages of both, so why not?

Luke Zettlemoyer, a professor at the University of Washington and an expert in natural language processing, will be in the Seattle lab. Abhinav Gupta, an associate professor, and Jessica Hodgins, a professor, both in Carnegie Mellon University’s robotics department, are heading the lab in Pittsburgh, according to the New York Times.

The brain drain has left people worried about who will be left to educate the next generation of AI engineers.

Magenta on JavaScript

Developers at Google’s creative machine learning team, Magenta, have released a JavaScript API for some of its tools so they can run in web browsers.

The release of TensorFlow in JavaScript – TensorFlow.js – makes it possible to write APIs like Magenta.js. Some of the tools released include MusicVAE, a variational autoencoder that creates new melodies from two input tunes (we wrote about this in a previous roundup); MelodyRNN, an LSTM that generates melodies; DrumsRNN, another LSTM, this one generating drum kit patterns; and ImprovRNN, a model similar to MelodyRNN.
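For a taste of what the API looks like, below is a short TypeScript sketch that loads a MusicVAE checkpoint in the browser and interpolates between two melodies. This is our own sketch against the @magenta/music package rather than code from the Magenta team, and the checkpoint URL is just one of the checkpoints Magenta hosts; treat the exact path as an assumption.

```typescript
import * as mm from '@magenta/music';

// One of Magenta's hosted MusicVAE checkpoints (treat the exact URL as an
// assumption; see Magenta's checkpoint listing for current paths).
const CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small';

// Blend two melodies: MusicVAE encodes both into its latent space and
// decodes points along the line between them, yielding new in-between tunes.
async function blendMelodies(a: mm.INoteSequence, b: mm.INoteSequence) {
  const mvae = new mm.MusicVAE(CHECKPOINT);
  await mvae.initialize(); // downloads the weights via TensorFlow.js

  const steps = 5;
  const interpolations = await mvae.interpolate([a, b], steps);

  // Play the midpoint melody through the browser's Web Audio API.
  const player = new mm.Player();
  player.start(interpolations[Math.floor(steps / 2)]);

  mvae.dispose(); // free the underlying TensorFlow.js tensors
}
```

The RNN models follow the same basic shape: construct the model with a checkpoint URL, call initialize(), then ask it to continue a seed sequence.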

You can experiment with Magenta.js now. ®