Facebook AI guru alt-tabs out, Nvidia EULA audits, Baidu changes, chip tricks, and more

Machine-learning news and code to pore over

Facebook AI chief LeCun steps aside – Yann LeCun, considered to be a pioneer of neural networks for computer vision, has stepped back as Facebook’s AI supremo. Jérôme Pesenti, ex-CEO of medical startup BenevolentAI and former IBM Watson vice president, will take the reins, according to Quartz journo Dave Gershgorn.

But LeCun isn’t completely legging it. He will remain at Facebook and continue to lead the social network’s machine-learning boffinry nerve-center FAIR in New York. Last year, The Register heard rumors that LeCun was tired of menial managerial tasks. Now, that boring management stuff has been offloaded to other people.

“There was a need for someone to basically oversee all the AI at Facebook, across research, development, and have a connection with product,” LeCun confirmed to Gershgorn on Tuesday this week.

Pesenti, as veep of AI, and Joaquin Candela – head of Facebook’s Applied Machine Learning team in San Francisco – will both report to CTO Mike Schroepfer.

Schroepfer said in a Facebook post that LeCun was now the social network’s Chief AI Scientist, effectively allowing Pesenti to oversee the website’s machine-learning-powered products so LeCun can focus on research.

Squeeze more out of CPUs – Amazon has published a tutorial on how to use the neural-network acceleration engine NNPACK with Apache’s deep-learning library MXNet. NNPACK is optimized for performing inference on CPUs, and is useful when your hardware lacks a suitable GPU for AI tasks.

“NNPACK is available for Linux and macOS X platforms. It’s optimized for the Intel x86-64 processor with the AVX2 instruction set, as well as the ARMv7 processor with the NEON instruction set and the ARMv8,” explained AWS technical evangelist Julien Simon.
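For the curious, CPU inference with MXNet looks roughly like this – a hypothetical sketch, not lifted from the AWS tutorial. It assumes an MXNet build compiled with NNPACK support, and uses a stock ResNet-18 from the Gluon model zoo as a stand-in model:

    # Minimal CPU inference with MXNet (hypothetical sketch; assumes MXNet was
    # compiled with NNPACK enabled -- e.g. a USE_NNPACK=1 build, which is an
    # assumption here; see the AWS tutorial for the exact build steps).
    import mxnet as mx
    from mxnet.gluon.model_zoo import vision

    ctx = mx.cpu()  # NNPACK only accelerates CPU operators
    net = vision.resnet18_v1(pretrained=True, ctx=ctx)

    # A dummy 224x224 RGB image batch in NCHW layout
    x = mx.nd.random.uniform(shape=(1, 3, 224, 224), ctx=ctx)
    print(net(x).shape)  # (1, 1000) ImageNet class scores

Note that nothing NNPACK-specific appears in the code itself: the acceleration kicks in, if at all, inside MXNet’s convolution operators when the library has been compiled in.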

New Baidu AI lab hires – Chinese web juggernaut Baidu has announced new labs and fresh research-scientist hires as it reshuffles its research efforts after Andrew Ng ejected from the biz.

Kenneth Church – who served as president of the Association for Computational Linguistics, an international society for people working on natural language processing, and has previously worked at IBM Watson, Microsoft, and AT&T labs – has joined Baidu.

The Chinese internet monster has also snared Jun Luke Huan and Hui Xiong away from their academic posts at the University of Kansas and Rutgers University, respectively, in the US. It has also created two new internal research factions: the Business Intelligence Lab and the Robotics and Autonomous Driving Lab. Now there are a total of five labs, including its Institute of Deep Learning, Big Data Lab and Silicon Valley Artificial Intelligence Lab.

It’s not entirely clear what happened at Baidu to prompt this internal shakeup. But The Register has heard whispers of internal politics and a culture clash between the teams in China and America that led to the departure of several research staff including its previous chief scientist, Andrew Ng, and AI lab director, Adam Coates.

Squeeze more for less on your GPU – OpenAI published TensorFlow code for gradient checkpointing, a technique that reduces the memory needed on graphics processor chips to train large neural networks.

It’s a tricky concept to understand, but the gist is that the software reduces the memory needed to carry out gradient descent, an algorithm often used to train models.

Feed-forward neural networks are a little clumsy to train because backpropagation processes the layers in reverse order, which means the results obtained from running through all the nodes in the earlier layers have to be kept in memory. So the deeper your network, the more memory it takes to train it.

Here’s where gradient checkpointing comes in. Selected nodes are marked as checkpoints. “These checkpoint nodes are kept in memory after the forward pass, while the remaining nodes are recomputed at most once. After being recomputed, the non-checkpoint nodes are kept in memory until they are no longer required,” according to OpenAI.

OpenAI researchers Tim Salimans and Yaroslav Bulatov said they could fit models more than ten times larger onto a GPU, at the cost of a 20 per cent increase in computation time. You can find out more here.
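To make the idea concrete, here’s a toy reimplementation of the general technique in plain Python with NumPy – an illustrative sketch, not OpenAI’s actual TensorFlow code. It differentiates a deep chain of tanh layers while keeping only every k-th activation in memory, recomputing the rest segment by segment during the backward pass:

    # Toy gradient checkpointing in NumPy (illustrative sketch, not OpenAI's code).
    # The forward pass keeps only every k-th activation; the backward pass
    # recomputes the activations inside each segment from its checkpoint, at most once.
    import numpy as np

    def layer(x):
        return np.tanh(x)  # one "layer" of the toy network

    def layer_grad(x, g):
        return g * (1.0 - np.tanh(x) ** 2)  # backprop through tanh

    def forward(x, n_layers, k):
        checkpoints = {}
        for i in range(n_layers):
            if i % k == 0:
                checkpoints[i] = x  # keep this activation in memory
            x = layer(x)
        return x, checkpoints       # O(n/k) stored activations, not O(n)

    def backward(checkpoints, n_layers, k, g):
        last = ((n_layers - 1) // k) * k            # nearest checkpoint to the end
        for start in range(last, -1, -k):
            end = min(start + k, n_layers)
            acts = [checkpoints[start]]             # recompute the segment's inputs
            for i in range(start, end - 1):
                acts.append(layer(acts[-1]))
            for i in range(end - 1, start - 1, -1):
                g = layer_grad(acts[i - start], g)  # backprop through the segment
        return g                                    # gradient of the output wrt the input

    x = np.random.randn(4)
    out, cps = forward(x, n_layers=12, k=4)         # checkpoint every 4th layer
    g = backward(cps, n_layers=12, k=4, g=np.ones_like(out))

    # Sanity check against ordinary backprop, which stores every activation
    acts = [x]
    for _ in range(12):
        acts.append(layer(acts[-1]))
    g_ref = np.ones_like(out)
    for a in reversed(acts[:-1]):
        g_ref = layer_grad(a, g_ref)
    assert np.allclose(g, g_ref)

With n layers and checkpoints every k layers, peak memory drops from O(n) stored activations to roughly O(n/k + k), at the price of one extra forward pass through each segment – the same trade-off OpenAI’s code offers for general TensorFlow graphs.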

A new AI computer vision challenge – Google researchers have launched a contest to improve image compression techniques using neural networks as well as more traditional methods.

The announcement is linked to a workshop at the upcoming Computer Vision and Pattern Recognition conference (CVPR), happening in Utah, USA, in June. The goal is to come up with novel methods to compress images.

A training dataset containing thousands of pictures has been released, and consists of two parts: a professional dataset (2GB) and a mobile dataset (4GB).

“The datasets are collected to be representative for images commonly used in the wild, containing thousands of images. While the challenge will allow participants to train neural networks or other methods on any amount of data (but we expect participants to have access to additional data, such as ImageNet and the Open Images Dataset), it should be possible to train on the datasets provided,” wrote Michele Covell, a scientist at Google Research, in a blog post.

The validation part of the dataset will be released this month, and the test dataset will be made public on April 15, before the competition closes on April 22. The results will be announced on May 29, and participants can submit a paper to the Workshop and Challenge on Learned Image Compression (CLIC) at CVPR by June 4. Previous research has shown image compression is possible with recurrent neural networks and generative adversarial networks. The CLIC workshop is being sponsored by Google, Twitter, and ETH Zurich, a Swiss university.

Nvidia can now audit CUDA Toolkit users – Nvidia has updated the software licensing agreement for its CUDA Toolkit to allow it to audit organizations, a move that has startled individual developers and academics.

It allows Nvidia to audit CUDA toolkit users to check if they are using the toolchain in an appropriate manner – by showing up at your door if necessary. Enterprise-grade software licenses tend to include these auditing requirements, but to attach them to software development tools that can be used by anyone – from individuals to corporations – has been described as extreme by Reg readers who’ve been in touch about this developing situation.

“During the term of the AGREEMENT and for three (3) years thereafter, you will maintain all usual and proper books and records of account relating to the CUDA Licensed Software provided under the AGREEMENT. During such period and upon written notice to you, NVIDIA or its authorized third party auditors subject to confidentiality obligations will have the right to inspect and audit your Enterprise books and records for the purpose of confirming compliance with the terms of the AGREEMENT,” the end-user license agreement (EULA) reads.

We asked Nvidia to clarify what exactly counts as a breach of agreement. A spokesperson told us: “Anyone can develop applications on CUDA or use CUDA-based applications for free. What we want to protect against is a person or entity taking CUDA, re-naming (‘rebranding’) it or charging for it. That said, we have no current plans to audit anyone under our CUDA license, we haven’t done so in the past, and we hope that we’ll not have to do so in the future.”

The EULA goes on to say that if Nvidia finds users in breach of the agreement’s terms, they will be required to pay Nvidia the cost of conducting “the inspection and audit.”

The audit clause was added in September, and spotted at the turn of 2018. It comes at a time when Nvidia also announced it had updated its end-user licensing agreement to ban vendors from selling GeForce and Titan GPUs for datacenters, except for processing blockchain-related activities.

Look out for more on this issue this week at El Reg.

Nvidia’s Xavier chip touted again – Let’s just keep talking about Nvidia. Earlier this month it had another go at unveiling Xavier, a processor tailored for self-driving cars.

Xavier was previously teased by Nv CEO Jensen Huang this time last year. Now it seems the thing is inching closer to production. Huang said the SoC will be used as part of the company’s Drive PX Pegasus system, a computer for powering fully autonomous level-five Total Recall-style Johnny Cabs.

Level five describes a vehicle control system that needs only a destination and handles the entire journey itself, as opposed to levels one and two, which are varying degrees of intelligent cruise control.

“The computational requirements of robotaxis are enormous – perceiving the world through high-resolution, 360-degree surround cameras and lidars, localizing the vehicle within centimeter accuracy, tracking vehicles and people around the car, and planning a safe and comfortable path to the destination. All this processing must be done with multiple levels of redundancy to ensure the highest level of safety. The computing demands of driverless vehicles are easily 50 to 100 times more intensive than the most advanced cars today,” the biz wrote in a blog post.

Level five? We’ll believe it when we see it.

TensorFlow 1.5.0 – Version 1.5.0 of the popular open-source AI framework TensorFlow has been released. According to its GitHub page, a few bugs have been patched, and the major changes include the following (a quick sanity check for the new binaries is sketched after the list):

  • Prebuilt binaries are now compiled against CUDA 9 and cuDNN 7.
  • Linux binaries are built using Ubuntu 16 containers, potentially introducing glibc incompatibility issues with Ubuntu 14.
  • Starting from the 1.6 release, prebuilt binaries will use AVX instructions, which may break TensorFlow on older CPUs.
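A quick post-upgrade sanity check, assuming the TensorFlow 1.x Python API – a hypothetical snippet, not taken from the release notes:

    # Confirm the installed version and whether the binary was built with CUDA.
    import tensorflow as tf

    print(tf.__version__)                # expect '1.5.0'
    print(tf.test.is_built_with_cuda())  # True for the CUDA 9 / cuDNN 7 builds

Anyone still on Ubuntu 14 or older silicon should weigh the caveats above before upgrading. ®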