Jun 3 · Issue 149

Hey folks,

This week in deep learning, we bring you a literature review of Natural Language Processing bias studies, this lawsuit over privacy concerns around facial recognition, and these NLP and Computer Vision TensorFlow tutorials.

You may also enjoy learning about this AI startup that is exploring biological neural networks or this subtitle translation model from Netflix.

For content related to Reinforcement Learning, check out the DADS unsupervised reinforcement learning method from Google and this tutorial on automating Doom using TensorFlow.

In the image segmentation world, we found these PyTorch implementations of loss functions for image segmentation, and a new method called Super-BPD: Super Boundary-to-Pixel Direction for Fast Image Segmentation with the accompanying code.

As always, happy reading and hacking. If you have something you think should be in next week's issue, find us on Twitter: @dl_weekly.

Until next week!

Industry

AI Startup Combines Mouse Neurons With Silicon Chips To Make Computers Smarter, Faster

Cortical Labs is exploring the efficacy of integrating live biological neurons with traditional silicon chips.

 

Microsoft researchers say NLP bias studies must consider role of social hierarchies like racism

Microsoft researchers analyzed 146 NLP bias research papers and concluded that the research field lacks clear descriptions of bias and fails to explain how, why, and to whom that bias is harmful.

 

Netflix Builds Proof-of-Concept AI Model to Simplify Subtitles for Translation

Netflix developed a model that can simplify and translate subtitles from English to multiple languages.

 

ACLU sues facial recognition startup Clearview AI for privacy and safety violations

“If allowed, Clearview will destroy our rights to anonymity and privacy — and the safety and security that both bring. People can change their names and addresses to shield their whereabouts and identities from individuals who seek to harm them, but they can’t change their faces,” the ACLU said in a statement accompanying the lawsuit.

 

Fighting fire with AI: Using deep-learning to help predict wildfires in the US

A new artificial intelligence model could help fire agencies allocate resources to mitigate wildfire risks across the West.

Mobile + Edge

New AI technique speeds up language models on edge devices

Hardware-Aware Transformer models are smaller than baseline models and can run three times faster on devices like the Raspberry Pi 4.

 

Revved Up Retail: Mercedes-Benz Consulting Optimizes Dealership Layout Using Modcam Store Analytics

Retailers bring real-time analytics powered by NVIDIA Jetson Nano to their stores.

 

AI Benchmarks For Mobile Devices And What You Should Know

Making sense of mobile device benchmarks that measure AI and machine learning performance.

Learning

DADS: Unsupervised Reinforcement Learning for Skill Discovery

DADS is a novel unsupervised reinforcement learning algorithm that discovers task-agnostic skills based on their predictability and diversity, and can be applied to learn a broad range of complex behaviors.
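
For a sense of how predictability and diversity enter the objective, here is a heavily simplified sketch of the kind of skill-dynamics intrinsic reward DADS optimizes. It assumes a learned model q(s' | s, z) of the next state given the current state and a skill; the function names, the skill prior, and the sample count are all illustrative, not taken from the paper's implementation.

```python
import numpy as np

def dads_intrinsic_reward(log_q, state, skill, next_state, sample_skill, num_samples=100):
    """Sketch of a DADS-style intrinsic reward.

    log_q(next_state, state, skill) -> log-density under a learned skill-dynamics
    model q(s' | s, z). The reward is high when the transition is predictable
    given the current skill (first term) but hard to predict under other skills
    drawn from the prior (second term), encouraging diverse, distinguishable skills.
    """
    predictability = log_q(next_state, state, skill)
    other_skills = [sample_skill() for _ in range(num_samples)]
    marginal = np.mean([np.exp(log_q(next_state, state, z)) for z in other_skills])
    return predictability - np.log(marginal + 1e-12)
```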

 

Analyzing pretraining approaches for vision and language tasks

Facebook researchers show how several simple, infrequently explored design choices in pretraining can help achieve high performance on tasks that combine language and visual understanding.

 

NLP and Computer Vision Tutorials on TensorFlow Hub 

TensorFlow Hub tutorials to help you get started with using and adapting pre-trained machine learning models to your needs.
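
As a quick flavor of what the tutorials walk through, here is a minimal sketch of loading a pre-trained text embedding from TensorFlow Hub and fine-tuning it as part of a small classifier. The model handle and layer sizes are illustrative choices, not taken from any particular tutorial.

```python
import tensorflow as tf
import tensorflow_hub as hub

# One example pre-trained English sentence embedding hosted on TF Hub.
EMBEDDING_HANDLE = "https://tfhub.dev/google/nnlm-en-dim50/2"

# hub.KerasLayer wraps the pre-trained module; trainable=True fine-tunes its weights.
embedding = hub.KerasLayer(EMBEDDING_HANDLE, input_shape=[],
                           dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    embedding,                                    # raw sentence -> 50-d vector
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),                     # binary classification logit
])

model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])

# model.fit(train_sentences, train_labels, epochs=5)  # your own labeled text data
```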

 

Automating Doom with Deep Q-Learning: An Implementation in Tensorflow

This article explores how deep Q-learning can be applied to train an agent to play the classic video game Doom.
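
At the heart of the approach is the standard deep Q-learning update. The sketch below shows that update in TensorFlow 2 on a stack of game frames; the network architecture, action count, and hyperparameters are illustrative and not taken from the article.

```python
import tensorflow as tf

NUM_ACTIONS = 3   # e.g., move left, move right, shoot (illustrative)
GAMMA = 0.99      # discount factor

def build_q_network():
    # Small conv net mapping a stack of 4 grayscale frames to one Q-value per action.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu", input_shape=(84, 84, 4)),
        tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(NUM_ACTIONS),
    ])

q_net, target_net = build_q_network(), build_q_network()
target_net.set_weights(q_net.get_weights())   # periodically synced in full training
optimizer = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(states, actions, rewards, next_states, dones):
    # Bellman target: r + gamma * max_a' Q_target(s', a') for non-terminal transitions.
    next_q = tf.reduce_max(target_net(next_states), axis=1)
    targets = rewards + GAMMA * next_q * (1.0 - dones)   # dones is a float 0/1 tensor
    with tf.GradientTape() as tape:
        q_values = q_net(states)
        chosen_q = tf.reduce_sum(q_values * tf.one_hot(actions, NUM_ACTIONS), axis=1)
        loss = tf.reduce_mean(tf.square(targets - chosen_q))   # TD error (MSE)
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return loss
```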

Libraries & Code

[GitHub] deepmind/acme

A library of reinforcement learning components and agents.

 

[GitHub] JunMa11/SegLoss

Loss functions for image segmentation.
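
As an example of the kind of loss the repository collects, here is a generic soft Dice loss for binary segmentation in PyTorch; it is a standard formulation for illustration, not the repository's exact implementation.

```python
import torch

def soft_dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    logits:  (N, 1, H, W) raw network outputs
    targets: (N, 1, H, W) ground-truth masks in {0, 1}
    """
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3)                                   # sum over channel and spatial dims
    intersection = torch.sum(probs * targets, dims)
    cardinality = torch.sum(probs + targets, dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()                           # minimize (1 - Dice overlap)

# Example usage with random tensors:
# loss = soft_dice_loss(torch.randn(2, 1, 64, 64),
#                       torch.randint(0, 2, (2, 1, 64, 64)).float())
```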

Papers & Publications

Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search

Abstract: Neural Architecture Search (NAS) explores a large space of architectural motifs -- a compute-intensive process that often involves ground-truth evaluation of each motif by instantiating it within a large network, and training and evaluating the network with thousands of domain-specific data samples. Inspired by how biological motifs such as cells are sometimes extracted from their natural environment and studied in an artificial Petri dish setting, this paper proposes the Synthetic Petri Dish model for evaluating architectural motifs. In the Synthetic Petri Dish, architectural motifs are instantiated in very small networks and evaluated using very few learned synthetic data samples (to effectively approximate performance in the full problem). The relative performance of motifs in the Synthetic Petri Dish can substitute for their ground-truth performance, thus accelerating the most expensive step of NAS. Unlike other neural network-based prediction models that parse the structure of the motif to estimate its performance, the Synthetic Petri Dish predicts motif performance by training the actual motif in an artificial setting, thus deriving predictions from its true intrinsic properties. Experiments in this paper demonstrate that the Synthetic Petri Dish can therefore predict the performance of new motifs with significantly higher accuracy, especially when insufficient ground truth data is available. Our hope is that this work can inspire a new research direction in studying the performance of extracted components of models in an alternative controlled setting.

 

Super-BPD: Super Boundary-to-Pixel Direction for Fast Image Segmentation

Abstract: Image segmentation is a fundamental vision task and a crucial step for many applications. In this paper, we propose a fast image segmentation method based on a novel super boundary-to-pixel direction (super-BPD) and a customized segmentation algorithm with super-BPD. Precisely, we define BPD on each pixel as a two-dimensional unit vector pointing from its nearest boundary to the pixel. In the BPD, nearby pixels from different regions have opposite directions departing from each other, and adjacent pixels in the same region have directions pointing to the other or each other (i.e., around medial points). We make use of such property to partition an image into super-BPDs, which are novel informative superpixels with robust direction similarity for fast grouping into segmentation regions. Extensive experimental results on BSDS500 and Pascal Context demonstrate the accuracy and efficiency of the proposed super-BPD in segmenting images. In practice, the proposed super-BPD achieves comparable or superior performance with MCG while running at ~25fps vs. 0.07fps. Super-BPD also exhibits a noteworthy transferability to unseen scenes. The code is publicly available at this https URL.
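
To make the BPD definition concrete, here is a small sketch that computes a ground-truth boundary-to-pixel direction field directly from a binary boundary mask (the paper trains a CNN to predict this field and then groups pixels into super-BPDs); the function name and the use of SciPy's distance transform are illustrative.

```python
import numpy as np
from scipy import ndimage

def bpd_field(boundary_mask):
    """Compute a boundary-to-pixel direction field from a binary boundary mask.

    boundary_mask: (H, W) bool array, True where a pixel lies on a region boundary.
    Returns an (H, W, 2) array of unit vectors pointing from each pixel's
    nearest boundary pixel toward that pixel.
    """
    # distance_transform_edt gives, for every non-boundary pixel, the coordinates
    # of its nearest boundary pixel (boundary pixels form the zero/background set).
    _, nearest = ndimage.distance_transform_edt(~boundary_mask, return_indices=True)
    ys, xs = np.indices(boundary_mask.shape)
    vectors = np.stack([ys - nearest[0], xs - nearest[1]], axis=-1).astype(np.float32)
    norms = np.linalg.norm(vectors, axis=-1, keepdims=True)
    return vectors / np.maximum(norms, 1e-6)   # unit vectors; zero on boundary pixels
```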

Curated by Matt Moellman

For more deep learning news, tutorials, code, and discussion, join us on Slack, Twitter, and GitHub.
Copyright © 2020 Deep Learning Weekly, All rights reserved.