Machine Learning Weekly Digest.
Welcome to this week's edition of the Best of Machine Learning Digest. In this weekly newsletter, we resurface some of the best Machine Learning resources posted in the past week. This time, we received 41 submissions, including 3 papers.
This newsletter is sponsored by no one ;). Let's change that.

Papers

This week, 3 papers were posted on Best of ML. Below are this week's top 3.
Knowledge Graphs
 
In this paper we provide a comprehensive introduction to knowledge graphs, which have recently garnered significant attention from both industry and academia in scenarios that require exploiting diverse, dynamic, large-scale collections of data. After a general introduction, we motivate and contrast various graph-based data models and query languages that are used for knowledge graphs. We discuss the roles of schema, identity, and context in knowledge graphs. We explain how knowledge can be represented and extracted using a combination of deductive and inductive techniques. We summarise methods for the creation, enrichment, quality assessment, refinement, and publication of knowledge graphs. We provide an overview of prominent open knowledge graphs and enterprise knowledge graphs, their applications, and how they use the aforementioned techniques. We conclude with high-level future research directions for knowledge graphs.
 
Creating High Resolution Images with a Latent Adversarial Generator
 
Generating realistic images is difficult, and many formulations for this task have been proposed recently. If we restrict the task to that of generating a particular class of images, however, the task becomes more tractable. That is to say, instead of generating an arbitrary image as a sample from the manifold of natural images, we propose to sample images from a particular "subspace" of natural images, directed by a low-resolution image from the same subspace. The problem we address, while close to the formulation of the single-image super-resolution problem, is in fact rather different. Single image super-resolution is the task of predicting the image closest to the ground truth from a relatively low resolution image. We propose to produce samples of high resolution images given extremely small inputs with a new method called Latent Adversarial Generator (LAG). In our generative sampling framework, we only use the input (possibly of very low-resolution) to direct what class of samples the network should produce. As such, the output of our algorithm is not a unique image that relates to the input, but rather a possible set of related images sampled from the manifold of natural images. Our method learns exclusively in the latent space of the adversary using perceptual loss -- it does not have a pixel loss.
 
StyleGAN2 Distillation for Feed-forward Image Manipulation
 
StyleGAN2 is a state-of-the-art network for generating realistic images. Besides, it was explicitly trained to have disentangled directions in latent space, which allows efficient image manipulation by varying latent factors. Editing existing images requires embedding a given image into the latent space of StyleGAN2. Latent code optimization via backpropagation is commonly used for qualitative embedding of real-world images, although it is prohibitively slow for many applications. We propose a way to distill a particular image manipulation of StyleGAN2 into an image-to-image network trained in a paired way. The resulting pipeline is an alternative to existing GANs trained on unpaired data. We provide results of transformations of human faces: gender swap, aging/rejuvenation, style transfer, and image morphing. We show that the quality of generation using our method is comparable to StyleGAN2 backpropagation and current state-of-the-art methods in these particular tasks.
 

Projects

This week, 7 projects were posted on Best of ML. Below are this week's top 3.
Style transfer for MNIST digits
 
My students have re-implemented the algorithm for learning representations of images invariant to the label from "Invariant Representations without Adversarial Training" (NIPS'18). The algorithm is described in detail in Dan Moyer's blog post. In short, it is an autoencoder which "splits" the information about the image into two parts: information about the label vs. the rest. This remaining information can be interpreted as the "style" and can be used to generate an image with another label (digit). The algorithm has access to the original labels of images, but no other supervision (e.g. stylistic features) is given.
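The core idea above — a decoder conditioned on the label, so the latent code is free to carry only "style" — can be sketched in a few lines of Keras. This is a hypothetical, heavily simplified sketch (all layer sizes are invented, and it omits the invariance penalty that actually keeps the label out of the latent code in the paper):

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim, num_classes = 16, 10  # assumed sizes, for illustration only

# Encoder: compresses a flattened 28x28 MNIST image into a "style" code z.
image_in = tf.keras.Input(shape=(784,))
h = layers.Dense(128, activation="relu")(image_in)
z = layers.Dense(latent_dim)(h)
encoder = tf.keras.Model(image_in, z)

# Decoder: receives z *together with* a one-hot label, so label information
# need not be stored in z. Decoding the same z with a different label should
# then render that digit in the original image's "style".
z_in = tf.keras.Input(shape=(latent_dim,))
label_in = tf.keras.Input(shape=(num_classes,))
h = layers.Concatenate()([z_in, label_in])
h = layers.Dense(128, activation="relu")(h)
recon = layers.Dense(784, activation="sigmoid")(h)
decoder = tf.keras.Model([z_in, label_in], recon)
```

To transfer style after training, one would encode an image of one digit and decode its z with a different one-hot label.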
 
Pre-Trained Semantic Segmentation Models in TensorFlow 2.X
 
Can anyone recommend ready-to-use pre-trained semantic segmentation models (preferably trained on the Cityscapes dataset) that are compatible with TF 2.X? I only need high-level functionality (i.e. output a mask given an input image).
 
⏩ForwardTacotron - Generating speech in a single forward pass without any attention!
 
We've just open-sourced our first text-to-speech 🤖💬 project! It's also our first public PyTorch project. Inspired by Microsoft's FastSpeech, we modified Tacotron (Fork from fatchord's WaveRNN) to generate speech in a single forward pass without using any attention. Hence, we call the model ⏩ ForwardTacotron.
 

Blog Posts

This week, 28 blog posts were posted on Best of ML. Below are this week's top 3.
Building a ResNet in Keras
 
In principle, neural networks should get better results as they gain more layers. A deeper network can learn anything a shallower version of itself can, plus (possibly) more. If, for a given dataset, there is nothing more a network can learn by adding layers, then it can simply learn the identity mapping for those additional layers. In this way, it preserves the information from the previous layers and cannot do worse than a shallower network. A network should be able to learn at least the identity mapping if it doesn't find something better.
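The skip connection at the heart of a ResNet makes this fallback-to-identity concrete: if the convolutions learn weights near zero, the block's output reduces to its input. A minimal sketch of such a block in Keras (layer sizes are illustrative, not from the post):

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """An identity-style residual block: output = ReLU(x + F(x)).

    The skip connection lets the block fall back to (approximately) the
    identity mapping when the extra layers find nothing useful to learn.
    """
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])  # the residual shortcut
    return layers.ReLU()(y)

inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = residual_block(inputs, 16)
model = tf.keras.Model(inputs, outputs)
```

Note that the shortcut requires the input and output to have matching shapes; when the filter count changes, a 1x1 convolution is typically applied to the shortcut as well.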
 
Imbalanced Classification with the Adult Income Dataset
 
Many binary classification tasks do not have an equal number of examples from each class, e.g. the class distribution is skewed or imbalanced. A popular example is the adult income dataset that involves predicting personal income levels as above or below $50,000 per year based on personal details such as relationship and education level. There are many more cases of incomes less than $50K than above $50K, although the skew is not severe.
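One common first step for a skew like this is to reweight errors inversely to class frequency. A minimal sketch with scikit-learn, using a synthetic stand-in for the adult income data (the roughly 3:1 skew is an assumption for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary data with a mild, adult-income-like imbalance (~3:1).
X, y = make_classification(n_samples=2000, weights=[0.75, 0.25],
                           random_state=0)
print("class counts:", np.bincount(y))

# class_weight="balanced" scales each sample's loss contribution by
# n_samples / (n_classes * count(class)), so the minority class is not
# drowned out by the majority class.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
```

With a skew this mild, class weighting (or resampling) plus a metric such as F1 or balanced accuracy — rather than plain accuracy — is usually sufficient.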
 
My first year as a Product Manager for Artificial Intelligence (AI)
 
It has been almost five years since I moved from software engineering into product management and a bit more than a year since I became a product manager in Artificial Intelligence. I can’t express how exciting and self-fulfilling this role has become for me. Because this is a very new role, I will try to explain what AI product management (AIPM) is and reflect on my personal experience of making AI a strategic part of the company roadmap.
 

This email was sent to <<Email Address>>
why did I get this?    unsubscribe from this list    update subscription preferences
Monn Ventures · Winterthurerstrasse 649 · Zürich 8051 · Switzerland

Email Marketing Powered by Mailchimp