ML Digest: Reinforcement Learning in the real world
Welcome to this week's edition of the Best of Machine Learning Digest. In this weekly newsletter, we resurface some of the best Machine Learning resources posted in the past week. This time, we received 48 submissions, including 4 papers.
We need helping hands! Get involved.

Papers

This week, 4 papers were posted on Best of ML. Here are the top 4 posts of the week.
Neural Data Server: A Large-Scale Search Engine for Transfer Learning Data
 
Transfer learning has proven to be a successful technique for training deep learning models in domains where little training data is available. The dominant approach is to pretrain a model on a large generic dataset such as ImageNet and finetune its weights on the target domain. However, in the new era of an ever-increasing number of massive datasets, selecting the relevant data for pretraining is a critical issue. We introduce Neural Data Server (NDS), a large-scale search engine for finding the transfer learning data most useful to the target domain. NDS consists of a dataserver that indexes several large, popular image datasets and aims to recommend data to a client: an end-user with a target application and its own small labeled dataset.
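The paper's actual recommendation engine is more involved than this, but the core idea, ranking candidate pretraining data by how similar its features are to the client's small dataset, can be sketched in a few lines. Everything below (the centroid-and-cosine scoring, the random arrays standing in for image embeddings) is a hypothetical illustration, not the NDS method itself.

# Hypothetical sketch: rank indexed datasets by feature similarity to a
# client's small labeled dataset. NDS itself uses a more sophisticated
# scheme; this only illustrates the "search engine for pretraining data" idea.
import numpy as np

def dataset_centroid(features: np.ndarray) -> np.ndarray:
    """Mean feature vector of a dataset (features: n_samples x dim)."""
    return features.mean(axis=0)

def rank_datasets(client_features, server_datasets):
    """Return server dataset names sorted by cosine similarity to the client."""
    c = dataset_centroid(client_features)
    scores = {}
    for name, feats in server_datasets.items():
        s = dataset_centroid(feats)
        scores[name] = float(c @ s / (np.linalg.norm(c) * np.linalg.norm(s)))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with random features standing in for real image embeddings.
rng = np.random.default_rng(0)
client = rng.normal(size=(100, 128))
server = {"ImageNet-subset": rng.normal(size=(1000, 128)),
          "COCO-subset": rng.normal(loc=0.5, size=(1000, 128))}
print(rank_datasets(client, server))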
 
What Does BERT Look At? An Analysis of BERT's Attention
 
Large pre-trained neural networks such as BERT have had great recent success in NLP, motivating a growing body of research investigating what aspects of language they are able to learn from unlabeled data. Most recent analysis has focused on model outputs (e.g., language model surprisal) or internal vector representations (e.g., probing classifiers). Complementary to these works, we propose methods for analyzing the attention mechanisms of pre-trained models and apply them to BERT. BERT's attention heads exhibit patterns such as attending to delimiter tokens, specific positional offsets, or broadly attending over the whole sentence, with heads in the same layer often exhibiting similar behaviors. We further show that certain attention heads correspond well to linguistic notions of syntax and coreference. For example, we find heads that attend to the direct objects of verbs, determiners of nouns, objects of prepositions, and coreferent mentions with remarkably high accuracy. Lastly, we propose an attention-based probing classifier and use it to further demonstrate that substantial syntactic information is captured in BERT's attention.
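If you'd like to poke at these attention patterns yourself, the Hugging Face transformers library can return per-head attention maps. The minimal sketch below measures how much each head in one layer attends to the [SEP] delimiter token, one of the patterns the paper reports; the model and layer choices here are illustrative, not the paper's setup.

# Minimal sketch: inspect BERT's attention to the [SEP] delimiter token
# using Hugging Face transformers (pip install transformers torch).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions: tuple of 12 tensors, each (batch, heads, seq, seq).
sep_index = inputs["input_ids"][0].tolist().index(tokenizer.sep_token_id)
layer = 6  # illustrative choice; the paper examines all layers
attn = outputs.attentions[layer][0]            # (heads, seq, seq)
sep_mass = attn[:, :, sep_index].mean(dim=1)   # avg attention to [SEP] per head
for head, mass in enumerate(sep_mass.tolist()):
    print(f"layer {layer} head {head}: {mass:.3f} of attention on [SEP]")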
 
Challenges of Real-World Reinforcement Learning
 
Reinforcement learning (RL) has proven its worth in a series of artificial domains and is beginning to show some successes in real-world scenarios. However, many research advances in RL are hard to leverage in real-world systems because they rest on assumptions that are rarely satisfied in practice. We present a set of nine unique challenges that must be addressed to productionize RL for real-world problems. For each challenge, we specify its exact meaning, present some approaches from the literature, and specify metrics for evaluating it. An approach that addresses all nine challenges would be applicable to a large number of real-world problems. We also present an example domain, modified to exhibit these challenges, as a testbed for practical RL research.
 
ImagineNet: Restyling Apps Using Neural Style Transfer
 
This paper presents ImagineNet, a tool that uses a novel neural style transfer model to enable end-users and app developers to restyle GUIs using an image of their choice. Prior neural style transfer techniques are inadequate for this application because they produce GUIs that are illegible and hence nonfunctional. We propose a neural solution that adds a new loss term to the original formulation, minimizing the squared error in the uncentered cross-covariance of features from different levels in a CNN between the style and output images. ImagineNet retains the details of GUIs while transferring the colors and textures of the art. We presented GUIs restyled with ImagineNet as well as with other style transfer techniques to 50 evaluators, and all of them preferred ImagineNet's results. We show how ImagineNet can be used to restyle (1) the graphical assets of an app, (2) an app with user-supplied content, and (3) an app with dynamically generated GUIs.
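The new loss term is easy to state concretely: take feature maps from two different CNN levels, form their uncentered cross-covariance, and penalize the squared difference between the style image's and the output image's cross-covariances. The sketch below shows that computation on raw arrays; the feature extractor, the level pairing, and the random stand-in data are all assumptions, not the paper's exact setup.

# Hypothetical sketch of the cross-covariance style loss described above.
# Feature maps would normally come from two levels of a pretrained CNN
# (resized to matching spatial size); random arrays stand in for them here.
import numpy as np

def uncentered_cross_covariance(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """f1: (c1, h, w), f2: (c2, h, w) -> (c1, c2) uncentered cross-covariance."""
    n = f1.shape[1] * f1.shape[2]
    a = f1.reshape(f1.shape[0], n)
    b = f2.reshape(f2.shape[0], n)
    return a @ b.T / n

def cross_covariance_loss(style_feats, output_feats):
    """Squared error between style and output cross-covariances."""
    g_style = uncentered_cross_covariance(*style_feats)
    g_out = uncentered_cross_covariance(*output_feats)
    return float(((g_style - g_out) ** 2).sum())

rng = np.random.default_rng(0)
style = (rng.normal(size=(64, 16, 16)), rng.normal(size=(128, 16, 16)))
output = (rng.normal(size=(64, 16, 16)), rng.normal(size=(128, 16, 16)))
print(cross_covariance_loss(style, output))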
 

Blog Posts

This week, 27 blog posts were posted on Best of ML. Here are the top 3 posts of the week.
Alternative NLP Method
 
Natural language processing stands apart from other areas of artificial intelligence: its methods tend to build on simple or moderately complex ideas backed by heavy computational effort. My approach likewise relies on basic-to-intermediate feature extraction. Feeding these features to a random forest classifier reaches nearly 0.70 accuracy, and adding further well-chosen features on top of them could push results to 0.9 or beyond. The study is based on a natural language processing competition on Kaggle, and the method it describes can be applied to basic text classification tasks.
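The post doesn't spell out its features here, so as a stand-in the sketch below uses plain TF-IDF features with scikit-learn's random forest. It shows the overall shape of such a baseline, not the author's exact pipeline; the toy texts and labels are invented for illustration.

# Illustrative baseline only: TF-IDF features (an assumption; the post's own
# features differ) fed to a random forest text classifier with scikit-learn.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["forest fire near la ronge", "i love this song",
         "earthquake reported downtown", "what a beautiful day"]
labels = [1, 0, 1, 0]  # toy stand-in for a Kaggle text-classification set

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0)

clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))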
 
What Is the Naive Classifier for Each Imbalanced Classification Metric?
 
A common mistake made by beginners is to apply machine learning algorithms to a problem without establishing a performance baseline. A performance baseline provides a minimum score above which a model is considered to have skill on the dataset. It also provides a point of relative improvement for all models evaluated on the dataset. A baseline can be established using a naive classifier, such as predicting one class label for all examples in the test dataset.
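scikit-learn ships exactly this kind of baseline as DummyClassifier. A minimal sketch, using an invented 9:1 imbalanced toy dataset, of establishing a majority-class baseline and seeing why accuracy alone is misleading:

# Minimal sketch: a majority-class naive baseline with scikit-learn's
# DummyClassifier on a 9:1 imbalanced toy dataset.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent")
baseline.fit(X_train, y_train)
pred = baseline.predict(X_test)

# High accuracy but zero F1: any real model must beat *both* numbers.
print("accuracy:", accuracy_score(y_test, pred))
print("f1:", f1_score(y_test, pred))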
 
Anomaly Detection with Autoencoders in TensorFlow 2.0
 
In this detailed guide, I will explain how Deep Learning can be used in the field of Anomaly Detection. Furthermore, I will explain how to implement a Deep Neural Network model for Anomaly Detection in TensorFlow 2.0. All source code and the corresponding dataset are, of course, available for you to download.
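The guide's own code is linked from the post; as a rough, self-contained sketch of the idea in TensorFlow 2.x, train a small dense autoencoder on normal data only and flag points whose reconstruction error exceeds a threshold. The architecture, the percentile threshold rule, and the synthetic data below are all assumptions, not the post's exact model.

# Hedged sketch (not the post's exact model): a dense autoencoder in
# TensorFlow 2.x that flags anomalies by reconstruction error.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 8)).astype("float32")
anomalies = rng.normal(4.0, 1.0, size=(20, 8)).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2, activation="relu"),   # bottleneck
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(8),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=20, batch_size=32, verbose=0)

def reconstruction_error(x):
    return np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)

# Threshold: e.g. the 99th percentile of errors on the normal training data.
threshold = np.percentile(reconstruction_error(normal), 99)
print("flagged anomalies:", np.sum(reconstruction_error(anomalies) > threshold))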
 





