ML Digest: Capsule Networks and Adversarial Attacks.
Welcome to this week's edition of the Best of Machine Learning Digest. In this weekly newsletter, we resurface some of the best resources in Machine Learning posted in the past week. This time, we received 55 submissions, including 3 papers.
We need helping hands! Get involved and inform close to 1,000 ML Engineers every week.

Papers

We found a handful of papers you might find interesting!
Fooling automated surveillance cameras: adversarial patches to attack person detection
 
In this paper, we present an approach to generate adversarial patches for targets with a lot of intra-class variety, namely persons. The goal is to generate a patch that can successfully hide a person from a person detector. Such an attack could, for instance, be used maliciously to circumvent surveillance systems: intruders can sneak around undetected by holding a small cardboard plate in front of their body, aimed towards the surveillance camera.
From our results we can see that our system is able to significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge, we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.
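For readers curious how such a patch is typically optimised, here is a minimal, hypothetical sketch of the general idea: treat the patch pixels as trainable parameters and minimise a detector's person score on patched images. The tiny stand-in detector, patch location, and loss below are placeholders of our own, not the paper's actual pipeline or model.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a real person detector (purely a placeholder so the sketch runs).
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),            # "person present" score in [0, 1]
)

patch = torch.rand(3, 64, 64, requires_grad=True)   # the learnable patch
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(img, patch, top=100, left=100):
    # Composite the patch onto a fixed image region (out of place, so
    # gradients flow back into the patch pixels).
    _, _, H, W = img.shape
    ph, pw = patch.shape[1:]
    pad = (left, W - left - pw, top, H - top - ph)
    canvas = F.pad(patch.clamp(0, 1), pad)
    mask = F.pad(torch.ones(1, ph, pw), pad)
    return img * (1 - mask) + canvas * mask

for step in range(200):
    img = torch.rand(1, 3, 256, 256)          # stand-in for a training frame
    score = detector(apply_patch(img, patch)).mean()
    loss = score                               # lower score = person "hidden"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The real attack additionally has to keep the patch printable and robust to being filmed by a camera, which this sketch deliberately omits.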
 
A Framework for Understanding Unintended Consequences of Machine Learning
 
As machine learning increasingly affects people and society, it is important that we strive for a comprehensive and unified understanding of how and why unwanted consequences arise. For instance, downstream harms to particular groups are often blamed on "biased data," but this concept encompasses too many issues to be useful in developing solutions. In this paper, we provide a framework that partitions sources of downstream harm in machine learning into five distinct categories spanning the data generation and machine learning pipeline. We describe how these issues arise, how they are relevant to particular applications, and how they motivate different solutions. In doing so, we aim to facilitate the development of solutions that stem from an understanding of application-specific populations and data generation processes, rather than relying on general claims about what may or may not be "fair."
 
Adversarial Learning of Deepfakes in Accounting
 
Nowadays, organizations collect vast quantities of accounting-relevant transactions, referred to as 'journal entries', in 'Enterprise Resource Planning' (ERP) systems. The aggregation of those entries ultimately defines an organization's financial statement. To detect potential misstatements and fraud, international audit standards require auditors to directly assess journal entries using 'Computer Assisted Audit Techniques' (CAATs). At the same time, discoveries in deep learning research revealed that machine learning models are vulnerable to 'adversarial attacks'. It also became evident that such attack techniques can be misused to generate 'Deepfakes' designed to directly attack the perception of humans by creating convincingly altered media content. The research of such developments and their potential impact on the finance and accounting domain is still in its early stage. We believe that it is of vital relevance to investigate how such techniques could be maliciously misused in this sphere. In this work, we show an adversarial attack against CAATs using deep neural networks. We first introduce a real-world 'threat model' designed to camouflage accounting anomalies such as fraudulent journal entries. Second, we show that adversarial autoencoder neural networks are capable of learning a human-interpretable model of journal entries that disentangles the entries' latent generative factors. Finally, we demonstrate how such a model can be maliciously misused by a perpetrator to generate robust 'adversarial' journal entries that mislead CAATs.
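As a rough, hypothetical illustration of the adversarial-autoencoder component (not the paper's actual architecture, feature encoding, or training regime), the skeleton below shows the three ingredients involved: an encoder and decoder trained to reconstruct journal-entry feature vectors, plus a discriminator that pushes the latent codes towards a chosen prior.

import torch
import torch.nn as nn

FEATS, LATENT = 64, 8    # hypothetical feature and latent sizes

encoder = nn.Sequential(nn.Linear(FEATS, 32), nn.ReLU(), nn.Linear(32, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, FEATS))
discriminator = nn.Sequential(nn.Linear(LATENT, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

x = torch.rand(128, FEATS)               # stand-in for encoded journal entries
z = encoder(x)

# Reconstruction objective: the autoencoder must rebuild the entries.
recon_loss = nn.functional.mse_loss(decoder(z), x)

# Adversarial objective: the discriminator learns to tell prior samples from
# encoded entries; the encoder is trained to fool it, shaping the latent space.
prior = torch.randn(128, LATENT)
d_loss = -(torch.log(discriminator(prior) + 1e-8).mean()
           + torch.log(1 - discriminator(z.detach()) + 1e-8).mean())
g_loss = -torch.log(discriminator(z) + 1e-8).mean()
print(recon_loss.item(), d_loss.item(), g_loss.item())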
 

Blog Posts

This week, 36 Blog Posts were posted on Best of ML. These are some of our favourites!
A Gentle Introduction to Monte Carlo Sampling for Probability
 
Monte Carlo methods are a class of techniques for randomly sampling a probability distribution. There are many problem domains where describing or estimating the probability distribution is relatively straightforward, but calculating a desired quantity is intractable. This may be due to many reasons, such as the stochastic nature of the domain or an exponential number of random variables.
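As a quick illustration of the idea (our own toy example, not taken from the post): the probability below has a closed form, but the Monte Carlo estimate is obtained the same way you would tackle a genuinely intractable quantity, by drawing samples and averaging.

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Quantity of interest: P(max of 5 independent standard-normal draws > 2).
n_samples = 100_000
draws = rng.standard_normal((n_samples, 5))
estimate = (draws.max(axis=1) > 2.0).mean()

# Closed form for comparison: 1 - Phi(2)^5, with Phi the standard normal CDF.
phi2 = 0.5 * (1 + erf(2.0 / sqrt(2)))
exact = 1 - phi2 ** 5
print(f"Monte Carlo estimate: {estimate:.4f}   exact: {exact:.4f}")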
 
Capsule Neural Networks — The future for autonomous vehicles
 
Capsule networks use capsules in place of the neurons of a standard neural network. A capsule bundles the important information about a feature of an image and outputs a vector. Unlike neurons, which output a scalar quantity, capsules can keep track of the orientation of a feature: if the position of the feature changes, the length of the output vector stays the same while its direction changes to reflect the new position.
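A small, self-contained illustration of that point (our own sketch, using the squash nonlinearity from Sabour et al.'s capsule-network paper rather than anything from the post): two capsule outputs with the same magnitude but different orientations squash to vectors of identical length, so the "presence" signal is unchanged while the direction still encodes the feature's pose.

import torch

def squash(s, dim=-1, eps=1e-8):
    # Shrinks a capsule's output vector to length < 1 while preserving its
    # direction, so the length can be read as a presence probability.
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    norm = torch.sqrt(norm_sq + eps)
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

feature = torch.tensor([3.0, 4.0])   # capsule output for a feature in one pose
shifted = torch.tensor([4.0, 3.0])   # same feature after its position changes

for v in (feature, shifted):
    out = squash(v)
    print(out, out.norm())           # identical lengths, different directions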
 





