
A warm welcome

Hello everyone! Over two years ago, ContinualAI emerged from a simple conversation between a few researchers interested in the field. Today, over 700 researchers and AI enthusiasts have joined that conversation. Since then, ContinualAI has grown into a full-fledged research organization: it has conducted workshops and webinars, contributed code and datasets, and formed relationships with academic and industry institutions alike. All told, ContinualAI is rapidly positioning itself as the key global hub for continual learning research in AI.

On behalf of the board, I would like to thank each and every one of you for your interest. We hope this newsletter serves as a distillation of what ContinualAI has been up to lately.
 

The first 300 ContinualAI member locations

A Few Recent Announcements

  • We announced ContinualAI Research (CLAIR), a collaborative laboratory endorsed by ContinualAI whose goal is to advance the state of the art in Continual Learning (CL) for AI.
  • We held our first ContinualAI Online Meetup, on "Generative Models for Continual Learning". You can watch the recording here: https://www.youtube.com/watch?v=TeYcCuMQ-B0
  • We are sponsoring the Continual Learning workshop "CLVision" at CVPR2020, June 13-19, 2020, in Seattle, USA.
  • We are sponsoring the Special Session "New Trends in Continual Learning with Deep Architectures" at IEEE EAIS2020, May 27-29, 2020, in Bari, Italy.
  • We have published a number of articles in our Medium publication: https://medium.com/continual-ai (let us know if you would like to contribute!)
  • There is also now a course on continual learning by Irina Rish: https://sites.google.com/site/irinarish/continuallearning

Top paper picks: 

A paper we think you should read, as suggested by our community:


Dataset Distillation (Wang et al. 2019)
Model distillation aims to distill the knowledge of a complex model into a simpler one. In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one. The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data. For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images (one per class) and achieve close to original performance with only a few gradient descent steps, given a fixed network initialization. We evaluate our method in various initialization settings and with different learning objectives. Experiments on multiple datasets show the advantage of our approach compared to alternative methods.
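
The abstract above boils down to a bilevel optimization: an inner gradient step trains a model on the learnable synthetic examples, and an outer step updates those examples so that the resulting model performs well on the real training set. Below is a minimal PyTorch sketch of that loop under simplifying assumptions of our own: a linear classifier, a single inner step, random stand-in data in place of MNIST, and made-up hyperparameters. It illustrates the idea only and is not the authors' released code.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for the real training set (think MNIST flattened to 784 features).
num_classes, feat_dim = 10, 784
x_real = torch.randn(512, feat_dim)
y_real = torch.randint(0, num_classes, (512,))

# The learnable distilled data: one synthetic "image" per class.
x_syn = torch.randn(num_classes, feat_dim, requires_grad=True)
y_syn = torch.arange(num_classes)
outer_opt = torch.optim.Adam([x_syn], lr=0.1)
inner_lr = 0.01  # step size of the single inner model update (assumed value)

# A fixed network initialization, as in the paper's fixed-init setting;
# a plain linear classifier keeps the bilevel loop easy to follow.
w0 = (torch.randn(feat_dim, num_classes) * 0.01).requires_grad_(True)
b0 = torch.zeros(num_classes, requires_grad=True)

def forward(x, w, b):
    return x @ w + b

for step in range(200):
    # Inner step: one gradient step on the synthetic data, keeping the graph
    # so the update itself can be differentiated w.r.t. x_syn.
    inner_loss = F.cross_entropy(forward(x_syn, w0, b0), y_syn)
    gw, gb = torch.autograd.grad(inner_loss, (w0, b0), create_graph=True)
    w1, b1 = w0 - inner_lr * gw, b0 - inner_lr * gb

    # Outer step: evaluate the virtually updated model on the REAL data and
    # push that loss back into the synthetic images.
    outer_loss = F.cross_entropy(forward(x_real, w1, b1), y_real)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
    w0.grad = b0.grad = None  # the initialization stays fixed; discard its grads

    if step % 50 == 0:
        print(f"step {step:3d} | real-data loss after one synthetic step: {outer_loss.item():.3f}")

In the full method the inner model is a small network rather than a linear map, and multiple gradient steps and initialization settings are evaluated, but the structure of the loop is the same.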

 

Other Useful Links: 


 

Twitter
Website
Medium
GitHub
Copyright © 2019 ContinualAI, All rights reserved.

Our mailing address is:
contact@continualai.org

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.
ContinualAI · Via Caduti della Via Fani, 7 · Bologna, Bo 40121 · Italy
