
Happy 2021 everyone! 

We look forward to the new year with excitement, as it perhaps promises a taste of normalcy. Despite the challenges of 2020, we're glad that our members, and those following our efforts, have continued to push forward. We're also excited for what the new year may bring: new members, new projects, new ideas, new findings. Yet, given how awesome the members of our community have been over the last few years, perhaps this isn't really that new at all. In any case, we hope ContinualAI is a catalyst for this, connecting people and ideas in the pursuit of continually learning AI. 

 
For the first time in 25 years, the distinctive vocals of the late South Korean superstar Kim Kwang-seok were brought back to sing new material, recreated using artificial intelligence. The program's creator was inspired by the legendary Go match between DeepMind's AI and Lee Se-dol. 

 

We're happy to share what we have been working on to accelerate research on continual learning: a necessary step in the direction of strong AI. Passionate about our mission? Join us on Slack if you haven't already, and feel free to donate to support this goal. 



A Few Recent Announcements

 
  • We are excited to announce the next ContinualAI Online Meetup (this Friday, 29-1-2021, 5.30 PM CET) on "Rehearsal-Free Continual Learning". See the speakers and topics for the meetup above, and prepare your questions: we will hold a panel discussion at the end of the meetup! To join, the Eventbrite link is here and the MS Teams link is here! 
 
  • The workshop "Continual Learning in Computer Vision", supported by ContinualAI, is planned to be one of the biggest ever organized in our continual learning community, so make sure to be there! The event has opened its CALL FOR PAPERS (deadline 20 March), so you can submit your work: short and long articles, archival or not, it does not matter, submit your original content to the workshop! 
 
  • If you haven't already, feel free to subscribe to the open mailing list Continual Learning for AI (not this one) to stay updated on future events and continual learning news, including from people outside ContinualAI!
 
  • We've maintained a great reading group lineup. Can't make this week's reading group? No worries! See the past papers here, and watch the recordings of all our past events.
 
  • The ContinualAI Lab collaborative team is always looking for contributors to the many open-source projects we have under development. Contact us on Slack if you want to learn more about them and join us! We welcome motivated people willing to give back to this awesome community!
 
 
Not on our mailing list? Join now!

Top paper picks: 

A paper, chosen by the community, that we think you should read if you have not yet:

IIRC: Incremental Implicitly-Refined Classification

Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar 

We introduce the "Incremental Implicitly-Refined Classification (IIRC)" setup, an extension to the class-incremental learning setup where the incoming batches of classes have two granularity levels, i.e., each sample could have a high-level (coarse) label like "bear" and a low-level (fine) label like "polar bear". Only one label is provided at a time, and the model has to figure out the other label if it has already learned it. This setup is more aligned with real-life scenarios, where a learner usually interacts with the same family of entities multiple times and discovers more granularity about them, while still trying not to forget previous knowledge. Moreover, this setup enables evaluating models on some important lifelong learning challenges that cannot be easily addressed under the existing setups. These challenges can be motivated by the example: "if a model was trained on the class bear in one task and on polar bear in another task, will it forget the concept of bear, will it rightfully infer that a polar bear is still a bear, and will it wrongfully associate the label of polar bear with other breeds of bear?" We develop a standardized benchmark that enables evaluating models on the IIRC setup. We evaluate several state-of-the-art lifelong learning algorithms and highlight their strengths and limitations. For example, distillation-based methods perform relatively well but are prone to incorrectly predicting too many labels per image. We hope that the proposed setup, along with the benchmark, provides a meaningful problem setting for practitioners.
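
To make the setup concrete, here is a minimal sketch in Python of the core IIRC labeling rule. This is not the authors' benchmark code, and the hierarchy and function names are hypothetical: a sample carries both a coarse and a fine label, only one is visible in a given task, and the model is expected to recover the other only if it has been learned in an earlier task.

    # A minimal sketch of the IIRC labeling rule (not the authors' benchmark
    # code; the hierarchy and names below are hypothetical illustrations).

    # Two-level label hierarchy: fine label -> coarse (superclass) label.
    HIERARCHY = {
        "polar bear": "bear",
        "brown bear": "bear",
        "labrador": "dog",
    }

    def expected_labels(visible_label, seen_labels):
        """Return the set of labels the model should predict for a sample,
        given the single visible label and the labels seen in past tasks."""
        labels = {visible_label}
        parent = HIERARCHY.get(visible_label)
        # A fine label implies its coarse parent, but only if the parent
        # was already encountered in an earlier task.
        if parent is not None and parent in seen_labels:
            labels.add(parent)
        return labels

    # "bear" was learned in an earlier task; a new sample arrives labeled
    # only "polar bear", so the model should predict both labels.
    print(expected_labels("polar bear", seen_labels={"bear"}))
    # -> {'polar bear', 'bear'}

Note that the implication only runs from fine to coarse: a model that has seen "polar bear" and is later shown a sample labeled "bear" should not predict "polar bear", since a bear is not necessarily a polar bear.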

Other Useful Links: 

 

Twitter
Website
Medium
GitHub
YouTube
Copyright © 2020 ContinualAI, All rights reserved.

Our mailing address is:
contact@continualai.org

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.
 






ContinualAI · Via Caduti della Via Fani, 7 · Bologna, Bo 40121 · Italy
