Hi everyone!
If for nothing else, let this be a gentle reminder to take a few moments for a much needed breath. You deserve it. The summer months are almost upon us in the northern hemisphere, which means a somewhat sunnier drive or walk to work (if you happen to no longer be sitting in your home office). With summer, for many of us, comes a hint of normalcy and a period of transition. So it may be nice to take a breath, regroup, and hit the next few months running.
Here at ContinualAI, we plan to do the same. We recently took stock of our many ongoing projects and made plans for the future, which we're excited to share now and over the coming months: new platforms to accelerate research, an open codebase of continual learning tutorials, new ways to get your questions answered, tools to help navigate the sprawling literature, open courses for hundreds of new students to learn about continual learning AI for the first time, and more. We're lucky to be part of such a wonderful community, one that pitches in to help one another as we move through the next season and the middle of the year. A moment to catch our breath, and a little reflection, reminds us of that.
We're also happy to share what we have been working on recently towards accelerating research on continual learning AI: a necessary step in the direction of strong AI. Passionate about our mission? Join us on Slack if you haven't already, and feel free to donate to support this goal.
Announcements!
- ContinualAI and Neuromatch are joining forces! We'll be providing support to the Neuromatch Academy this year with a short course about continual learning! Stay tuned for more updates about the course and other open continual learning educational materials. If you don't want to wait, head over to our Colab GitHub repo, where you can find all sorts of interesting continual learning code and tutorials.
- Our ContinualAI Meetup on "Continual Learning at the Edge: On-Device Training without Forgetting" was an awesome event. Be on the lookout for the recording on our YouTube channel if you missed it, and stay tuned on Slack for the next meetup, or subscribe to the open CL mailing list, Continual Learning for AI, for updates on meetups and other events (including those external to CLAI).
- We've maintained a great reading group lineup. Can't make this week's reading group? No worries! See the past papers here, and watch the recordings of all our previous sessions.
- The ContinualAI Lab collaborative team is always looking for contributors to the many open-source projects we have under development (including Avalanche). Contact us on Slack if you want to learn more and join in; motivated people willing to give back to this awesome community are always welcome!
A few months ago, we announced Avalanche, an end-to-end library for continual learning and a shining example of what our community can accomplish! We have been blown away by the overwhelmingly positive response, and we look forward to further developing the project. With Avalanche, we aim to push continual learning to the next level, providing a shared, open-source (MIT-licensed), collaborative library for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
If you haven't yet, we encourage everyone to find out more about the project at the links below. Try out the new codebase, read up on how it can catalyze your research, report any issues you run into, or even contribute yourself. A short usage sketch follows the links.
GitHub: http://github.com/ContinualAI/avalanche
Official Website: http://avalanche.continualai.org
API Docs: http://avalanche-api.continualai.org
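
To give a flavor of the library, here is a minimal sketch of an Avalanche training loop. It assumes the benchmark, model, and strategy names from the getting-started examples (SplitMNIST, SimpleMLP, Naive); module paths can shift between releases, so check the API docs above for the version you install.

    from torch.nn import CrossEntropyLoss
    from torch.optim import SGD

    from avalanche.benchmarks.classic import SplitMNIST
    from avalanche.models import SimpleMLP
    from avalanche.training.strategies import Naive

    # A class-incremental benchmark: MNIST digits split into 5 experiences.
    benchmark = SplitMNIST(n_experiences=5)

    model = SimpleMLP(num_classes=10)
    optimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)

    # "Naive" simply fine-tunes on each experience in turn, with no
    # anti-forgetting mechanism: a useful lower-bound baseline.
    strategy = Naive(
        model, optimizer, CrossEntropyLoss(),
        train_mb_size=32, train_epochs=2, eval_mb_size=32,
    )

    # Train on each experience as it arrives, then evaluate on the full
    # test stream to see how much earlier experiences were forgotten.
    for experience in benchmark.train_stream:
        strategy.train(experience)
        strategy.eval(benchmark.test_stream)

Swapping Naive for another strategy keeps the same loop, which is the point: the training and evaluation protocol stays fixed while the continual learning method varies.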
Top paper picks:
Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer
James Smith, Jonathan Balloch, Yen-Chang Hsu, Zsolt Kira
Rehearsal is a critical component for class-incremental continual learning, yet it requires a substantial memory budget. Our work investigates whether we can significantly reduce this memory budget by leveraging unlabeled data from an agent's environment in a realistic and challenging continual learning paradigm. Specifically, we explore and formalize a novel semi-supervised continual learning (SSCL) setting, where labeled data is scarce yet non-i.i.d. unlabeled data from the agent's environment is plentiful. Importantly, data distributions in the SSCL setting are realistic and therefore reflect object class correlations between, and among, the labeled and unlabeled data distributions. We show that a strategy built on pseudo-labeling, consistency regularization, Out-of-Distribution (OoD) detection, and knowledge distillation reduces forgetting in this setting. Our approach, DistillMatch, increases performance over the state-of-the-art by no less than 8.7% average task accuracy and up to 54.5% average task accuracy in SSCL CIFAR-100 experiments. Moreover, we demonstrate that DistillMatch can save up to 0.23 stored images per processed unlabeled image compared to the next best method which only saves 0.08. Our results suggest that focusing on realistic correlated distributions is a significantly new perspective, which accentuates the importance of leveraging the world's structure as a continual learning strategy.
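
To make those ingredients concrete, below is a rough, hypothetical sketch of how pseudo-labeling, consistency regularization, and knowledge distillation from a frozen previous-task model can be combined into one loss on unlabeled data. It is a generic illustration of the recipe, not the authors' DistillMatch code, it omits the OoD-detection component, and all names (semi_supervised_cl_loss, conf_threshold, and so on) are ours.

    import torch
    import torch.nn.functional as F

    def semi_supervised_cl_loss(model, old_model, x_lab, y_lab,
                                x_unlab_weak, x_unlab_strong,
                                conf_threshold=0.95, temperature=2.0):
        """Illustrative combination of the ingredients in the abstract:
        pseudo-labeling, consistency regularization, and distillation.
        A generic sketch, not the paper's DistillMatch implementation."""
        # Supervised term on the scarce labeled data.
        sup_loss = F.cross_entropy(model(x_lab), y_lab)

        # Pseudo-labeling + consistency: a weak augmentation produces a
        # pseudo-label; the strong augmentation must match it, but only
        # where the model is confident enough.
        with torch.no_grad():
            probs = F.softmax(model(x_unlab_weak), dim=1)
            conf, pseudo = probs.max(dim=1)
            mask = (conf >= conf_threshold).float()
        cons_loss = (F.cross_entropy(model(x_unlab_strong), pseudo,
                                     reduction="none") * mask).mean()

        # Distillation: keep current predictions close to the frozen
        # previous model's, using unlabeled data as the transfer set.
        with torch.no_grad():
            old_logits = old_model(x_unlab_weak)
        distill_loss = F.kl_div(
            F.log_softmax(model(x_unlab_weak) / temperature, dim=1),
            F.softmax(old_logits / temperature, dim=1),
            reduction="batchmean") * temperature ** 2

        return sup_loss + cons_loss + distill_loss

In practice the three terms would carry tunable weights, and an OoD detector would filter which unlabeled samples feed the pseudo-labeling and distillation terms.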