Hi folks,
For the northern hemisphere, Spring is upon us. New flowers are starting to bloom and the coldest days are waning. This month also marks the harrowing one-year anniversary of the first major uptick in COVID-19 cases. While the pandemic is not yet behind us, increasing vaccine roll-outs in many parts of the world offer a glimpse of hope that life will return to normal.
Despite the challenges 2020 brought, we are thrilled to share some exciting news from ContinualAI that has been long in the making. Our members have been hard at work on projects to aid the research community, and our work has been recognized as helping push the field forward.
First, we're excited to announce that ContinualAI has been awarded the 2020 Best International AI Research Organization by Wealth&Finance. This award acknowledges the work our community is doing: collaborating and sharing knowledge to advance continual learning AI.
We are also excited to announce an example of what a collaboration of this sort may create. After over a year of effort, ContinualAI is overjoyed to release the alpha version of Avalanche, an end-to-end library for continual learning, to our community and beyond. We'll talk more about Avalanche later in this newsletter, so read on to find out more about this exciting project!
Indeed, we hope Avalanche will kick off a positive feedback loop for the research community; and while Spring may be upon us, we hope to bring a bit more Winter into your terminals as well. We thank the community for all the amazing work we have accomplished together, and for what we will accomplish in the future!
ContinualAI was recognized as the 2020 Best International AI Research Organization
We're also happy to share what we have been working on to accelerate research on continual learning AI: a necessary step in the direction of strong AI. Passionate about our mission? Join us on Slack if you haven't already, and consider donating if you would like to support this goal.
After over a year of hard work by a dedicated group of developers in our community and beyond, ContinualAI is excited to announce Avalanche, an end-to-end library for continual learning!
With Avalanche, we aim to push continual learning to the next level, providing a shared, open source (MIT licensed) and collaborative library for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
We also aim to trigger a positive reinforcement loop within our community and beyond, moving towards a more collaborative and inclusive way of doing research that allows us to tackle bigger problems faster, together. (You could almost say that we're trying to trigger an avalanche of continual learning research. Get the idea yet? ;) ) Top continual learning labs across the world have already joined the project, and you can see them, along with our other contributors and partners, here.
We encourage everyone to find out more about the project at the links below. Try out the new codebase, read up on how it can catalyze your research, report any issues you run into, or contribute to the codebase.
GitHub: http://github.com/ContinualAI/avalanche
Official Website: http://avalanche.continualai.org
Api-Doc: http://avalanche-api.continualai.org
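To give a feel for the kind of workflow Avalanche is designed to streamline, here is a minimal, self-contained sketch of the continual learning loop: a stream of "experiences" (tasks), a strategy that trains on each one in turn, and an evaluation over everything seen so far. Note that the names below (`make_stream`, `NaiveStrategy`, and so on) are illustrative stand-ins, not Avalanche's actual API; see the API docs above for the real interface.

```python
# Conceptual sketch of the continual-learning workflow a library like
# Avalanche streamlines: train on a stream of experiences, then evaluate
# on every experience seen so far. Names here are hypothetical.

def make_stream():
    # Each experience is a tiny toy task: a perfect model for
    # experience i is simply the scalar weight w = target[i].
    return [{"name": f"exp{i}", "target": t}
            for i, t in enumerate([1.0, -1.0, 3.0])]

class NaiveStrategy:
    """Trains on each experience in isolation (no forgetting mitigation)."""
    def __init__(self):
        self.w = 0.0

    def train(self, experience):
        # Overwrites the old solution: catastrophic forgetting by design.
        self.w = experience["target"]

    def eval(self, experiences):
        # Absolute error per seen experience; 0.0 means perfect recall.
        return {e["name"]: abs(self.w - e["target"]) for e in experiences}

stream = make_stream()
strategy = NaiveStrategy()
seen, history = [], []
for exp in stream:
    strategy.train(exp)
    seen.append(exp)
    history.append(strategy.eval(seen))

# After the last experience, the naive learner only "remembers" the
# most recent task; earlier tasks show large error.
print(history[-1])  # {'exp0': 2.0, 'exp1': 4.0, 'exp2': 0.0}
```

The point of the sketch is the loop structure: continual learning strategies differ in how `train` balances new and old experiences, and a shared benchmark/strategy/evaluation split is exactly what makes their comparison reproducible.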
While a community effort from the start, with many contributors, an extra special thank you is due to Vincenzo Lomonaco (Lead Maintainer), Lorenzo Pellegrini (Lead for the Benchmarks module), Andrea Cossu (Lead for the Evaluation module), Gabriele Graffieti (Lead for Automation & Tests), and Antonio Carta (Lead for the Training module)!
Most importantly, we hope all of you will help us shape and achieve Avalanche's bold vision as well. We want you to join our Team! We are always looking for new maintainers and contributors!
Lastly, join us this Friday to hear more about the project at our reading group.
Announcements!
- The CL competition of the "Continual Learning in Computer Vision" workshop, supported by ContinualAI, is now OPEN! You can already submit your strategy, thanks to the awesome work of the workshop chairs. The event is planned to be one of the biggest ever organized in our community, so make sure to be there!
- We had wonderful continual learning meetups over the last couple of weeks (view the recordings of the meetups here). Stay tuned on Slack for the next meetup, or subscribe to the open CL mailing list, Continual Learning for AI, for updates on meetups and other events (including those external to CLAI).
- We've maintained a great reading group lineup. Can't make this week's reading group? No worries! See the past papers here, and watch the recordings of all our past events.
- The ContinualAI Lab collaborative team is always looking for contributors to the many open-source projects we have under development (including Avalanche). Contact us on Slack if you want to learn more about them and join us! We are always looking for motivated people willing to give back to this awesome community!
Top paper picks:
Nan Pu, Wei Chen, Yu Liu, Erwin M. Bakker, Michael S. Lew
Person ReID methods typically learn from a stationary domain that is fixed by the choice of a given dataset. In many contexts (e.g., lifelong learning), those methods are ineffective because the domain is continually changing, in which case incremental learning over multiple domains is potentially required. In this work we explore a new and challenging ReID task, namely lifelong person re-identification (LReID), which enables models to learn continuously across multiple domains and even generalise to new and unseen domains. Following the cognitive processes in the human brain, we design an Adaptive Knowledge Accumulation (AKA) framework that is endowed with two crucial abilities: knowledge representation and knowledge operation. Our method alleviates catastrophic forgetting on seen domains and demonstrates the ability to generalise to unseen domains. Correspondingly, we also provide a new and large-scale benchmark for LReID. Extensive experiments demonstrate that our method outperforms other competitors by a margin of 5.8% mAP in generalising evaluation.