
CAIDP PANEL AT THE COMPUTERS, PRIVACY, AND DATA PROTECTION CONFERENCE

BOSTON, January 25, 2021: The Boston Global Forum (BGF) and the Michael Dukakis Institute today announced an initiative to establish a worldwide agreement to promote the transformative benefits of artificial intelligence and safeguard against potential abuses. The inclusive global conversation is designed to ensure the responsible use of AI technologies by governments and business while protecting democratic values.

“Artificial intelligence is transforming every aspect of our lives,” said Governor Michael Dukakis in announcing the initiative. “Our goal is to stimulate a global conversation that will make sure AI is used responsibly by governments and the private sector around the world.”

Determining how to govern AI has attracted thoughtful proposals and standards from a wide range of organizations, from the United Nations and the Organization for Economic Cooperation and Development to the European Union and the African Union. The new initiative plans to build on the earlier work and expand collaboration on an international scale.

The Artificial Intelligence International Accord Initiative begins with three primary goals:

  • Build a consensus around the framework for an international accord on AI standards. 
  • Establish a Democratic Alliance on Digital Governance to support the accord and its companion document, the Social Contract for the AI Age.
  • Create a monitoring system to observe the uses and abuses of AI by governments and businesses and record violations of the accord and the Social Contract.

COUNCIL OF EUROPE PROPOSES BAN ON FACIAL RECOGNITION TECHNIQUES

In a press statement on January 28, 2021, the Council of Europe called for strict limits on facial recognition technologies. Furthermore, the Council stated, certain applications of facial recognition should be banned altogether to avoid discrimination. The Council cited risks to privacy and data protection.
In a new set of guidelines addressed to governments, legislators and businesses, the 47-state human rights organisation proposes that the use of facial recognition for the sole purpose of determining a person’s skin colour, religious or other belief, sex, racial or ethnic origin, age, health or social status should be prohibited.
 
This ban should also be applied to “affect recognition” technologies – which can identify emotions and be used to detect personality traits, inner feelings, mental health condition or workers’ level of engagement – since they pose important risks in fields such as employment, access to insurance and education.
 
“At its best, facial recognition can be convenient, helping us to navigate obstacles in our everyday lives. At its worst, it threatens our essential human rights, including privacy, equal treatment and non-discrimination, empowering state authorities and others to monitor and control important aspects of our lives – often without our knowledge or consent,” said Council of Europe Secretary General Marija Pejčinović Burić.
 
“But this can be stopped. These guidelines ensure the protection of people’s personal dignity, human rights and fundamental freedoms, including the security of their personal data.”
 
In the 2020 report Artificial Intelligence and Democratic Values, the CAIDP identified facial surveillance, the use of facial recognition for mass surveillance, as among the most controversial applications of Artificial Intelligence. The CAIDP report noted that many NGOs in Europe were pushing for a prohibition.

The COE January 28th announcement also marked the 40th anniversary of the original Council of Europe Convention 108, known as The Privacy Convention. The modernized Convention, “COE 108+,” explicitly addresses new challenges associated with AI deployment. The current COE Secretary General, Marija Pejčinović Burić, came into office in 2019. She is the former Deputy Prime Minister and Minister of Foreign and European Affairs for Croatia.

Announcements

Marc Rotenberg, Director
Center for AI and Digital Policy at Michael Dukakis Institute
The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy.

THIS WEEK IN THE HISTORY OF AI AT AIWS.NET - UNIMATE WAS THE FIRST INDUSTRIAL ROBOT TO WORK

This week in The History of AI at AIWS.net - Unimate, an industrial robot developed in the 1950s, became the first to work on an assembly line, in New Jersey in 1961.

Unimate was invented by George Devol, who filed the patent in 1954. Devol met Joseph Engelberger in 1956, and the two paired up to found Unimation, the first robot manufacturing company. Devol and Engelberger promoted Unimate on The Tonight Show. Engelberger then brought industrial robotics beyond the US as well.

The Unimate worked on a General Motors assembly line at the Inland Fisher Guide Plant in New Jersey. The robot transported die castings from assembly lines and welded parts onto autos. It was given this job because it was considered dangerous for human workers, due to workplace hazards such as toxic fumes. The robot had the appearance of a box connected to an arm, with systematic tasks stored in a drum memory.

Although this machine was not directly connected to Artificial Intelligence, it was a precursor to developments in that field. By deploying a robot that could perform tasks, this project took the first steps toward AI. Thus, the HAI initiative considers this a milestone in the History of AI.

JUDEA PEARL AND UCLA JOIN TOYOTA RESEARCH INSTITUTE'S AI PROGRAM

Judea Pearl, Chancellor’s Professor of Computer Science at the UCLA Samueli School of Engineering, has been selected to join the Toyota Research Institute’s program on artificial intelligence. With more than $750,000 in funding over the next two years, his research will focus on transforming data science from its current paradigm, “data fitting,” to one that leads to robust and useful data interpretation.

Through this program, the Toyota Research Institute — the automotive giant’s U.S.-based research arm — will lead 35 joint research projects aimed at achieving breakthroughs in automated driving, robotics and machine-assisted cognition. The program will include an investment of more than $75 million in UCLA and 15 other academic institutions.

According to Pearl, “data fitting” is the term used to characterize the current thinking that dominates both statistics and machine-learning cultures. This view is driven by a belief that the secret to rational decisions lies within the data themselves and requires sophisticated data mining. In contrast, those who subscribe to the school of data interpretation view data as an auxiliary means for interpreting reality, that is, the processes that generate the data.

“Deep Understanding” by Professor Judea Pearl was recognized as a part of the History of AI 2020, and he is the head of Modern Causal Inference at AIWS.net.

NGUYEN ANH TUAN PRESENTED THE AI INTERNATIONAL ACCORD INITIATIVE AT THE 14TH COMPUTERS, PRIVACY, AND DATA PROTECTION CONFERENCE

On January 28, 2021, Mr. Nguyen Anh Tuan, CEO of the Boston Global Forum, presented the AI International Accord Initiative at the panel “Toward an International Accord on AI”, together with Marit Hansen, State Data Protection Commissioner of Land Schleswig-Holstein (Germany); Eva Kaili, MEP (EU); Malavika Jayaram, Digital Asia Hub (Hong Kong); and Marc Rotenberg, Center for AI and Digital Policy at Michael Dukakis Institute.
Here is Mr. Tuan’s presentation:

  1. Scope:
  • To create a framework for the AI International Accord
  • To establish a Democratic Alliance on Digital Governance in order to support the AI International Accord and Social Contract for the AI Age
  • To frame a Monitoring System that observes uses and abuses of AI by governments and big tech companies and records notable violations of the AI International Accord and of the Social Contract for the AI Age.
  • To practice the Framework at AIWS City
  2. Key concepts and content of AI International Accord (AIIA):
Among the key topics for discussion will be:
- The current legal frameworks for AI and the essential elements of a global AI legal framework.
- Responsibilities of governments and companies in protecting data and privacy
- Preventing abuses by governments and businesses in AI, Data, Digital Technology, Cyberspace, including attacking companies, organizations, and individuals on the Internet.
- Creating norms to manage robotics and cyber-security
- Protecting Social Contract for the AI Age, democratic values, transparency, and accountability while ensuring equal opportunities across the socio-economic landscape.
- Collaboration and responsibility between governments and businesses in preventing international cybercrimes and criminals.
- Punishing governments and businesses which violate AI International Accord and/or Social Contract for the AI Age.
  3. Process:
  • The AIIA Team creates a Framework
  • Discussion:
+ AIIA Panels organized by Boston Global Forum and Michael Dukakis Institute
+ Quad Roundtables organized by Boston Global Forum and Riga Conference 2021
  • Connect with governments and international organizations
  • Set up alliances to implement and practice
  • Practice at AIWS City
  4. Endorse:
  • First step: Quad group and EU
  • Second step: OECD countries
  • Third step: United Nations
  • Fourth step: Russia and China
  5. Host and Partners:

Host: Boston Global Forum and Michael Dukakis Institute
Partners:
World Leadership Alliance-Club de Madrid
Riga Conference 2021
United Nations Academic Impact
Potential Partners:
European Commission
US, Japan, Australia, India, Sweden, Latvia Governments

Timeline of key events:
- Quad Roundtable, April 2021: AIIA
- Riga Conference 2021: Session about AIIA
- World Leadership Alliance – Club de Madrid, September 2021: World Leaders and AIIA
- AIWS City as the place to apply and respect AIIA
- Boston Global Forum, December 12, 2021: Announce the AIIA Accord and present the World Leader for Peace and Security Award 2021 to the leader who made significant contributions to AIIA

6. Leaders: Governor Michael Dukakis, President Ursula von der Leyen
 
7. Team:
- Nazli Choucri
- Tuan Nguyen
- Thomas Patterson
- David Silbersweig
- Marc Rotenberg
- Merve Hickok
- Alex Sandy Pentland
- Vint Cerf
- Prime Minister Zlatko Lagumdzija
- President Vaira Vike-Freiberga
- Prime Minister Esko Aho
- Yasuhide Nakayama
- Douglas Frantz
- P.S Raghavan
- Kimberley Kitching
- Zaneta Ozolina
- Stavros Lambrinidis
- Judea Pearl
- Randall Davis
 
Assistants:  Sandis Sraders, Larissa Zutter, Minh Nguyen

Official Online Host of events and discussions
AIWS Palace at AIWS City

DEPUTY MINISTER NGUYEN VAN THANH WRITES ABOUT AIWS CITY AND SMART CITIES

The world is entering the fourth industrial revolution (4.0) with the expectation that humanity will gain new achievements, bringing people lives of high satisfaction. However, this revolution also poses challenges from non-traditional security threats in four major groups of problems: environmental degradation, climate change, epidemics, and international terrorism. Every country, especially developing ones, will face multiple challenges in terms of socio-economy, science and technology, and the ecological environment.

Looking toward the 100th anniversary of the founding of the United Nations (2045), at the UN Roundtable 2045, Ramu Damodaran, Chief of the United Nations Academic Impact initiative and Editor-in-Chief of the United Nations Chronicle, and Vint Cerf, Father of the Internet, discussed a new model aimed at “the human-centric economics, the internet ecosystem and new artificial intelligence for work and life”. The concept of AIWS City appears in the model of the Smart City.

FIVE WAYS BUSINESSES CAN MAKE AI MORE ETHICAL

As nations across the world slowly reopen their economies after extended lockdowns, businesses will need to hit the ground running to operate in a new abnormal. One of the ways companies can count on meeting the acceleration with safety is by adopting smart tech, especially tools and platforms enabled by artificial intelligence.

However, because these tools and platforms are built on algorithms, there is concern that the use of AI technology might inadvertently introduce and perpetuate biases. In this area, a business’s commitment to ethical operation is a must in a more transparent world where consumers are keenly aware of a company’s track record and business conduct.

What can businesses do to effectively tackle this challenge? How can organizations safely deploy platforms enabled with AI to do more with less while ensuring that they are always doing the right thing?

Enterprises can undertake five best practices to ensure the adoption of AI does not go against the established rules of ethical corporate behavior.

First, organizations must have a clear understanding of what practicing ethical AI means to them and communicate this clearly to stakeholders. These communications should convey the core values that define a business, whether it is transparency, customer delight, or people-focus. An ethical application of AI will then mean that none of these values are compromised or watered down irrespective of the corporate function that executes it.

Second, businesses must invest in ensuring a more ethical application of AI by employing a chief AI ethics officer. This clearly defined position would be in charge of overseeing, limiting, and assessing how AI is embedded into an enterprise system. This would help to restrain a company from allowing AI to be used to carry out inappropriate or controversial functions, such as facial recognition.

Third, companies need to incorporate an AI ethics and quality assurance review as an integral part of the product development and release life cycles, including a focus on various use case scenarios and resulting outcomes. Every new AI-enabled product should be examined from an ethical lens to confirm that it adheres to established protocols around data safety and compliance.
Fourth, enterprises can ensure ethical AI deployment by turning to the customer. A cross-section of experts selected from a company’s advisory council can be leveraged in the testing of newly created AI enabled tools. Their inputs and experience can be funneled back into the product cycle to help the solution become more transparent, fair and impartial, safe, and ethical.
Finally, the fifth way for a business to adopt AI while staying within the confines of regulatory compliance and ethics is to remain transparent about how data is used to build algorithms. Since these algorithms tend to be opaque and complex, an enterprise, while balancing IP interests, may consider going beyond the call of duty to explain and describe to its customers what data is being sourced, and for what purpose.

Making a clearer link between the value data offers to build efficient algorithms and its potential ability to deliver superior customer experience will go a long way in assuaging customer concerns about the ethical deployment of AI to deliver products and services.

To support AI technology and development for social impact, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Artificial Intelligence World Society (AIWS.net) have developed the AIWS Ethics and Practice Index to measure ethical values and help people achieve well-being and happiness, as well as to address important issues such as the SDGs. Regarding AI ethics, AI World Society (AIWS.net) initiated and promoted the design of the AIWS Ethics framework, with four components, transparency, regulation, promotion, and implementation, for the constructive use of AI. In this effort, MDI invites participation and collaboration with think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of full-scale AI for world society.

Copyright © 2021 Boston Global Forum, All rights reserved.

