Plus, New Zealand’s Algorithm Charter, the House Intel Authorization Act and ODNI’s AI principles
policy․ai  
Biweekly updates on artificial intelligence, emerging technology and security policy. View at policy.ai or download a pdf. Forward to a friend, or sign up here.
Wednesday, August 5, 2020
by REBECCA KAGAN
Worth Knowing

Intel Considers Outsourcing More Chip Manufacturing Amid Delays: On July 23, Intel CEO Bob Swan announced the company was considering contracting out some of its chip manufacturing after delays in its 7-nanometer chip development. Outsourcing the manufacture of cutting-edge chips would mark a significant change: Intel, the world’s largest semiconductor firm, is known for both designing and manufacturing its own chips. The company’s stock fell nine percent on the news, while U.S.-based competitor Advanced Micro Devices’ stock rose six percent. Taiwan Semiconductor Manufacturing Company, the world’s largest contract chipmaker, is a likely contender for the outsourced work and already manufactures AMD’s chips; TSMC’s shares rose 9.5 percent.
New ML Training Benchmarks Set, Led by Nvidia: Industry consortium MLPerf released the results of the third round of its machine learning training competition, with records set by Nvidia and Google. MLPerf measured the time it takes hardware to train one of eight machine learning models — for tasks including image classification, translation and playing Go — to a specified level of performance. Among commercially available systems, Nvidia set 16 records using its DGX SuperPOD supercomputer and A100 GPU; Google led among non-commercially available systems with its newest TPU v4. The gains are due partly to chip improvements and partly to larger systems running more chips in parallel. MLPerf’s companion inference benchmark, which measures how quickly trained models can make predictions, will publish results later this year.
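For readers curious about the mechanics, MLPerf’s training metric boils down to wall-clock time to reach a fixed quality target. The Python sketch below illustrates that idea only; the model, train_batches and evaluate hooks are hypothetical placeholders, not part of the official MLPerf harness, which also fixes datasets, constrains hyperparameters and averages over multiple runs.

```python
import time

def time_to_target(model, train_batches, evaluate, target_quality, max_epochs=100):
    """Toy time-to-quality measurement: train until a quality target is reached.

    `model`, `train_batches` and `evaluate` are hypothetical hooks supplied by
    the caller; this illustrates the metric, not the MLPerf reference harness.
    """
    start = time.perf_counter()
    for epoch in range(1, max_epochs + 1):
        for batch in train_batches():
            model.train_step(batch)        # one optimizer update on one batch
        quality = evaluate(model)          # e.g., top-1 accuracy on a held-out set
        if quality >= target_quality:      # stop the clock once the bar is cleared
            return time.perf_counter() - start, epoch
    return None, max_epochs                # target quality never reached
```

Faster chips shorten each training step, while running more chips in parallel shortens each epoch, which is why both hardware improvements and system scale drove this round’s records.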
New Zealand Publishes Multi-Agency Algorithm Charter: New Zealand Minister of Statistics James Shaw launched an Algorithm Charter to guide government agencies’ use of algorithms. So far, 25 agencies have signed on, including the New Zealand Defence Force and the Ministry of Justice. The charter requires signatories to provide plain-English explanations of their algorithms, maintain plans to mitigate bias and protect human rights, consult with communities affected by the algorithms and offer channels for appealing algorithm-informed decisions. It also includes a risk matrix for evaluating algorithmic impact and must be applied to all high-risk processes within 12 months. Shaw believes the charter is the first attempt by any country to set standards for its entire government’s use of algorithms.
Investments in AI Startups Decline in Q2: According to CB Insights, global venture capital investment in AI-related startups declined in the second quarter of 2020 amid the economic downturn, although the tech sector may still be outperforming the global economy. The number of investments in AI companies dropped to a three-year low of 458 in Q2, and total funds raised by AI startups slipped from $8.4 billion in Q1 to $7.2 billion in Q2. The quarter also saw fewer seed funding rounds, with a greater share of investments going to mature companies. Global VC investment across sectors showed a similar decline. Even so, the tech sector outperformed many others in Q2, with tech unemployment rates continuing to decline and the largest tech companies reporting significant profits.
Government Updates

NSCAI Releases Second Quarter Recommendations: The National Security Commission on Artificial Intelligence published its Second Quarter Recommendations on July 22. It proposed establishing an accredited university within the federal government to meet the government’s need for digital expertise and creating a National Reserve Digital Corps, modeled after the military reserves, as a part-time service option for private sector talent. In total, the NSCAI proposed 35 recommendations across six areas including advancing the DOD’s internal AI R&D capabilities, accelerating the application of AI and expanding export controls and investment screenings. The report also included key considerations for responsibly developing and fielding AI.

State Department Loosens Restrictions on Drone Exports: The United States has eliminated its blanket denial of exports for some types of drones, the State Department announced on July 24. The policy shift is a reinterpretation of the 33-year-old Missile Technology Control Regime, an arms control pact among 35 nations that includes a “strong presumption of denial” for international sales of unmanned aerial systems capable of delivering a payload of at least 500 kilograms to a range of at least 300 kilometers. Critics of the decision, including Senate Foreign Relations Committee Ranking Member Bob Menendez, said the move undermines the MTCR and increases the likelihood of U.S. weapons being exported to human rights abusers. Proponents argued that the status quo allowed China — which does not participate in the MTCR — to capture a large part of the international drone market.

House Intelligence Authorization Act Progresses With AI Provisions: The House Permanent Select Committee on Intelligence approved the FY21 Intelligence Authorization Act in an 11-8 vote on July 31. The bill includes several sections on AI, one of which emphasizes the value of consolidating AI efforts across the Intelligence Community and tasks the Director of National Intelligence with identifying and developing plans for AI projects that advance the mission of the IC. The IAA also directs the Intelligence Advanced Research Projects Activity to award grants and contracts that encourage microelectronics research. Other provisions require improvements to STEAM education and place the Director of Science and Technology directly under the Director of National Intelligence.

ODNI Releases AI Ethics Principles and Framework for the IC: On July 24, the Office of the Director of National Intelligence published Principles of AI Ethics and an AI Ethics Framework outlining norms to guide the Intelligence Community’s use of AI. The principles call for AI’s use to be legal, transparent, objective, human-centered, secure and informed by science. The corresponding framework offers a series of questions to help implement the principles and to document relevant considerations involved in procuring, designing and using AI.

In Translation
CSET's translations of significant foreign language documents on AI


Lessons From U.S. Military-Civil Fusion: Characteristics of, and Lessons From, the U.S. Legal System for Military-Civil Fusion. This 2018 article by a Chinese state think tank praises the U.S. integration of the military and civilian industrial bases as a model for China. The article argues that China can learn much about "military-civil fusion" from U.S. legislation on this issue.

PRC Export Control Law: Export Control Law of the People's Republic of China (Draft) (Second Version). This draft export control bill was being considered by China’s parliament, the National People’s Congress, as of July 2020. The bill limits exports of dual-use items, military equipment, nuclear materials and other goods of counterproliferation concern. It also sets penalties for Chinese exporters who violate the provisions of the bill.

What We’re Reading

Report: Face Recognition Accuracy With Masks Using Pre-COVID-19 Algorithms, National Institute of Standards and Technology (July 2020)

Report: Emerging Military Technologies: Background and Issues for Congress, Congressional Research Service (July 2020)

Paper: Towards a New Generation of Artificial Intelligence in China, Fei Wu et al., Nature Machine Intelligence (June 2020)

What’s New at CSET

FORECASTS
CSET has launched a crowd forecasting platform. Sign up as a forecaster and take a look at some of the predictions so far.

What else is going on?
Suggest stories, documents to translate & upcoming events here.

policy․ai is written biweekly by Rebecca Kagan and the CSET staff.  Share your thoughts or get in touch with tips, feedback & ideas at rebecca.kagan@georgetown.edu. Want to talk to a CSET expert? Email us at cset@georgetown.edu to be connected with someone on the team.
The Center for Security and Emerging Technology (CSET) at Georgetown’s Walsh School of Foreign Service is a research organization focused on studying the security impacts of emerging technologies, supporting academic work in security and technology studies and delivering nonpartisan analysis to the policy community. CSET aims to prepare a generation of policymakers, analysts and diplomats to address the challenges and opportunities of emerging technologies.

 
We're Hiring!





