
AI in U.S. election campaigns


Introduction: The candidates

Presidential candidates Trump and Biden are polar opposites in many respects, but their campaigns share common ground that, surprisingly, is rarely discussed by media pundits. The 2020 presidential election could be decided by vast data-gathering efforts by both parties, which have harnessed the dangerous capabilities of communication tools powered by artificial intelligence (AI).

Chatbots

For example, the chatbot—short for “chat robot”—is one such AI-powered tool, programmed to simulate and engage in conversations with voters, informed and guided by everything worth knowing about each voter’s life and profile. Leveraging algorithms and natural language processing (NLP) to microtarget voters, conversations between chatbots and voters are continuously fine-tuned through automated Big Data integrations that capture voters’ opinions, moods, likes, and dislikes. It is surprising that AI-enabled microtargeting by political campaigns, involving millions of potential voters in 2020, has not gained more visibility in recent media coverage.
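To make the idea concrete, here is a deliberately simplified sketch of how a campaign chatbot might tailor its talking points to a voter's profile. All of the names, messages, and data below are invented for illustration; real systems use large machine-learning models rather than a lookup table.

```python
# Hypothetical sketch: a chatbot picks a canned talking point
# by matching a voter's top issue from their (invented) profile.

MESSAGES = {
    "economy": "Our candidate's plan will bring jobs back to your town.",
    "healthcare": "Our candidate will protect your family's coverage.",
    "climate": "Our candidate backs clean-energy investment.",
}

def pick_message(profile: dict) -> str:
    """Return the talking point matching the voter's top issue."""
    issue = profile.get("top_issue")
    # Fall back to a generic message when the issue is unknown.
    return MESSAGES.get(issue, "Our candidate shares your values.")

voter = {"name": "Alex", "top_issue": "healthcare"}
print(pick_message(voter))
```

In a real deployment, the profile would be assembled from the Big Data sources described above, and the message would be generated and refined by NLP models rather than selected from a fixed list.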

Chatbots and other AI tools now used for political microtargeting were developed over the years by tech and marketing companies to acquire and retain customers and to build brand and product loyalty. The digital advertising ecosystem is therefore a ready-made toolset for softening negative views of candidates and inducing more positive ones. In addition, these tools connect voters to online communities of candidate supporters, where opinions and perspectives are shared and reinforced. Chatbots can even provide virtual “coaches” that mentor voters, or communities of voters, to become more effective advocates for candidates and to achieve political-organizing goals.

Click here to continue reading

YouTube short video: 
Christopher Wylie explains how AI
can manipulate political discourse

In this very brief interview, Mindf*ck author Christopher Wylie (the author of this month’s recommended book) explains just how subtly AI permeates our information landscape, drawing on a real-world example: digital news coverage of the Notre Dame cathedral fire.

 

The contrast worth highlighting is that while his book focuses on deliberate efforts by Cambridge Analytica to influence elections, in this interview he broadens the scope to point out that malicious intent is hardly necessary. We have created a digital world that incentivizes reactions first and foremost, so the negative impacts are an inevitable result, not just a risk. It truly is on us as consumers to try to shift the tide, or at least, in the meantime, to be very conscious of the things we are "served" online.

As he says in his opener: none of it is random.

Our recommended book this month

Mindf*ck: Cambridge Analytica and the Plot to Break America by Christopher Wylie

This book (and its author) has received some criticism for a lack of objectivity, missing references and citations, possible inaccuracies, and so forth, but it nonetheless makes for compelling and persuasive reading about how the work of Cambridge Analytica (CA) influenced the 2016 U.S. elections as well as the Brexit Leave campaign. The author is a CA insider who worked on many of its programs, got to know key CA players such as the infamous Steve Bannon, and became a whistleblower after leaving the company.

Revelations about Facebook’s role in the 2016 campaign, intentional or otherwise, periodically crop up in the media as Zuckerberg & company struggle to clean up their act, but Wylie’s book helps to illuminate the enormity and complexity of the challenge. As a refresher, CA created an app that mined the data of close to 90 million Facebook users during the 2016 presidential campaign, and in the process identified those who could best be targeted by odious messages designed to stoke their anger and divisiveness, thereby driving them to vote for the alt-right’s candidate.

Wylie’s book makes it clear that our increasing reliance on technology and the Internet has opened a window for mathematicians and data researchers to gaze into our lives. Using the data they are constantly collecting about where we travel and shop, what we buy, and what interests us, they can predict our daily habits and make us more vulnerable to political influence and manipulation. Beyond simple invasion of privacy, and mostly unknown to the very individuals being targeted, people in the U.S. and around the globe are relinquishing their political and general decision-making to algorithms.

While Wylie exposed Cambridge Analytica, which was an example of intentional abuse of the Facebook advertising system, the New York Times has recently been reporting on Facebook’s efforts to crack down on activity connected to QAnon. Described as a “conspiracy theory” or “mass delusion” claiming that a satanic cabal runs the world, the situation is made more complicated by the fact that it mostly involves individual users using Facebook as intended—as a social network. However, worse than merely propagating this theory, according to the N.Y. Times, different movements within the orbit of QAnon are using Facebook to call for armed conflict or to claim to expose human and child trafficking. Worse still, Facebook’s own recommendation engine and its potent algorithms have apparently been pushing large numbers of people toward promoters of QAnon conspiracies, despite reassurances to the contrary from Facebook. Weekly comments, likes, and posts from followers of QAnon groups now number more than 600,000.

All of this has been happening under new Facebook rules aimed at limiting the spread of QAnon’s content (by banning QAnon-themed groups and pages as of Oct. 6) along with content from other extremist groups. None of this was supposed to happen. The key question is what Facebook’s leadership is actually doing to prevent it ahead of the Nov. 3 election. For QAnon groups, President Trump is a hero, and his declarations about conspiracies and his misinformation about widespread voter fraud are being amplified. The New York Times has researched the ways in which the QAnon movement has been evading detection on Facebook, even with the platform’s new restrictions. Facebook claims that it has been evaluating ways to disrupt such activity. Twitter has taken more aggressive action, removing thousands of QAnon accounts, but a great many still manage to return.

Wylie’s exposé paints Facebook as the connection between Cambridge Analytica and other, even darker stories: the QAnon conspiracy movement and countless militarized social movements and anarchist groups enabled by Big Tech’s social media platforms. Most concerning for us all should be the failure of new policies from Facebook and Twitter aimed at cracking down on movements that leverage social media to celebrate violence, and the fact that for years social media technology and its potential for abuse have outpaced Congress’s capacity to regulate or control both Big Tech and the mayhem inspired by outliers like QAnon. As we mention in our “AI in U.S. Election Campaigns” headline article, we should not stop holding their feet to the fire to find a solution, but we as consumers also cannot afford to depend on big tech companies to eliminate these threats. It will be on us as consumers to be watchful and mindful of the content we consume and, more importantly, share.

 

ARNOLD’S ANALYSIS

Microtargeting

By ARNOLD SCHUCHTER, St. James Faith Lab Tech Editor

Elections in a democracy are supposed to be the time when the people of a country or political jurisdiction exercise the power to choose their next government and bestow power on their elected leaders. Every voter has their own ideologies and expectations that they would like to see a candidate fulfill. The main objective of political parties is to sway voters toward their respective candidates. Traditionally, politicians have pursued this objective by meeting voters in person and through mass-media advertising, public rallies, social media campaigns, and the like. In recent years, however, technology has drastically transformed the whole approach to campaigning for public office.

Politicians now rely heavily on Big Data, AI, machine learning, and analytics to connect and engage with voters. Software programmed to analyze voters’ online behavior, their data consumption and social media patterns, and a host of other factors enables the creation of unique psychographic and behavioral voter profiles. Microtargeted advertising campaigns can then be streamed to each voter based solely on their unique individual profile. Every voter can see a different version of each candidate, one that echoes key aspects of the voter’s personal history and psychographic profile. Automated social media bots can be used, for example, to increase targeted Twitter traffic favoring a candidate or to deliver cleverly manipulated narratives aimed at damaging one.
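A toy sketch can show the basic mechanics of the targeting step. The scoring rule, field names, and voter records below are all invented for illustration; actual campaign analytics rely on statistical models trained on far richer data.

```python
# Toy illustration: rank hypothetical voters by "persuadability"
# so a campaign can microtarget the most movable ones first.
# The weights and fields are invented for this example.

def persuadability(voter: dict) -> float:
    # Undecided, highly engaged voters score highest.
    undecided = 1.0 if voter["leaning"] == "undecided" else 0.3
    return undecided * voter["engagement"]  # engagement in [0, 1]

voters = [
    {"id": 1, "leaning": "undecided", "engagement": 0.9},
    {"id": 2, "leaning": "decided", "engagement": 0.8},
    {"id": 3, "leaning": "undecided", "engagement": 0.4},
]

# Sort so the most persuadable voters come first.
targets = sorted(voters, key=persuadability, reverse=True)
print([v["id"] for v in targets])  # prints [1, 3, 2]
```

The point of the sketch is the pipeline, not the numbers: profile data goes in, a score comes out, and ad delivery is prioritized accordingly, voter by voter.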

It is likely that a key to President Trump’s reelection will be motivating uncertain voters to turn out. For Joe Biden, the election may depend on persuading swing voters that he is not too liberal. For both candidates, AI-driven microtargeted campaigning is the perfect strategy for influencing voters who are on the fence, nervous about a candidate’s ideology, or leaning one way or another. Microtargeting enables individualized messages tailored to the political priorities of each voter—COVID-19, the economy, healthcare, law and order, racial justice, climate change, immigration, etc.—using demographic and other data.

Another recent example of AI gone wrong is its use in U.S. elections to generate “deepfakes”: audio or video, generated by AI, that shows someone saying or doing something they never actually said or did. Thankfully, this has not been as big an issue this election cycle as some experts predicted. So far, deepfakes appear to constitute only a tiny fraction of the automated personalized messages that make up the billions spent on digital media in U.S. presidential elections.

In the near future and beyond, both commercial and political organizations will no doubt harness cloud-based analytical computing power to accomplish their goals. Organizations of every size and financial wherewithal will be able to leverage AI inexpensively, drawing on tremendous cloud computing power. How do we know? Savvy stock-market investors have shown that they know, which accounts for the recent astronomical jump in Snowflake’s IPO price. Investors in Snowflake (including Warren Buffett) were betting on a future in which businesses (and political organizations) of all sizes can use Big Data and AI to make every kind of decision and also personalize all online interactions with customers—and voters!—in real time.

Helpful terms and topics

We have prepared a glossary of helpful terms and topics, from artificial intelligence all the way to 5G, which you can find at our website by clicking the above link.

 
Copyright © 2020 St. James Faith Lab, All rights reserved.


Our website is https://www.stjamesfaithlab.org

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.

Email Marketing Powered by Mailchimp