Have you ever built something you were ashamed of?

“As developers, we are often one of the last lines of defense against potentially dangerous and unethical practices.

“We’re approaching a time where software will drive the vehicle that transports your family to soccer practice. There are already AI programs that help doctors diagnose disease. It’s not hard to imagine them recommending prescription drugs soon, too.”
Bill Sourour wrote these words in “The code I’m still ashamed of,” an essay that recently went viral. In it, he recounts the story of a job that asked him to build a website surreptitiously promoting a pharmaceutical drug, and the consequences of the site’s unethical design. His essay prompted a wave of confessional posts on Hacker News and Reddit from other engineers discussing their own struggles with what they saw as unethical uses of their skills.

 

The conversation about the ethics of technology development has captured the cultural zeitgeist. From the pages of Hacker News and the New York Times to the programming of HBO and Netflix, people are grappling with whether we should build everything we can build. It is hard to say why this conversation suddenly seems ubiquitous, but there is no denying its urgency.

 

For example, D/I Advisor and Microsoft Researcher Kate Crawford recently posted a brief tweetstorm cautioning technologists to consider their ethical limits in building new technology for the government. “The tech industry already builds tools for predictive policing, criminal justice risk scores, and tracking refugees,” she wrote, citing technologies that are already ethically problematic at best. What else might technologists be asked to build, and where will they draw the line?

 

Recent research, including work by D/I Professor Michael Luca, demonstrates that the algorithms underlying these kinds of technology actually encode and reinforce human biases. He recently discussed his research on algorithmic bias on the Airbnb platform in an interview with Boston’s WBUR.



In some cases, you might even call algorithms “weapons of math destruction,” as Harvard PhD Cathy O’Neil does in her recently published book of that name. O’Neil spoke with our friend Walt Frick, Senior Associate Editor at HBR, on the HBR Ideacast about the ways algorithms now perform critical HR functions, building in biases that lock people out of jobs for ill-advised, and sometimes illegal, reasons.


In pop culture, Netflix’s Black Mirror continues to shock us with all-too-realistic visions of the impact technology could have in the near future. Meanwhile, HBO’s new hit show, Westworld, touches on a particularly robust social debate about artificial intelligence. It features humans grappling with the implications of building artificially intelligent, nearly “real” robots and then forcing them to serve as willing victims of humanity’s darkest desires.
 


While it might seem like rich fodder for a sci-fi show, artificial intelligence at that level may be closer than we think: Google’s Neural Machine Translation system seems to have invented its own language, and some of society’s most brilliant minds are warning us about the risks of unconstrained AI and backing active research into how to manage its development so that it doesn’t pose a threat to humans.

 

Code, algorithms, and artificial intelligence only begin to scratch the surface of the ethical challenges posed by new technology development. We’ll cover some others in upcoming newsletters, but in the meantime, we’d love to hear from you:
 

  • What are some of the technologies that concern you the most?
  • How might we make sure that technology is developed ethically?

Tweet your answers to @dighbs to be included in next week’s newsletter. For deeper thinking about the nature of the problem and potential solutions, check out Mark Bessoudo’s piece “Plato for Plumbers.”
Competing on a Common Platform: Explaining Firm Engagement in Collectively Managed Ecosystems
Siobhán O'Mahony and Rebecca Karp, Boston University
Digital Seminar Series, Harvard Business School Campus
Did you like this week's newsletter?

From now until the holiday break, we'll be testing new formats to find the style that's most valuable for our community. Click once below to let us know.

Click here if YES

Click here if NO
The Harvard Business School Digital Initiative studies the digital transformation of the economy and seeks to shape it by equipping leaders, building community, and conducting leading-edge research. This newsletter is produced by Matt R. Tucker, Platform Manager at the Digital Initiative.

This Bi-Weekly Newsletter by the Harvard Business School Digital Initiative is licensed under a Creative Commons Attribution 4.0 License.

