Out of Sync on Powerful New Technologies (& Other Important Things)
At this one-year anniversary of Russia’s invasion of Ukraine, I’m still struck, fairly regularly, by the disparity between America’s response to it and the far more tepid reactions of Ukraine’s European neighbors.
 
France, Germany—they’re closer to the carnage, after all. They remember the Iron Curtain falling across their continent, the Russians marching into Prague. They’ve had a catbird seat while their Bear of a neighbor nearly annihilated Chechnya, Syria and now cities like Mariupol. They couldn’t have missed how these invaders found gardens but left deserts.
 
Do they really think this can’t happen again in Dresden or on Rue St. Germain? They must not, because a grand alliance will stop it from happening (“not one inch of NATO soil,” intones Jens Stoltenberg). But that German-inflected bravado exists mostly because of the U.S. and the tens of billions it is willing to spend to champion its prerogatives on the world’s stage, while almost every European country (except those on Russia’s front line, who seem to know better) has been happy to let taxpayers in Philly and Newark and LA foot the tab for security in their neighborhood.
 
It’s the wooden strut of Biden’s aviators on the one hand, more than a little freeloading on the other—and the two are ominously out of sync.
 
This is how columnist (and former Brit) Gerard Baker described their side of the disparity this week: “Europeans… continue to rule an empire of their own mind—a curious realm that combines ‘imagine there’s no countries’ post-historical pacifism with cynical economic opportunism” that, for decades now, the U.S. has indulged.
 
It’s really pretty startling. But then again, the disparate reaction to Russia’s “special military operation,” even from our closest friends, is only one place where the allied world’s response to impending dangers is troublingly “misaligned” today, with one side seeming to “see things clearly” while the other sees an opportunity to indulge its fantasies and do more than a little profit-taking.
 
To help maintain this mildly apocalyptic tone, the images this week are courtesy of artist (and poet) William Blake, that conjurer of humans touching powers that appear far beyond their control. The famous image above is called “The Ancient of Days” (1794), where Urizen, who masquerades as the embodiment of reason and law, turns out to be a repressive force trying to impose his brand of uniformity upon mankind. The image below is Blake’s “Jerusalem” (1804), where a figure carrying a mysterious orb invites us through a door that could lead to unspeakable dangers and maybe to death itself. And finally, near the conclusion, is “Urizen in Chains” (also from 1794), no longer able to harm anyone once the subjugated finally wake up.
A second, equally profound disparity between Americans and Europeans lies in how differently we embrace rising, transformative technologies before we understand their potential downsides. Teenager-toxic social media, distraction-inducing smartphones, boyhood-addicting video games: Europeans have been far quicker than Americans to say “Hold on a minute” and then to build a regulatory regime around “these marvels” so that their harms to specific groups (or to just about everybody) can be addressed.
 
(Of course, the cynic in me wonders whether these differences arise because it’s American companies like Facebook, Apple and Google, rather than French or German ones, that are profiting the most from these innovations, but it’s far more than that, I think. Unlike the “freedom-loving individualism” of the U.S., the EU countries have generally built family-friendly safety nets around their citizens—think subsidized daycare, family leave, cradle-to-grave health care—so protecting their citizens from tech-related harms is consistent with social priorities these countries have long embraced because they cherish collective over individual well-being.)
 
By contrast, the American approach to a transformative technology is more like the ostrich with its butt in the air and its head in the sand. I described the early thrill and belated horror that’s often been ours at some length in October 2020:
 
“Given the speed of innovation and the loftiness of its promises to improve our comfort or convenience, we often embrace a new technology long before we experience its most worrisome consequences.  As consumers, we are pushed to adopt new tech (or tech-driven services) by advertising that ‘understands’ our susceptibilities, by whatever the Joneses are buying into next door, and by the speculation ‘that somehow it will make our lives better.’ The sticker shock doesn’t come until we realize that our natural defenses have been overwhelmed and we’ve been herded by marketers like so many sheep….
 
“[Moreover] as consumers, we feel entitled to make decisions about tech adoption on our own, not wishing to be told by anybody that ‘we can buy this but can’t buy that,’ let alone by authorities in our communities who are supposedly keeping ‘what’s good for us’ in mind. Not only do we reject a gatekeeper between us and our ‘Buy’ buttons, there is also no Consumer Reports that assesses the potential harms of these technologies to our autonomy as decision-makers, our privacy as individuals, or our democratic way of life — no resource that warns us ‘to hold off’ until we can weigh the long-term risks against the short-term rewards. As a result, we defend our unfettered freedom until we start discovering just how terrible our freedom can be.”

 
Of course, on the other side of the consumer equation are American businesses driven by unimaginable profits. In that regard, over the past several weeks, we’ve been witnessing the next “transformative” technology being peddled to consumers by the tech giants as well as a couple of well-positioned smaller companies. No longer the stuff of back-room laboratories, generative AI (or artificial intelligence programs “that can think for themselves”) has finally come out of the closet, most prominently in ChatGPT, but also in lesser-known yet equally powerful applications like Midjourney.
 
“Everyone’s experimenting with them!” exclaimed the Wall Street Journal on Tuesday’s front page, an outlet that knows only too well that beneath the froth it’s reporting lurk potential dangers.
 
“In the past, AI was hidden within layers of back-end infrastructure for streamlining logistics or automating content moderation. Now, applications like ChatGPT and the image-generator Midjourney have placed the technology directly into the hands of individuals and small businesses who are using the tools to see if they can automate laborious tasks or speed up creative processes. Some are driven by the thrill of being able to do things not previously possible; others by an existential push to master the nascent technology so they don’t fall behind…. 
 
“AI experts caution, however, that such tools should only be used to support people who are already experts in their domain. Generative AI has been shown to spew disturbing content and misinformation, while other concerns have surfaced over intellectual property theft and privacy.
 
“‘The purpose that it is serving is not to inform you about things you don’t know. It’s really a tool for you to be able to do what you [already] do better,’ said Margaret Mitchell, chief ethics scientist at AI research startup Hugging Face.”

 
("things you don't already know about" and a start-up called Hugging Face!) 
 
Even at this early stage, some of the benefits and likely burdens seem clear. For example, if you’re an architect, you can take a client’s sketches for a new wing or outpost, feed that information into a Midjourney application, and (Voila!) it will “generate” new variations of the concept using different materials while keeping within the client’s original design specifications. 
 
On the other hand, some white-collar professionals (including lawyers, accountants and, I suppose, architects) fear that their jobs will be replaced by programs like these in the same way that robots have replaced many factory jobs. Others speculate about “public comments” on, say, pending regulations, or even reviews on sites like Yelp, being tainted by AI-driven but human-sounding feedback. And that’s in addition to the privacy and IP concerns and the credible harms that we haven’t had the time to imagine yet.
 
Because today, one of the challenges to cost-benefit analysis is the speed at which we have to perform it.
 
It's a point that's been driven home by those who have studied the pace and impact of new technologies on our societies in the past. The speed with which artificial intelligence can "generate" new products will accelerate impacts that used to take generations for us to experience and then domesticate—like the railroad, telephone or automobile—forcing us to cope with its new burdens (in particular) in something that approaches “real time.”
 
Here is one voice that has considered the advent of generative AI against the pace and impact of innovation historically:
 
“One reason why artificial intelligence is such an important innovation is that intelligence is the main driver of innovation itself. This fast-paced technological change could speed up even more if it’s not only driven by humanity’s intelligence, but artificial intelligence too. If this happens, the change that is currently stretched out over the course of decades might happen within very brief time spans of just a year. Possibly even faster.
 
“…As this technology is becoming more capable…it can give immense power to those who control it (and it poses the risk that it could escape our control entirely) [or the dystopian future that was imagined by the Terminator movies]….
 
“Because of the immense power that technology gives those who control it, there is little that is as important as the question of which technologies get developed during our lifetimes. Therefore I think it is a mistake to leave the question about the future of technology to the technologists. Which technologies are controlled by whom is one of the most important political questions of our time, because of the enormous power that these technologies convey to those who control them.
 
 “We all should strive to gain the knowledge we need to contribute to an intelligent debate about the world we want to live in. To a large part this means gaining the knowledge and wisdom on the question of which technologies we want [and, of course, which ones we don’t want].”
 
On this author's point about not trusting our technologists, you can check out a piece in the Times this week about ChatGPT called “History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot.”
 
So we're back in the world of politics, needing to solve one more looming problem with massive consequences. But no worries. Our elected representatives will surely “have our backs.” And in that half-minute before midnight, when Skynet is about to take over and Arnold Schwarzenegger is not around to save us, our Freedom-Loving People will certainly rouse themselves (like the Ukrainians have) into doing something, anything, to pull us back from the brink of disaster.
 
But all kidding aside, where is our government today on the next opportunity that's threatening us?
 
Well, last Tuesday, as luck would have it, someone at Fast Company was also wondering “Will Congress Miss Its Chance to Regulate Generative AI Early?” Its author no doubt remembers that this is the same legislative body that was caught on TV asking Mark Zuckerberg how Facebook made its money as recently as the Trump administration. Unfortunately, the consequences of AI are no more of a comedy than social media's were. Still, the Fast Company article starts out hopefully.
 
“Many in Washington now believe that an effective regulatory regime must be put in place at the beginning of new technology waves to push tech companies to build products with consumer protections built in, not bolted on.”
 
For example, California Congressman Ted Lieu, one of only three members of Congress with a computer science background, recently introduced a resolution directing the House to open a broad study of generative AI technology “in order to ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans . . .” For his part, Oregon Senator Ron Wyden spearheaded a bill last year that would require tech companies to file “impact assessments of automated decision systems and augmented critical decision processes” powered by AI with the Federal Trade Commission before they are commercialized. But most in Congress are out of their depth when it comes to cutting-edge (and maybe any kind of) technology because of the steep learning curve and their other preoccupations.
 
Sadly, that will likely leave generative AI, “the next big wave” of innovation, to be commercialized without regard to “the public’s interest” in its impacts.
 
“If social media relied on a person’s friends and family to customize and personalize the content a user gets from the internet, the thinking goes, then future apps might use generative AI to create endless amounts of customized and personalized content out of whole cloth. Future generative AI apps may generate any kind of multimedia experience (chatbots that sound like your best friend, create-your-own-plot movies, custom games, etc.) the user can think to ask for, and some they could never think of.” 
 
Shouldn’t somebody, anybody (besides the tech companies that stand to profit) be putting up a few guardrails before we get here?
If not shackles, at least some domestication.
The 2020 post I quoted from above was about some of my most inspired neighbors. It was called “The Amish Test & Tame New Technologies Before Adopting Them: We Can Learn How to Safeguard What’s Important to Us Too.” Following the Amish example, I also proposed “technology testers” and ultimately “gatekeepers” like the ones these Pennsylvania Dutch farmers and manufacturers use, to ensure that a new technology (think a smartphone or the internet itself) “serves the human purposes that are most important to the group while also recommending suitable safeguards (like age or use restrictions).”
 
This kind of screening process wouldn’t stop an innovation—the Amish already use a startling mix of modern technologies to their advantage—just slow down acceptance of "the next, shiny new thing" until there is a better understanding of its likely consequences from the community’s perspective.
 
In that same what’s-the-hurry spirit, I also proposed a crowd-sourced equivalent of Consumer Reports, which could publish “its assessments on a quality-controlled Wikipedia-type page” that every consumer could see and potentially add to “with the aim of laying out the risks (as well as rewards) of new technologies before they’re widely adopted.”
 
The only problem is that these kinds of community-protecting initiatives would have a far better chance of taking off in Europe than they would here in “don’t tread on me or my freedoms” America. But just as Europe may be a somewhat spineless battle partner in Ukraine, it may be the more constant of the two of us when it comes to staying on (if not ahead of) the curve on potentially world-changing technologies.
 
Given all of the publicity around ChatGPT and generative AI this week, I was hardly surprised to read in the Wall Street Journal that “Privacy Regulators Step Up Oversight of AI Use in Europe.” With the sudden growth of AI business applications and the EU's imminent rule-making on technology’s impacts more generally, Europe’s governments are pushing privacy regulators to open dedicated AI units and hire new staff.
 
Those are almost the last things we’d think of doing here.
 
Maybe when it comes to a ground-breaking technology, as well as to confronting Russia in Ukraine, we could begin to acknowledge our respective strengths and learn some much-needed prudence from one another.
 
+ + +
 
Have a good week. With any luck, I’ll see you next Sunday.
It’s always good to hear from you. Just hit “Reply.”