ARTIFICIAL INTELLIGENCE:  WHAT I WORRY ABOUT

This is the title I chose for my personal blog, which is meant to give me an outlet for one of my favorite crafts – writing – and to use an image from my favorite sport, golf.  Out of college, my first job was as a reporter for the Daily Astorian in Astoria, Oregon, and I went on from there to practice writing in all my professional positions, including as press secretary in Washington, D.C., for a Democratic congressman from Oregon (Les AuCoin), as an Oregon state government manager in Salem and Portland, as press secretary for Oregon’s last Republican governor (Vic Atiyeh), and as a private sector lobbyist.  This blog also allows me to link another favorite pastime – politics and the art of developing public policy – to what I write.  I could have called this blog “Middle Ground,” for that is what I long for in both politics and golf.  The middle ground is often where the best public policy decisions lie.  And it is where you want to be on a golf course.

Will artificial intelligence change the world?

And, if it does, will it be for the better or for the worse?

From what little I know, I worry about the worse.

If left unchecked, AI can spread disinformation, allow companies to hoard users’ personal data without their knowledge, exhibit discriminatory bias, or cede countless human jobs to machines.

There are also worries that AI systems will result in unfair incarceration, spam and misinformation, and cyber-security catastrophes – and, eventually, a “smart and planning” AI that will take over power plants, information systems, hospitals, and other institutions.

Who knows?

I don’t.  Nor, for example, do Washington Post editorial writers who recently commented on nascent efforts in Congress to regulate AI.  They favor what I would call “smart regulation.”

Here is a summary of what the writers said:


“The conversation about artificial intelligence tends to devolve into panic over humanity’s eventual extinction, or at the very least subjugation:  Will robot overlords one day rule the world?  

“But machine-learning is more than a hypothetical, and it presents plenty of immediate problems that deserve attention, from the mass production of misinformation, to discrimination, to the expansion of the surveillance state.

“These harms — many of which have been with us for years — ought to be the focus of AI regulation today.

“The good news is that Congress is on guard, holding hearings and drafting bills that attempt to grapple with these new systems that can absorb and process information in a manner that has typically required human input.  Bipartisan legislation is under discussion, spearheaded by Senate Majority Leader Charles E. Schumer (D-New York).

“The bad news is that nothing so far is close to comprehensive — and piecing these ideas together with steps the White House and federal agencies have already taken entails some conflict and confusion.  Before the country can even start to agree on a single, clear set of rules for these rapidly evolving tools, regulators need to agree on some basic principles.”

So says the Post.  Here are the points it says should be part of the discussion about smart regulation.

AI systems should be safe and effective

This one is pretty basic.  Anyone designing these tools should conduct a thorough evaluation of any harm they might cause, take steps to prevent it and measure the rate at which that harm occurs.  Guarding against misuse or abuse could be trickiest of all.  Already, con artists are using AI apps to simulate the voices of victims’ loved ones to persuade them to fork over cash; deepfake videos of celebrities and political candidates could threaten reputations or even democracy.

AI systems shouldn’t discriminate

This principle nicely ties in with the safety and effectiveness guarantee — impact assessments, for instance, can help guard against discrimination if they measure effects by demographic group.

But to root out bias, it will also be essential to examine the data used to train these algorithms.  Consider data drawn from criminal justice databases where higher arrest rates of minorities are baked in.  Reusing those numbers, for example, to predict a convict’s chances of recidivism could end up reinforcing racist policing and punishment.

AI systems should respect civil liberties

As always when personal data is involved, privacy is key.  Essentially, what companies can and can’t do should depend on what consumers would reasonably expect.

Then, there’s the question of privacy in how these systems are used.  The Chinese Communist Party has notoriously installed more than 500 million cameras around the country; it’s impossible to hire 500 million people to monitor them, so AI does the job.

AI systems should be transparent and explainable

People also need to know when they’re interacting with an AI system, period — not only so no one falls in love with their search engine, but also so, if one of these tools does cause injury, whoever has been hurt has an avenue to seek recourse.  That’s why it’s important for AI systems to explain both that they’re AI and how they work.

Putting principles to work

AI isn’t one thing — it’s a tool that allows for new ways of doing many things.  Applying a single set of requirements to all machine-learning models wouldn’t make much sense.  But to figure out what those requirements should be, case by case, the country does need a single set of goals.

Then, the Post makes a cogent argument against what it calls “stringent AI regulation” because, it adds, “these technologies are going to exist regardless of whether the United States allows them.

“Instead, it will be countries such as China that build them, without the commitment to democratic values that our nation could ensure.  Certainly, it’s better for the United States to be involved and influential than to bow out and sacrifice its ability to point this powerful technology in a less terrifying direction.

“But that’s exactly why these principles are the essential place to begin:  Without them, there’s no direction at all.”

So, all of this does not leave me comfortable with AI – especially since I don’t understand it well enough to be.  It just underlines a simple, yet complicated, word – balance.

We need to find balance in the regulations that are promulgated.  Regulation with a purpose.  Not overly stringent.  Not overly relaxed.

As always, BALANCE REQUIRES A BALANCING ACT.
