
We’re Unprepared for the A.I. Gold Rush

March 3, 2023

I think I know why artificial intelligence is breaking our all-too-human brains. It's coming at us too fast. We don't understand what's happening inside the black boxes of A.I., and what we don't understand, we understandably fear. Ordinarily we count on lawmakers and regulators to look out for our interests, but they can't keep up with the rapid advances in A.I., either.

Even the scientists, engineers and coders at the frontiers of A.I. research appear to be improvising. Early last month, Brad Smith, the vice chair and president of Microsoft, wrote a blog post describing the surprise of company leaders and responsible A.I. experts last summer when they got their hands on a version of what the world now knows as ChatGPT. They realized that "A.I. developments we had expected around 2033 would arrive in 2023 instead," he wrote.

There are two potential reactions. One is to slam on the brakes before artificial intelligence subverts national security using deepfakes, persuades us to abandon our spouses, or sucks up all the resources of the universe to make, say, paper clips (a scenario some people actually worry about). The opposite reaction is to encourage the developers to forge ahead, dealing with problems as they arise.

Adam Thierer, an innovation and technology policy analyst at the free-market R Street Institute, labels the cautious approach as "anticipatory ethics" and the less cautious one as "evasive entrepreneurism." He leans toward the latter camp, which sometimes goes by the slogan, "Better to seek forgiveness than permission."

"By acting as entrepreneurs in the political arena, innovators expand opportunities for themselves and for the public more generally, which would not have been likely if they had done things by the book," Thierer wrote last week in a Medium post, quoting from his own 2020 book, "Evasive Entrepreneurs and the Future of Government."

I sympathize with Thierer to an extent. I have my doubts about overly cautious approaches such as the precautionary principle, a concept in European Union law that says that "if it is possible that a given policy or action might cause harm to the public or the environment and if there is still no scientific agreement on the issue, the policy or action in question should not be carried out." Cass Sunstein of Harvard Law School wrote in 2008 that the precautionary principle is "deeply incoherent" because precautions themselves create risks "and hence the principle bans what it simultaneously requires." Precautions against A.I. could prevent advances that save lives through better agriculture, medicine and, some day, driverless cars.

On the other hand, when things are changing as fast as they are, I don't feel comfortable with Silicon Valley bros telling us to mind our own business while they do their A.I. thing (and potentially reap millions or billions of dollars in the process). It's OK to move fast and break things, but it's not OK to move fast and possibly break the world.

Moving slowly to allay critics' fears was an easy choice when A.I. was only marginally profitable. It's getting harder now that real money is at stake. The Washington Post reported last month that some people at Meta, the parent of Facebook, aren't happy that its BlenderBot chatbot, released last summer, was constrained to stay away from controversy. "The reason it was boring was because it was made safe," said Yann LeCun, Meta's chief artificial intelligence scientist. ChatGPT is more fun, albeit also more bizarre. Some Meta employees have urged the company to speed up approvals to take advantage of the latest technology, one employee told The Post.

The Times reported last month that Alphabet's Google unit created a "Green Lane" program to fast-track the review of A.I. initiatives for fairness and ethics.

The new rush to market could backfire if it's overdone. If regulators believe that practitioners are getting reckless, they will be more likely to react with draconian regulation. If instead the practitioners prove they're prudent, regulators can relax a bit, and everyone will be better off.

So far, regulators and lawmakers have mostly steered a middle course. The European Commission's draft Artificial Intelligence Act leans away from a strict use of the precautionary principle by seeking to regulate specific uses of A.I. rather than the technology itself. Government-run social scoring, in which a government gives demerits to citizens for bad behavior, would be banned. Using A.I. rather than human beings to score job-seekers' résumés would be regulated. The Europeans are still debating whether to ban or regulate emotion recognition, in which a computer reads a person's nonverbal signals for commercial or other purposes, according to a report on the Euractiv website.

In the United States, the National Institute of Standards and Technology has issued a dry but thorough Risk Management Framework for A.I. that many companies, including Google and Amazon Web Services, have signed on to. It says A.I. should be valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced and fair, with harmful bias removed.

The White House's Office of Science and Technology Policy came out last year with a more pointed blueprint for an A.I. Bill of Rights that, while nonbinding, contains some intriguing concepts, such as, "You should be able to opt out from automated systems in favor of a human alternative, where appropriate."

In Congress, the House has an A.I. Caucus with members from both sides of the aisle, including several with tech skills. Jay Obernolte, a co-chair, is a Republican from California who is a video game developer and has a master's degree in artificial intelligence. Ted Lieu, a Democrat from California, wrote a Times guest essay last month saying he will introduce legislation for a nonpartisan commission to recommend "how to structure a federal agency to regulate A.I., what types of A.I. should be regulated and what standards should apply."

That's all to the good, but regulators and lawmakers will always be on the outside looking in. It's best if the creators of artificial intelligence take responsibility for doing things right in the first place. I give the makers of ChatGPT at OpenAI credit for acknowledging that ChatGPT can "generate outputs that are untruthful, toxic or reflect harmful sentiments" — and what's more, trying to do something about it. OpenAI's newer InstructGPT "uses human preferences as a reward signal to fine-tune our models," the company says. InstructGPT models "also make up facts less often, and show small decreases in toxic output generation," it says.

One risk is that the race to cash in on artificial intelligence will lead profit-minded practitioners to drop their scruples like excess baggage. Another, of course, is that quite apart from business, bad actors will weaponize A.I. Actually, that's already happening. Smith, the Microsoft president, wrote in his blog last month that the three leading A.I. research groups are OpenAI/Microsoft, Google's DeepMind and the Beijing Academy of Artificial Intelligence. Which means that regulating A.I. for the public good has to be an international project.

"New technologies unfortunately typically bring out both the best and worst in people," Smith wrote. It will take an all-out effort to beat back the worst and bring forth the best.

Elsewhere: Consumer Borrowing Rises

Strong consumer spending has helped keep the economy aloft, but new data from the Federal Reserve Bank of New York's Household Debt and Credit Report raises questions about whether consumers are about to be tapped out.

For a while, Americans paid down their credit cards with money from the Covid-19 stimulus, but their balances have risen strongly in recent quarters. Balances on home equity lines of credit also ticked up in the last half of 2022, though less strongly. "If households were swimming in ample savings, why would they tap revolving credit at the fastest pace in 18 years?" David Rosenberg, the founder and president of Toronto-based Rosenberg Research, wrote in a note to clients on Tuesday. "The most likely explanation is that the financial health of households isn't as rosy as depicted in the media, and Americans are trying to offset their declining real disposable incomes by resorting to debt."

Quote of the Day

"At no time will we argue that your risk in the stock market goes away or even diminishes over time. This popular and rather seductive belief is a fabrication based on misconception, illusion and confusion. Stocks generally have higher returns than lower-risk investments. The premium compensates you for taking risk. There is nothing that magically happens over time to remove the risk from the picture."

— Zvi Bodie and Rachelle Taqqu, "Risk Less and Prosper: Your Guide to Safer Investing" (2011)