I thought you all might be interested in my little contribution to a colloquy in Reason magazine's May 2024 issue. You can find the whole thing here:
https://web.archive.org/web/20240506084929/https://reason.com/2024/05/05/ai-is-like/

AI Is Like the Dawn of Modern Medicine

By Mike Godwin

When I think about the emergence of "artificial intelligence," I keep coming back to the beginnings of modern medicine.

Today's professionalized practice of medicine was roughly born in the earliest decades of the 19th century—a time when the production of more scientific studies of medicine and disease was beginning to accelerate (and propagate, thanks to the printing press). Doctors and their patients took these advances to be harbingers of hope. But it's no accident this acceleration kicked in right about the same time that Mary Wollstonecraft Shelley (née Godwin, no relation) penned her first draft of Frankenstein; or, The Modern Prometheus—planting the first seed of modern science-fictional horror.

Shelley knew what Luigi Galvani and Joseph Lister believed they knew: that there was some kind of parallel (or maybe a connection!) between electric current and muscular contraction. She also knew that many would-be physicians and scientists learned their anatomy by dissecting human corpses, often acquired in sketchy ways.

She also likely knew that some would-be doctors had even fewer moral scruples and fewer ideals than her creation Victor Frankenstein. Anyone who studied the early 19th-century marketplace for medical services could see there were as many quacktitioners and snake-oil salesmen as there were serious health professionals. It was definitely a "free market"—in the sense that it lacked regulation—but one largely untouched by James Surowiecki's "wisdom of crowds."

Even the most principled physicians knew they were often competing with charlatans who did more harm than good, and that patients rarely had the knowledge base to judge between good doctors and bad ones. As medical science advanced in the 19th century, physicians also called for medical students at universities to study chemistry and physics as well as physiology.

In addition, the physicians' professional societies, both in Europe and in the United States, began to promulgate the first modern medical-ethics codes—not grounded in half-remembered quotes from Hippocrates, but rigorously worked out by modern doctors who knew that their mastery of medicine would always be a moving target. That's why medical ethics were constructed to provide fixed reference points, even as medical knowledge and practice continued to evolve. This ethical framework was rooted in four principles: "autonomy" (respecting patients' rights, including self-determination and privacy, and requiring patients' informed consent to treatment), "beneficence" (leaving the patient healthier if at all possible), "non-maleficence" ("doing no harm"), and "justice" (treating every patient fairly and equitably).

These days, most of us have some sense of medical ethics, but we're not there yet with so-called "artificial intelligence"—we don't even have a marketplace that sorts high-quality AI work products from statistically driven confabulation, the "hallucination" of seemingly (but not actually) reliable content. Generative AI with access to the internet also seems to pose other risks, ranging from privacy invasions to copyright infringements.

What we need right now is a consensus about what ethical AI practice looks like. "First, do no harm" is a good place to start, along with values such as autonomy, human privacy, and equity. A society informed by a layman-friendly AI code of ethics, and by an AI profession with an earned reputation for ethical practice, can then decide whether—and how—to regulate.

Mike Godwin is a technology policy lawyer in Washington, D.C.