November 22, 2024 | V. "Juggy" Jagannathan
Led by Stanford and fashioned after the Federalist Papers of the 18th century, a dozen articles featuring thinkers from diverse disciplines prognosticate on an artificial intelligence (AI) future. In my last blog I highlighted six of these essays. All of the essays have an audio version, and almost all run about half an hour (the time it takes to vocalize a 4,000-word essay). Let’s look at the next six.
John Cochrane makes a full-throated plea not to regulate AI. The prompt he would like to issue to a fictitious, more powerful ChatGPT says it all: “ChatGPT-5, please write 4,000 words on AI regulation, society and democracy, in the voice of the ‘Grumpy Economist’…” His argument: you cannot sensibly regulate something until you understand the risks and rewards of using it. Seatbelt regulation was enacted only after the dangers of auto crashes made the need clear. AI technology is still in a nascent state, and we do not yet have a full grasp of its true potential or the dangers it actually creates. Instead of regulating, let competition thrive – that will correct problems as they emerge. Regulations and regulators can wait.
The next article assesses the impact of AI on democracy. It invokes Yogi Berra’s famous quote: “It’s difficult to make predictions, especially about the future.” AI’s impact on democracy is different from that of social media – better in some ways (it can identify disinformation), worse in others (it can create disinformation). The author advocates vigilance, competition and proactive measures in dealing with the ever-changing landscape of AI evolution.
This article focuses expressly on generative AI capabilities – particularly their power to shape public opinion. Google search implicitly supports a “user sovereignty model” – it basically serves the user’s intent. But gen AI with guardrails may turn into a “public safety and social justice model” – one where the AI company implements guardrails whose intent and focus are not transparent. It is one thing to refuse to answer how to make a bomb and quite another to suppress a particular point of view – be it liberal or conservative. Political power concentrated in a few tech giants can be dangerous. The antidote? Competition.
The authors – “one Muslim, one Christian and one Jew …” – did not walk into a bar, but brought their collective and diverse views on AI and democracy to this article. They bucket tech-inspired ideologies into two categories: one that allows a centralized AI and AGI (artificial general intelligence) to gain supremacy over people, dubbed “technocracy”; the other, inspired by cryptocurrencies and the metaverse, that promotes a decentralized ascendancy, dubbed “libertarianism.” Both models fundamentally ignore the humanity and agency vested in people and nature. As mentioned in the prior article, Taiwan showcased a successful use of digital democracy. Their thesis: any tech that ignores humanity and nature is bound to lead to a dystopian future.
The authors draw a comparison between GPS and gen AI. Before GPS, people struggled with paper maps to navigate. Though GPS technology was available to the military by the 1980s, its full precision was withheld from civilian use because the Department of Defense (DoD) considered it a security risk. That restriction was removed in 2000, and the technology was made freely available to the private sector. Now turn-by-turn navigation is ubiquitous – we cannot live without it.
Large language models (LLMs) serve a similar function in the informational sphere. They enhance everyone’s agency, helping people upskill and learn new things by interacting with a chatbot. They help in navigating the informational landscape! The technology can certainly be abused, but positive uses abound: how to apply to college, how to interpret the term “rent arrearage,” how to understand your clinical conditions. The list is endless. It restores agency to individuals.
In this concluding essay, the author, James Manyika, addresses the following proposition: “It’s the year 2050. AI has turned out to be hugely beneficial to society and generally acknowledged as such. What happened?” He is essentially asking what must have happened to harness the power of AI to benefit society while mitigating its negative impacts. Five categories of problems and risks are discussed, ranging from building AI that is trustworthy, human-assistive and socially non-disruptive (not causing massive unemployment) to tech that is aligned with societal values.
The Digitalist Papers is an interesting collection of essays and particularly germane at this point in the evolution of AI. A recurring theme is to let competition thrive and to neither over-regulate nor under-regulate. How exactly we will arrive at a Goldilocks formulation of regulation is anybody’s guess.
“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.