
Led by Stanford and fashioned after the Federalist Papers of the 18th century, the Digitalist Papers gather a dozen articles in which thinkers from diverse disciplines prognosticate on an artificial intelligence (AI) future. The collection sits at the intersection of technology and democracy – quite apropos to discuss at this pivotal transition point our nation is facing. It is labeled “Volume 1,” hinting that a second volume is being contemplated. Let’s dive in and see what they are sharing now.

“Protected Democracy” — Lawrence Lessig

Lawrence’s thesis is that we currently live in an “unprotected democracy.” This is a democracy in which a confluence of factors – the corrupting influence of money in politics, a polarized media preaching to echo chambers, and AI increasingly used to fuel engagement rather than discourse – amounts to a prescription for dystopia. What is a possible solution? Create a “random and representative sample, a protected assembly.” What is this protected assembly? A set of people chosen, much like a jury for a case, to deliberate on specific policy positions in a fair and unbiased manner. Their recommendations can then be put forth to the people for adoption. Lawrence views this as the antidote to AI-driven influence in politics and to polarization.

“A Vision of Democratic AI” — Divya Siddarth, Saffron Huang and Audrey Tang

The authors start with the premise that “people are fundamentally educable … able to rationally and competently govern themselves.” Their approach to engaging people: alignment assemblies. These are digital gatherings of people convened to articulate views on AI, which are then used upstream to shape LLM behavior and development. The authors discuss several successful use cases of such assemblies; I highlight a few below.

Are you still able to navigate to a new place without GPS? We have become dependent on that technology. Is AI headed into the same category? The authors used an “alignment assembly” to come up with a participatory AI risk prioritization, which highlighted “overreliance on AI” as a high-priority item to address.

Another use case involved working with Anthropic to shape its LLM’s behavior. Anthropic had famously developed “Constitutional AI”; input from an alignment assembly was used to improve the model’s output.

Perhaps the most remarkable use case was attempted in Taiwan, where an “alignment assembly” was used to let people collectively imagine a digital future.


“Rediscovering the pleasures of pluralism: The potential of digitally mediated civic participation” — Lily L. Tsai and Alex Pentland

We live in a world of acute polarization, and social media platforms currently aggravate the situation further. The authors explore the question, “How can one develop a platform that promotes civil discourse and incorporates all points of view?” They detail an interesting experiment in Romania – “AI-enabled public engagement in the school of possibilities.”

The idea in a nutshell: how to engage students in a policy debate on school reform. The solution? An AI bot that talks to each student individually. Each conversation is summarized, aggregated, clustered and presented on a global blackboard visible to all. The student can choose to engage extensively, just briefly or not at all. It is an interesting experiment, and the authors go over what it would take to implement such an idea to promote civil discourse across the political spectrum. A rough sketch of what such a summarize-and-cluster pipeline could look like appears below.
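To make the mechanics concrete, here is a minimal sketch of the clustering step, assuming a standard embed-and-cluster approach. The libraries, the sample summaries and the cluster count are my own illustrative choices, not details from the essay.

```python
# Hypothetical sketch: group summarized student opinions into shared themes
# for a "global blackboard" view. Library choices (sentence-transformers,
# scikit-learn) are assumptions, not the system the authors describe.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Summaries an AI bot might have produced from one-on-one student conversations
summaries = [
    "Wants longer lunch breaks and more club time",
    "Prefers project-based grading over exams",
    "Asks for later school start times",
    "Suggests more hands-on science labs",
]

# Embed each summary, then cluster similar opinions into themes
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(summaries)
kmeans = KMeans(n_clusters=2, random_state=0).fit(embeddings)

# Print each theme with its member summaries -- the "blackboard" view
for cluster_id in range(kmeans.n_clusters):
    print(f"Theme {cluster_id}:")
    for text, label in zip(summaries, kmeans.labels_):
        if label == cluster_id:
            print(f"  - {text}")
```

In a real deployment the summaries themselves would come from the AI bot’s one-on-one conversations, and the theme labels would likely be generated by an LLM rather than shown as raw cluster IDs.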

“The Potential for AI to Restore Local Community Connectedness, The Bedrock of a Healthy Democracy” — Sarah Friar and Laura Bisesto

The authors seek to correct the “epidemic of loneliness” caused by excessive use of social media and disengagement from local communities. Being connected to neighbors at the local level has many benefits, including forming the bedrock of a healthy democracy. The authors discuss NextDoor, an AI-enabled platform focused on connecting local communities and promoting healthy conversations; it uses ChatGPT to help rewrite posts so they are respectful and engaging within communities. A hedged sketch of what such a rewrite assistant might look like follows.
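The sketch below shows one way a post-rewrite assistant could be wired up with the OpenAI chat API. It is an illustration only, not NextDoor’s actual integration; the model name and prompt wording are assumptions.

```python
# Illustrative sketch only -- not NextDoor's actual integration. It shows the
# general pattern of asking a ChatGPT-style model to suggest a more respectful
# rewrite of a neighborhood post.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_kinder_rewrite(post: str) -> str:
    """Return a suggested rewrite that keeps the post's meaning but softens the tone."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for illustration
        messages=[
            {"role": "system",
             "content": "Rewrite neighborhood posts to be respectful and "
                        "constructive while preserving the author's intent."},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

print(suggest_kinder_rewrite(
    "Whoever keeps parking in front of my driveway needs to stop NOW."))
```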

“AI Meets the Cascade of Rigidity” — Jennifer Pahlka

The author sounds a cautionary note on how government deals with AI – primarily within the government itself. Being totally risk averse can lead to underutilization of AI. Building AI competency and allowing decision making at the worker level are needed for more effective use of AI within government.

“Democracy 2.0” — Eric Schmidt

Eric Schmidt, former CEO of Google, talks about “algocracy” – rule by algorithms. His point is not that we should lose agency, but that we should use AI as a true assistant in decision making. It can help judges, policymakers and legislators with background research and analysis. It can even help executive leadership with crisis handling in the situation room. The key is judicious use of AI to strengthen democratic principles while working to engender trust in the decision-making process. Getting people to trust AI is going to be an uphill battle.

Concluding thoughts

The Digitalist Papers is an interesting collection of essays. I will cover the next six essays in the series in my next blog.

 

“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.