Publications
Digital Homunculi: Reimagining Democracy Research with Generative Agents
Petr Špecián's preprint "Digital Homunculi: Reimagining Democracy Research with Generative Agents" is available on arXiv.
The paper explores how artificial intelligence (AI) could transform democracy research. The gist of the argument is as follows:
Our democratic systems are struggling to keep up with rapid technological changes.
Testing new ideas for improving democracy is hard because experimenting with real people and institutions is slow, expensive, and risky.
The paper suggests using AI to create "digital homunculi" that can simulate how real people might behave in different social and political situations.
This AI-powered approach could allow researchers to quickly and safely test new ideas for democratic systems on a large scale.
While the method carries risks (such as AI bias), its benefits could be substantial if it is applied carefully.
The paper calls for experts from different fields to work together on developing and using these AI tools in democracy research.
The main idea is that by using AI to create realistic simulations, we might be able to find better ways to run our democracies much faster than we can today. This could help our democratic systems catch up with the rapid pace of technological change.
Machine Advisors: Integrating Large Language Models Into Democratic Assemblies
Petr Špecián's paper "Machine Advisors: Integrating Large Language Models Into Democratic Assemblies" was published in the top-tier interdisciplinary journal Social Epistemology (link). The paper examines the potential impact of using large language models (LLMs) as advisors in democratic decision-making processes. It explores how these AI systems might enhance the problem-solving capabilities of democratic assemblies by providing improved access to expertise.
The study acknowledges both the promise and challenges of this approach, discussing issues such as LLM hallucinations and value alignment. Petr argues that existing democratic procedures, like deliberation and voting, could serve as effective safeguards against both human and AI advisor shortcomings. The paper also proposes innovative solutions, such as customized LLM training and adversarial proceedings, to mitigate risks and maximize benefits.
Ultimately, the paper suggests that careful integration of LLMs into policymaking could significantly augment democracy's capacity to address complex social issues, potentially leading to more informed and higher-quality decisions in governance.
Give the machine a chance, human experts ain’t that great…
A mini-paper by Petr Špecián and Lucy Císař Brown was published in the journal AI & Society. The full text is available here.
Petr and Lucy argue that, despite their flaws, large language models (LLMs) deserve a fair chance to prove their mettle against human experts, who are often plagued with biases, conflicts of interest, and other frailties. For epistemically unprivileged laypeople struggling to access expert knowledge, the accessibility advantages of LLMs could prove crucial. While concerns about LLMs' inconsistencies and arguments for human superiority are often justified (for now), they may distract from the urgent need to prepare for the likely scenario of LLMs' continued ascent. Experimentation with both the capabilities and institutional architecture of LLMs is necessary. As LLMs are here to stay and they keep improving, it is high time we started thinking about how to navigate the impending wave of their proliferation.
A Case for Democracy’s Digital Playground
Petr Špecián has published an outline of his idea for using digital worlds to expedite institutional innovation on The Loop, a blog run by the European Consortium for Political Research (ECPR). You can read his essay here.