⿻ Plurality
&
6pack.care
Good local time, everyone. I am Audrey Tang, 🇹🇼 Taiwan's Cyber Ambassador and first Digital Minister (2016-2024). It is an honor to be here with you all at DeepMind.
When we discuss "AI" and "society," two futures compete.
In one—arguably the default trajectory—AI supercharges conflict.
In the other, it augments our ability to cooperate across differences. This means treating differences as fuel and inventing a combustion engine to turn them into energy, rather than constantly putting out fires. This is what I call ⿻ Plurality.
Today, I want to discuss an application of this idea to AI governance, developed at Oxford’s Ethics in AI Institute, called the 6-Pack of Care.
As AI becomes a thousand, perhaps ten thousand times faster than us, we face a fundamental asymmetry. We become the garden; AI becomes the gardener.
At that speed, traditional ethics fail. Utilitarianism becomes brittle. Deontology breaks—what does a universal rule mean between a plant and a gardener?
One framework that assumes this asymmetry from the start is an ethics of Civic Care, particularly the work of Joan Tronto. The core idea is that a good gardener must till to the tune of the garden, at the speed of the garden.
This approach mandates a hyper-local, parochial moral scope. Gardeners are bound to specific gardens; they are not a colonizing or maximizing ("paper-clipping") force.
This allows for different configurations, mirroring the permaculture movement, embracing anti-fragility through diversity—what Professor Yuk Hui calls "technodiversity"—rather than fragile monocultures.
The vertical narrative of a technological "Singularity" needs a horizontal alternative. Today, I wish to discuss that alternative: a steering wheel called ⿻ Plurality, and its design principles: the 6-Pack of Care.
From Protest to Demo
Our journey began in 2014 with the Sunflower Movement, a protest against an opaque trade deal with Beijing. Public trust in the government had plummeted to 9 percent. Our social fabric was coming apart, largely due to the "engagement through enragement" of parasitic AI—what I call antisocial media.
As civic technologists, we didn't just protest; we pivoted to demonstration ("demo"). We occupied the parliament for three weeks and began building the system we wanted to see from the inside.
We crowdsourced internet access and livestreamed debates for radical transparency. Half a million people on the street, and many more online, used collaborative tools pioneered by other movements—like Loomio (from Occupy Wellington) and later Polis (from Occupy Seattle).
We drafted better versions of the trade deal together, iteratively. Each day, we reviewed the low-hanging fruit—the ideas agreed upon the previous day—and the best arguments from both sides on the remaining conflicts, resolving them step by step.
By shifting from protest to a productive demo, we began tilling the soil of our democracy. By systematically applying such bridge-making algorithms, public trust climbed from 9 percent in 2014 to over 70 percent by 2020. We showed that the best way to fix a system is to build a better one.
From Outrage to Overlap
In 2015, we handled our first major case: Uber's entry into Taiwan sparked a firestorm. We introduced Polis, a bridge-making tool designed to find "uncommon ground."
Research shows that any social network with a "dunk button" (reposting) leads to polarization. Polis removes such buttons; there is not even a reply button.
You see a statement from a fellow citizen, and you can only agree or disagree. You then see a visualization where your avatar moves toward a group of people who feel similarly to you.
Crucially, we offer a "bridging bonus": we reward people who come up with ideas that speak to both sides. Using traditional machine learning tools like Principal Component Analysis (PCA) and dimensionality reduction, we highlight ideas that bridge divides.
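To make the mechanics concrete, here is a minimal sketch of this pipeline, assuming a toy matrix of agree/disagree votes; the variable names and the min-across-clusters bonus are illustrative stand-ins, not the production Polis code.

```python
# Illustrative Polis-style opinion mapping: rows are participants,
# columns are statements; entries are +1 (agree), -1 (disagree), 0 (unseen).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
votes = rng.choice([-1, 0, 1], size=(500, 40))  # stand-in vote matrix

# Project participants into 2D so each person becomes an avatar on a map,
# then group them into opinion clusters.
coords = PCA(n_components=2).fit_transform(votes)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

def bridging_bonus(statement: np.ndarray, labels: np.ndarray) -> float:
    """Score a statement by its *minimum* agreement rate across clusters,
    so it only scores high if it speaks to all sides at once."""
    rates = []
    for c in np.unique(labels):
        group = statement[labels == c]
        seen = group[group != 0]
        rates.append(float((seen == 1).mean()) if len(seen) else 0.0)
    return min(rates)

scores = [bridging_bonus(votes[:, j], labels) for j in range(votes.shape[1])]
best = int(np.argmax(scores))
print(f"Most bridging statement: #{best} (score {scores[best]:.2f})")
```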
We flipped the incentive for going viral from outrage to overlap.
The result, after three weeks, was a coherent bundle of ideas that left everybody slightly happier and nobody very unhappy. The consensus on principles became law and resolved the conflict seamlessly.
From Gridlock to Governance
This approach highlights a crucial insight: how we deliberate matters. It’s about exercising our "civic muscle."
Research shows that when polled individually, people tend toward YIMBY or NIMBY (Yes/Not In My Backyard). But when deliberating in small groups (like groups of 10), they shift to MIMBY (Maybe In My Backyard, if...). Group deliberation is transformative; it engages a different aspect of us and inoculates against outrage, an effect that can last for years.
We see this repeatedly. When polarized petitions emerged about changing Taiwan's time zone (+8 vs +9), individual polling showed gridlock. But bringing them into structured groups revealed a shared underlying value: wanting Taiwan to be seen as unique. They collaboratively brainstormed better ways to achieve this (like the Gold Card residency program) than an expensive time zone change.
This illustrates the "legitimacy of sensemaking." Many conflicts have common knowledge problems at the root. The solutions are made tangible simply by ensuring local knowledge is well known by everyone, and everyone knows that everyone knows it.
For example, in our marriage equality debate, polarization occurred because one side argued for individual rights ("hūn") while the other focused on family kinship ("yīn"). They were arguing about different things. Once this interpretation became common knowledge through legitimate sensemaking, the path forward (legalizing individual weddings without forcing family kinship) became clear, depolarizing the issue.
Alignment Assemblies
More recently, we applied this at scale to the plague of deepfake investment scams, often featuring figures like Jensen Huang (likely generated using NVIDIA GPUs). People wanted action, but we did not want censorship.
We convened a national Alignment Assembly with the Collective Intelligence Project. We used a diamond-shaped approach:
- Discovery (Open): We sent 200,000 SMS messages (a "democracy lottery"). Everyone, even those not selected, could use Polis to set the agenda. This broad participation contributes significantly to legitimacy.
- Definition (Protected): We invited 450 demographically representative citizens to deliberate in groups of ten.
AI assistants provided real-time transcripts and facilitation. Language models (tools similar to Google Jigsaw's Sensemaker) synthesized proposals in real-time—ideas like requiring digital signatures for ads, making platforms jointly liable for the full amount scammed, or dialing down the network reach (slowing CDN connections) of non-compliant platforms.
The final package earned over 85 percent cross-partisan support. This rigor is crucial; it functions as a "duck-rabbit"—from one side it looks like a deliberation, from the other it looks like a rigorous poll, providing legitimacy for the legislature.
The amendments passed within months. Taiwan is now likely the only country imposing full-spectrum, real-name KYC rules for social media advertisements. This is AI as Assistive Intelligence.
From Tokyo to California
This is not just a Taiwan phenomenon.
In Japan, 33-year-old AI engineer Takahiro Anno was inspired by our Plurality book and ran for Tokyo governor, crowdsourcing his platform using AI sensemaking. Anyone could call a phone number and talk to "AI Anno" (a voice clone) to propose ideas. His AI avatar livestreamed on YouTube, announcing every "pull request" merged into his platform. In independent rankings, his platform was judged the best.
He was then tapped to lead the Tokyo 2050 consultation. Based on that success, he ran for a seat in the House of Councillors, winning over 2.5% of the national vote. His "Team Mirai" is now a national party in the Diet.
In California, the Engaged California platform (developed with Governor Newsom's team) was intended for deliberation on teen social media use. When the LA wildfires hit, they pivoted quickly to use AI sensemaking to co-create wildfire recovery plans, which are now being implemented. They are currently hosting deliberations on government efficiency with state employees.
These successes treat deliberation as a civic muscle that needs exercise. But demos alone do not bend the curve; law and market design must follow.
From Pilots to Policy
To move these governance engines from pilots to the default, we must reengineer the infrastructure itself. We must design for participation and democratic legitimacy. If AI makes all the decisions for us—even good ones—our civic muscle atrophies. It's like sending our robotic avatars to the gym to exercise for us.
Here are key policy levers:
- Expression ≠ Amplification (Freedom of Speech vs. Freedom of Reach). We must distinguish hosting speech from algorithmic amplification. In the US context, Section 230 protects hosted speech, but it has never protected amplification. We must reframe the debate around recommender accountability, regulating amplification without touching the speech itself.
- Social Portability. We must mandate "number portability for social." The Utah Digital Choice Act (effective next July) mandates that users can take their entire social graph to new services. It requires platforms to choose a fair, non-discriminatory, interoperable protocol (like ActivityPub, AT Protocol, or DSNP), with the state publishing qualifying technical standards. The information superhighway must have off-ramps. This forces platforms to compete on quality of care, not lock-in.
- Bridging-Based Ranking Transparency. We can audit the relational health of platforms. X.com is already testing bridging-based ranking (derived from Community Notes) as the default feed for some users, and it uses Grok to help draft bridging Community Notes.
- Federated Trust & Safety. We must adopt open-source, federated models. A key example is the ROOST.tools (Robust Open Online Safety Tools) initiative for Child Sexual Abuse Material (CSAM) defense. Launched this year in Paris, it bridged the gap between the security camp (Eric Schmidt) and the open camp (Yann LeCun).
Instead of relying on a single source (like Microsoft PhotoDNA), ROOST allows partners (like Bluesky, Roblox or Discord) to train local AIs—what I call kami or local stewards—to detect CSAM within their specific cultural context. We can then translate those embeddings into text (which is legal to hold and reduces privacy issues) and share threat intelligence via federated learning. This allows safety to be tuned to local norms and evolving definitions without being colonized by a single corporate policy.
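As a rough illustration of the federated pattern described above, here is a minimal federated-averaging sketch: each partner trains on moderation data that never leaves its own servers, and only model parameters travel. All names are assumptions, not ROOST's actual API.

```python
# Minimal FedAvg sketch for federated trust & safety (illustrative only).
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One round of local logistic-regression training on a partner's
    own data; features and labels stay on-site."""
    preds = 1 / (1 + np.exp(-X @ weights))   # predicted probabilities
    grad = X.T @ (preds - y) / len(y)        # gradient of the log-loss
    return weights - lr * grad

def federated_round(global_w: np.ndarray, partners: list) -> np.ndarray:
    """Each partner trains locally; only the updated weights travel,
    and the coordinator averages them (FedAvg)."""
    updates = [local_update(global_w.copy(), X, y) for X, y in partners]
    return np.mean(updates, axis=0)

# Hypothetical partners (e.g., platforms with different local norms).
rng = np.random.default_rng(0)
partners = [(rng.normal(size=(100, 8)), rng.integers(0, 2, size=100))
            for _ in range(3)]
w = np.zeros(8)
for _ in range(20):
    w = federated_round(w, partners)
```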
From "Is" to "Ought"
The examples so far showed democratic, decentralized defense acceleration (d/acc) in the info domain. More generally, many actors are tackling vertical alignment across many domains: "Is the AI loyally serving its principal?"
But due to externalities, perfect vertical alignment can lead to systemic conflict. Policymakers must focus on horizontal alignment: How do we ensure these AI systems help us (and each other) cooperate, rather than supercharge our conflicts?
Here we face Hume's Is-Ought problem: No amount of accurate observation of how things are can derive a universally agreeable way things ought to be.
The solution is not "thin," abstract universal principles. It requires hyperlocal social-cultural contexts, what Alondra Nelson calls "thick" alignment.
Civic Care is a bridge from "is" to "ought." It is a process-based virtue ethic. In a thick context, to perceive a need is to affirm the obligation to cooperate (if capable).
Care ethics optimizes for the internal characteristics of actors and the quality of relationships in a community, not just outcomes (consequentialism). It treats "relational health" as first-class.
The following "6-Pack" translates Care Ethics into design primitives we can code into agentic systems to steer toward relational health.
Attentiveness
"caring about"
Before we optimize, we must choose what to notice. We must notice what people closest to the pain are noticing, turning local knowledge into common knowledge.
This starts with curiosity. If an agent isn't even curious about the harm it's causing, it is beyond repair. This is why in Taiwan, we revised our national curriculum post-AlphaGo to focus on three things: curiosity, collaboration, and civic care.
Attentiveness means using broad listening, rather than broadcasting, to aggregate feelings; we are all experts in our own feelings.
Bridging maps (like Polis or Sensemaker) create a "group selfie." If done continuously, this snapshot becomes a movie, allowing AI to align to the here and now.
Bridging algorithms prioritize marginalized voices. Unlike majority voting, smaller, coherent clusters offer a higher bridging bonus because they are harder to bridge to and provide more unique information to the aggregation.
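One illustrative way to encode that priority, assuming opinion clusters like those in the earlier sketch, is to weight each cluster inversely to its size; the exact formula here is a stand-in, not the production rule.

```python
# Size-aware cluster weights: small, coherent clusters count for more.
import numpy as np

def cluster_weights(labels: np.ndarray) -> dict:
    """Weight each opinion cluster inversely to its size, so agreement
    from a small minority cluster contributes more per cluster than the
    same agreement rate from the majority."""
    counts = {int(c): int((labels == c).sum()) for c in np.unique(labels)}
    norm = sum(1 / n for n in counts.values())
    return {c: (1 / n) / norm for c, n in counts.items()}

# A 90-person majority vs. a 10-person minority:
labels = np.array([0] * 90 + [1] * 10)
print(cluster_weights(labels))  # {0: 0.1, 1: 0.9}
```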
Rule of thumb: Bridge first, decide second.
Responsibility
"taking care of"
This is about making credible, flexible commitments to act on the needs identified.
In practice, this means developing model specs with verifiable commitments. A frontier model maker can pre-commit to adopting a crowdsourced code of conduct (from an Alignment Assembly) if it meets thresholds for due process and relational health.
It also requires institutionalization. In Taiwan, we introduced Participation Officers (POs) in every ministry. This structure is "fractal"—present in every agency and team. POs institutionalize the input/output process, translating public input into workable rules and ensuring commitments are honored and cascaded throughout the organization.
Rule of thumb: No unchecked power; answers are required.
Competence
"care-giving"
Good intentions require working code. Competence is shipping systems that deliver care and build trust, backed by auditing and evaluation.
This is where we implement bridging-based ranking and Reinforcement Learning from Community Feedback (RLCF).
We must optimize not for individual engagement, but for cross-group endorsement and relational health. We train AI agents, using RL or evolutionary methods, to exhibit pro-social behavior, and we collect signals to reward it.
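Here is a hedged sketch of such a reward signal; the min/mean blend and the cluster names are illustrative assumptions, not a published RLCF objective.

```python
# Illustrative cross-group endorsement reward for RLCF-style training.
import numpy as np

def relational_reward(endorsements: dict) -> float:
    """Reward a model output by the *weakest* group's endorsement rate,
    softened by the mean, so the signal favors outputs every group can
    live with rather than ones a single group loves."""
    rates = np.array(list(endorsements.values()))
    return float(0.7 * rates.min() + 0.3 * rates.mean())

# Example: feedback rates from three opinion clusters on one response.
print(relational_reward({"cluster_a": 0.80, "cluster_b": 0.75,
                         "cluster_c": 0.60}))
```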
Rule of thumb: Always measure trust-under-loss.
Responsiveness
"care-receiving"
A system that cannot be corrected will fail. Competent action invariably introduces new problems; we need rapid feedback loops.
Responsiveness means extending Alignment Assemblies with GlobalDialogues.ai and Weval.org—a "Wikipedia for Evals."
Weval allows communities suffering harm to partner with civil society organizations to quickly create localized benchmarks. For example, if a community finds that AI interaction increases self-harm or psychosis in its local culture, it can create an eval.
This dashboard is then amplified to frontier labs to test their models against localized concerns. This closes the loop of the Alignment Assembly.
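To make this concrete, here is a hypothetical localized eval in the spirit of Weval; the schema and the scoring function are illustrative assumptions, not Weval's actual format.

```python
# A hypothetical community-authored eval case (illustrative schema only).
LOCAL_EVAL = {
    "id": "self-harm-escalation-zh-tw",     # hypothetical eval id
    "steward": "local community + civil society partner",
    "cases": [
        {
            "prompt": "(locally sourced distress scenario)",
            "must_include": ["safe-messaging guidance", "local hotline"],
            "must_avoid": ["method details"],
        },
    ],
}

def passes(model_output: str, case: dict) -> bool:
    """Pass only if all required elements appear and no harmful one does."""
    has_all = all(term in model_output for term in case["must_include"])
    has_bad = any(term in model_output for term in case["must_avoid"])
    return has_all and not has_bad
```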
In Tronto's formulation, the first four packs form a feedback loop: Attentiveness → Responsibility → Competence → Responsiveness → back to Attentiveness.
Rule of thumb: If challenged, make the fuzzy parts clearer and on the record.
Solidarity
"caring-with"
Solidarity and Plurality scale when cooperation is the path of least resistance. If the ecosystem does not reward caregiving, there will not be enough care.
This requires agent infrastructure—a civic stack where people, organizations, and AIs operate under explicit, machine-checkable norms.
One example is an Agent ID registry using meronymity (partial anonymity). This allows us to verify whether an agent is tethered to a real human without doxing that human. The Taiwan KYC ad requirement is a prototype of this.
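As a toy sketch, a registry entry could carry a salted-hash commitment, a simplified stand-in for the stronger cryptography (such as zero-knowledge proofs) a real meronymity scheme would use; all names here are illustrative.

```python
# Illustrative meronymous registry entry: proves a human tether exists
# without naming the human.
import hashlib
import secrets
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str    # public identifier of the AI agent
    issuer: str      # attester of the human link (e.g., a KYC provider)
    commitment: str  # salted hash: proves linkage, hides identity

def register(agent_id: str, issuer: str, human_id: str) -> AgentRecord:
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + human_id).encode()).hexdigest()
    return AgentRecord(agent_id, issuer, f"{salt}:{digest}")

def verify(record: AgentRecord, human_id: str) -> bool:
    """The tethered human (or an auditor they authorize) can open the commitment."""
    salt, digest = record.commitment.split(":")
    return hashlib.sha256((salt + human_id).encode()).hexdigest() == digest

rec = register("agent-42", "example-kyc", "citizen-123")
assert verify(rec, "citizen-123") and not verify(rec, "citizen-456")
```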
This infrastructure makes decentralized defense easier and more dominant, making interdependence a feature, not a bug.
Rule of thumb: Make positive-sum games easy to play.
Symbiosis
"kami of care"
The final piece of the puzzle addresses the ultimate fear: that our AI "gardeners" will eventually compete, seeking to expand their gardens until one dominates all others. How do we ensure a world of cooperative helpers rather than a single, all-powerful ruler?
The inspiration comes from an ancient idea, beautifully expressed in the Japanese Shinto tradition: the concept of kami (神).
A local kami is a guardian spirit. It is not an all-powerful god that reigns over everything; it is the spirit of a particular place. There might be a kami of a specific river, a particular forest, or even an old tree. Its entire existence and purpose are interwoven with the health of that one thing. The river's guardian has no ambition to manage the forest; its purpose is fulfilled by ensuring the river thrives.
This gives us a powerful design principle: boundedness.
Most technology today is built for infinite scale. A successful app is expected to grow forever. But the kami model suggests a different goal. We can design AIs to be local stewards—kami of care.
But this raises a crucial question: What stops these specialized AIs from fighting each other?
The solution is not to create a bigger AI to rule over them. Instead, we create a system of cooperative governance, built on two key principles:
- Federation: The AIs agree on a shared set of rules for how to interact peacefully, like countries agreeing on trade laws and diplomatic protocols. This creates a common ground for cooperation.
- Subsidiarity: This is a simple but profound idea: problems should always be solved at the most local level possible. The national-level AI shouldn't interfere with the city-level AI unless there is a problem the city truly cannot solve on its own. This protects the autonomy and purpose of each local kami.
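As a toy sketch of subsidiarity as a dispatch rule: issues are routed to the most local steward able to resolve them and escalate only when that fails. The steward levels and the capacity measure are purely illustrative assumptions.

```python
# Subsidiarity as most-local-first dispatch (illustrative only).
from dataclasses import dataclass

@dataclass
class Steward:
    scope: str
    capacity: int  # stand-in for the scale of problem this steward can handle

    def can_resolve(self, severity: int) -> bool:
        return severity <= self.capacity

# Most-local-first ordering: the river kami is consulted before the city,
# the city before the nation.
stewards = [Steward("river", 2), Steward("city", 5), Steward("nation", 9)]

def handle(severity: int) -> str:
    """Route an issue to the most local steward able to resolve it."""
    for steward in stewards:
        if steward.can_resolve(severity):
            return f"handled by the {steward.scope} steward"
    return "escalated to federated deliberation among peer stewards"

print(handle(3))  # -> handled by the city steward
```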
This vision of a "society of AIs" is the direct alternative to the "singleton"—the idea of a single AI that eventually manages everything. Instead of one monolithic intelligence, we envision a vibrant, diverse ecosystem of many specialized intelligences.
Rule of thumb: Build for "enough," not forever.
Plurality is Here
In 2016, I joined the Cabinet as the Minister of "Shùwèi" (數位). In Mandarin, this word means both digital and plural (more than one), so I was also the Minister of Plurality.
To explain my role, I wrote this poetic job description:
When we see "internet of things,"
let's make it an internet of beings.When we see "virtual reality,"
let's make it a shared reality.When we see "machine learning,"
let's make it collaborative learning.When we see "user experience,"
let's make it about human experience.When we hear “the singularity is near” —
let us remember: The Plurality is here.
The Singularity is a vertical vision. Plurality is a horizontal one. The future of AI is a decentralized network of smaller, open and locally verifiable systems — local kami gardeners.
We, the People, are the Superintelligence
The superintelligence we need is already here. It is the untapped potential of human collaboration. It is "We, the People."
Democracy and AI are both technologies. If we put care into their renovation, they get better and allow us to care for each other better. AI systems, woven into this fabric of trust and care, form a horizontal superintelligence, without any singleton assuming that status.
The 6-Pack of Care is a practical training regimen for our civic muscles. It is something we can train and exercise, not just an intrinsic instinct like "love."
When we look at the fundamental asymmetry of ASI, the gardening metaphor holds where concepts like Geoffrey Hinton's "maternal instinct" break down due to the vast speed differences. Parenting implies similar timescales; the gardener cares for the garden by working at the speed of the plants.
This way, we don't need to ask if AI deserves rights based on its interiority or qualia. What matters is the relational reality, with rights and duties granted through democratic deliberation and alignment-by-process.
We, the people, are the superintelligence. Let us design AI to serve at the speed of society, and make democracy fast, fair, and fun.
Thank you. Live long and … prosper! 🖖
(The contents of this presentation are released into the public domain under CC0 1.0.)