Reshaping media with AI: Public values at the core of our democracy
At CollectiveUP, we believe technology should empower democracy, not undermine it. That’s why our founder, Liliana Carrillo, joined global pioneers at the PublicSpaces Conference 2025 in Amsterdam for a critical discussion, “AI Warm-up: Reshaping Public Media in the age of AI”. Building on dialogues in Berlin and London, this panel explored how AI can serve the public good across borders, institutions, and identities. Here’s what we learned, shared, and discussed, and why it matters for Europe’s democratic future.
Public values in a digital age: Liliana’s vision
Liliana opened the panel by anchoring the conversation in European democratic values: co-creation, sustainability, and inclusion. She emphasized:
“Public values mean aligning with democracy itself—supporting youth participation, digital literacy, and democratizing knowledge. We must understand how AI algorithms shape decisions… or risk living with a ‘black box’ that erodes trust.”
For CollectiveUP, this means:
- Democratizing AI literacy so citizens grasp how tools like ChatGPT or Sora influence information.
- Prioritizing accountability for AI-driven misinformation that harms political, economic, and ecological systems.
- Championing open alternatives (like Mastodon) to resist centralized platforms where corporate interests override public needs.
The tensions: Whose values shape AI?
The panel dissected critical conflicts in Europe’s media landscape:
- Freedom vs. harm: Liliana reminded the audience that while AI opens new horizons for creativity—whether through generative image tools like DALL·E or video platforms—it also amplifies the risk of misinformation, fake news, and manipulation. The challenge lies in finding the balance between creative freedom and protecting democratic discourse.
- Commercial vs. public control: As Sander Veenhof (VPRO Medialab) noted, public broadcasters can ethically “cross boundaries” closed to profit-driven entities.
- Western bias: Audience members challenged “universal” values like autonomy or safety—underscoring the need for truly inclusive frameworks that reflect diverse realities.
Liliana argued:
“We need accountability for AI’s global impact—not just in boardrooms, but in communities. Who answers for the damage when misinformation goes viral? We must ensure AI empowers citizens without undermining the trust that democracy depends on.”
Building accountable AI
In an era where algorithms increasingly shape public opinion, Liliana stressed that accountability must be non-negotiable. The political, economic, and ecological consequences of unchecked AI-driven misinformation demand clear responsibility—both from tech companies and from the systems we choose to use.
She also raised a crucial paradox: while companies like OpenAI brand themselves as “open,” their models are far from transparent. In contrast, some open-source initiatives—such as China’s DeepSeek—offer accessible model weights, allowing independent replication and scrutiny.
Moreover, true accountability starts by exposing AI’s human infrastructure—like the underpaid labor behind “magical” chatbots. Read more about this challenge in the AlgorithmWatch article “The AI Revolution Comes With the Exploitation of Gig Workers”.
Furthermore, as Abdo Hassan (CriticalTech) noted, we need situated, co-created models where users are “co-cultivators,” not passive consumers.
Empowering citizens through choice
CollectiveUP’s perspective is that democracy is built not only in parliaments and courts, but also in the platforms we use every day. Liliana encouraged practical action—like choosing decentralised, ethical platforms such as Mastodon over mainstream social media—to shift power back towards the public.
Small, collective choices can be powerful when multiplied. This is where digital literacy and the democratisation of AI knowledge become vital—helping citizens understand how algorithms work, who controls them, and how to make informed tech decisions.
A call for public AI
The panel concluded that building AI for the public good requires shared ownership and accountability. Civil society organisations, educational institutions, libraries, and public broadcasters could play a leading role in creating open, transparent AI models—avoiding the concentration of power in a few corporate hands.
Liliana’s message was clear: we need AI systems that reflect European democratic values, contribute to the Sustainable Development Goals, and ensure that technology remains a tool for emancipation, not oppression.
Why urgent action can’t wait
At CollectiveUP, we believe that shaping the future of AI is not just a technical challenge—it’s a democratic one. Every choice we make today, from the policies we advocate to the platforms we use, will define the media landscape and civic freedoms of tomorrow.
The warm-up is over. Now is the time for action.
Esther Hummelberg (Hogeschool van Amsterdam and Society 5.0 Festival) warned: “With AI, we’re already late.” Unlike social media’s reactive regulation, AI embeds itself in our hybrid realities—blending digital and physical worlds. Once entrenched, reshaping it becomes exponentially harder.
Our call? Scale solutions now:
- Individual choices: Opt for ethical platforms (e.g., Mastodon).
- Systemic shifts: Fund public, localized AI models (“digital commons”) to break Big Tech’s grip.
- Transparency: Demand models with openly published, inspectable weights (like DeepSeek) over “open-washed” giants (e.g., OpenAI).
Join our movement!
The Amsterdam panel wasn’t just talk—it was a warm-up for action. As Liliana declared:
“Democracy isn’t a spectator sport. It’s co-created by choices we make daily—from the apps we use to the values we code into our future.”
→ Partner with CollectiveUP → Contact Us