The recent shutdown of Poland's AI-powered radio station has sparked further debate about artificial intelligence in media. But the real story isn't about AI's capabilities—it's about human judgment and our tendency to blame technology for human decisions.

In late October, OFF Radio Krakow made headlines by dismissing its journalists and launching what it called "the first experiment in Poland" with AI-generated presenters. Within a week, the station had pulled the plug on the project amid public outcry. While media coverage has largely framed this as another cautionary tale about AI's pitfalls, the reality is more nuanced—and more human.

The station's experiment began boldly enough. Three AI-generated Gen Z presenters—Emilia, Jakub, and Alex—were introduced to revive a station that, according to Radio Krakow's head Marcin Pulit, had very few listeners. But it was the station's decision to air an "interview" with deceased Nobel laureate Wisława Szymborska that proved most controversial.

The heart of the matter is that the problem was never the AI's ability to synthesise a convincing voice; it was the human decision to use that ability in this way.

Media coverage of the event exemplifies our cultural tendency toward binary narratives: AI is bad, end of story. A more useful framing is that AI is a tool, and that there are some ways we simply should not apply it. Nor is this the first time a "fantasy" interview of questionable taste has entered the spotlight, as the fabricated AI interview with Michael Schumacher, which drew legal action from his family, demonstrated.

The technology itself—a combination of ChatGPT, ElevenLabs' speech synthesis, and Leonardo.Ai's image generation—isn't revolutionary. What caused the backlash was the human decision-making around its use.
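To make that point concrete: stitching such a pipeline together takes little more than two API calls. The sketch below is purely illustrative and rests on assumptions; it uses the public OpenAI and ElevenLabs REST endpoints, and the model name and voice ID are placeholders rather than details of the station's actual setup. A text model drafts a presenter script, and a speech-synthesis service voices it.

```python
import os
import requests

# Step 1: generate a short presenter script with a text model
# (OpenAI chat completions endpoint; model choice is illustrative).
script = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "You are a cheerful Gen Z radio presenter."},
            {"role": "user", "content": "Write a 30-second intro for the next song."},
        ],
    },
    timeout=60,
).json()["choices"][0]["message"]["content"]

# Step 2: turn the script into audio with a speech-synthesis API
# (ElevenLabs text-to-speech; the voice ID below is a hypothetical placeholder).
VOICE_ID = "your-voice-id"
audio = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": script},
    timeout=60,
)

# Save the synthesised segment, ready to be slotted into a broadcast.
with open("segment.mp3", "wb") as f:
    f.write(audio.content)
```

The point is not the code but how little of it there is: the consequential choices, what to say, in whose voice, and to what end, sit entirely on the human side.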

This case offers valuable lessons for the future of AI in media:

  1. Ethical Considerations First: The mantra "just because you can, doesn't mean you should" needs to be at the forefront of AI implementation decisions.
  2. Human Judgment Matters: The success or failure of AI initiatives often hinges not on the technology itself, but on how humans choose to deploy it.
  3. Beyond the Binary: We need to move past simplistic "AI good" versus "AI bad" narratives to have meaningful discussions about implementation.

The OFF Radio Krakow experiment didn't fail because of AI's limitations—it failed because of human choices. As more organisations explore AI integration, the key question isn't "What can AI do?" but rather "What should we do with AI?"

As we continue to integrate AI into media and other industries, we need to shift our focus from technological capabilities to human decision-making frameworks. This means developing clear ethical guidelines, understanding audience expectations, and recognising that AI is a tool whose success depends entirely on how we choose to use it.

The real story of OFF Radio Krakow isn't about AI's pitfalls—it's about the need for thoughtful human guidance in technological implementation.


