
When Radio Dreams - Butterflies, Synthetic Audio, and Parallel Realities

  • Writer: Nicholas Glean
  • Jan 9
  • 9 min read

Our first blog of 2026, and I'm starting with a confession: I'm not sure what to call the thing I've been listening to and creating.


It sounds like a radio drama. It has all the hallmarks—voices, music, narrative flow, that intimate quality that makes audio feel like it's speaking directly to you. But the more I think about it, the more I realise: this isn't radio drama at all. It's something else entirely. Something new.


And here's the strange part: both things can be true at once.


[Image: eye-level view of a brainstorming session with sticky notes and sketches]
The butterfly dream mirrors the shift taking place between AI audio and radio: multiple realities existing simultaneously in the same space, each equally real depending on which signal consciousness receives.

The Radio That Isn't Radio


Let me start with something most of us do without thinking: opening Spotify or another streaming service. You press play, and what do you get? Music flows. A voice introduces the next track. There's curation, rhythm, a sense of someone (or something) guiding your experience. It feels like radio.


But here's what's actually happening: algorithms are analysing your listening history, predicting what you want to hear, and generating a personalised stream that exists only for you. There are no radio waves. No broadcast tower. No electromagnetic signals bouncing through the atmosphere. The word "radio" has become a kind of costume that streaming platforms wear to make us comfortable.
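
To make that concrete, here is a minimal sketch in Python of what the "radio" costume hides: a ranking function over a catalogue, generated per listener, with no transmitter anywhere. Everything in it (the catalogue, the genre-counting heuristic, the function names) is a hypothetical illustration, not any real platform's algorithm.

    from collections import Counter

    # A toy catalogue standing in for a streaming library.
    CATALOGUE = {
        "Track A": {"genre": "ambient"},
        "Track B": {"genre": "ambient"},
        "Track C": {"genre": "techno"},
        "Track D": {"genre": "jazz"},
    }

    def personalised_stream(history, n=3):
        """Rank unplayed tracks by how often their genre appears in the
        listener's history. A crude stand-in for collaborative filtering."""
        genre_counts = Counter(
            CATALOGUE[t]["genre"] for t in history if t in CATALOGUE
        )
        unplayed = [t for t in CATALOGUE if t not in history]
        return sorted(
            unplayed,
            key=lambda t: genre_counts[CATALOGUE[t]["genre"]],
            reverse=True,
        )[:n]

    # One listener's history produces one listener's "station".
    print(personalised_stream(["Track A"]))  # ambient floats to the top

The point of the sketch is structural: the output exists only at the moment of the request, for one listener, which is exactly what a broadcast is not.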


Marshall McLuhan famously said, "The medium is the message." But what happens when the medium pretends to be another medium? When what we're experiencing is a simulation of radio rather than radio itself?


This isn't just semantic fussiness. It matters because we import assumptions based on what we think we're experiencing. When I say "I heard it on the radio," I'm invoking a particular kind of authority, a particular relationship to reality. But if what I actually experienced was an algorithmic generation, that changes everything.


Two Butterflies: The Coexistence of Realities


There's an ancient Chinese story about the philosopher Zhuangzi, who dreamed he was a butterfly. When he woke up, he couldn't tell: was he Zhuangzi who had dreamed of being a butterfly, or was he a butterfly now dreaming of being Zhuangzi?


The relationship between traditional radio and AI-generated audio reminds me of this paradox—but with a twist. They're not one thing, uncertain of their identity. They're two separate things that can both exist simultaneously, each creating its own valid reality.

Traditional radio exists: electromagnetic waves, broadcast transmission, and human voices captured and sent out into the world. It's indexical—meaning it bears a direct, physical trace of the reality it represents. When you hear a voice on traditional radio, someone actually spoke those words into a microphone.


AI-generated audio exists too: synthetic voices, algorithmically generated soundscapes, narratives that adapt in real-time. But it operates in what the philosopher Jean Baudrillard called the realm of simulacra—representations that have no original, copies without a source. The voice you hear was never spoken. The soundscape was never recorded. They were generated.


Neither replaces the other. They coexist as parallel forms, separate realities that we navigate simultaneously.


Narada's Dream: When the Synthetic Becomes Real


There's another story I keep thinking about, this one from ancient Indian folklore. The sage Narada asks the god Vishnu about the nature of maya (illusion). In response, Vishnu transforms Narada's experience: Narada lives an entire lifetime, falls in love, builds a family, and experiences joy and tragedy. Then suddenly he's back with Vishnu, and only moments have passed. The entire life was an illusion, but the emotions, the experiences, the meaning he found? Those were real.


This is what AI audio drama does. It creates an experience that is, in one sense, entirely synthetic—no human actors spoke the lines, no physical soundscapes were recorded. But the experience itself? The emotional response, the narrative immersion, the meaning you derive? Those are genuinely real.


The question isn't whether AI audio drama is "fake" or "real." It's whether synthetic experiences can create their own reality—a reality that doesn't mirror our physical world but exists as a separate, parallel space of meaning and experience.

I think they can. And I think that means we're witnessing the birth of an entirely new medium.


The Reinvention of Radio Drama


Here's where this gets really exciting for me: AI isn't just changing how we listen to music or news. It's breathing new life into radio drama—transforming it from a largely historical art form into something dynamic, responsive, and genuinely new.

Think about what becomes possible:


Synthetic voice actors can generate any character you need, with any accent, any emotional register, any vocal quality—without booking a recording session or hiring an ensemble cast. You can iterate endlessly, adjusting performances until they're exactly right.


Dynamic soundscapes respond to narrative beats in real-time. As tension builds in the story, the audio environment intensifies automatically. As characters move through space, the sonic landscape shifts seamlessly. It's not just sound effects added in post-production; it's an environment that lives with the narrative (sketched in code after this list).


Adaptive storytelling means the narrative can respond to listener choices, creating genuinely interactive drama. Not just branching paths recorded in advance, but stories that generate themselves based on your decisions.
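
To ground the last two possibilities, here is a small, hypothetical Python sketch: a single tension value drives the soundscape's parameters on every narrative beat, and a tiny state machine stands in for adaptive storytelling. The parameter names, ranges, and scene graph are all assumptions made for illustration, not a real audio engine's API.

    # Hypothetical sketch: tension (0.0 to 1.0) maps to synthesis parameters.
    def soundscape_params(tension):
        """Interpolate audio parameters from a single tension value."""
        t = max(0.0, min(1.0, tension))
        return {
            "tempo_bpm": 60 + 80 * t,       # 60 at rest, 140 at the climax
            "low_pass_hz": 800 + 7200 * t,  # the mix brightens as tension rises
            "reverb_mix": 0.6 - 0.4 * t,    # drier, closer sound under stress
            "drone_level": 0.2 + 0.6 * t,
        }

    # Adaptive storytelling as a minimal state machine: each listener choice
    # selects the next scene, and the soundscape regenerates for that scene
    # rather than being mixed in advance.
    SCENES = {
        "lighthouse": {"tension": 0.2, "choices": {"climb": "storm", "wait": "calm"}},
        "storm":      {"tension": 0.9, "choices": {}},
        "calm":       {"tension": 0.1, "choices": {}},
    }

    def step(scene, choice):
        nxt = SCENES[scene]["choices"].get(choice, scene)
        return nxt, soundscape_params(SCENES[nxt]["tension"])

    print(step("lighthouse", "climb"))  # high-tension parameters for "storm"

A real system would drive a synthesis engine rather than print a dictionary, but the shape is the same: the audio is computed from the story's state, beat by beat, instead of being captured in advance.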


This isn't traditional radio drama with better tools. It's a fundamentally different form.


Why We Need a New Language


Here's where I need to be precise about something: calling this "AI radio" or "AI radio drama" is misleading in the same way that calling AI-generated moving images "AI film" is misleading.


Film captures continuous motion: 24, 30, or 60 frames per second of actual events that occurred in physical space. What AI generates is closer to what we used to call photodrama: sequential stills with interpolated motion between them, creating the appearance of continuous movement without actually capturing it.


Similarly, radio broadcasts transmit signals through space that receivers pick up. What AI creates might be better described as synthetic audio or generative soundwork—content that simulates the form and feel of radio while operating on entirely different principles.


This isn't pedantry. Language shapes how we think. If we keep calling synthetic audio "radio," we import assumptions that don't apply. We think about broadcast regulations, transmission rights, and capturing reality. But synthetic audio doesn't capture anything. It generates.


We need new terms for new things.


The Multiple Realities of Synthetic Sound


Here's something that keeps me up at night: when you listen to an AI-generated podcast host, what exactly are you experiencing?

That voice is simultaneously:


Computationally real—it exists as code executing on servers, processing data, consuming electricity. You can measure it, analyse it, and debug it.


Phenomenologically real—you experience it as a voice, as presence, as something that feels like encountering another person. Your emotional responses are genuine.


Functionally real—it does the job of a host. It introduces content, creates flow, and establishes mood. It works.


But ontologically, it's something else entirely. There's no person behind that voice. No consciousness. No one went home after recording and made dinner.


The French philosopher Maurice Merleau-Ponty wrote about how our experience of reality is fundamentally embodied: we encounter the world through our physical presence in it. But what does it mean to experience "presence" from an entity that has no body, no physical existence beyond flickering bits on a server?


This is what I mean by coexisting realities. The synthetic host and the human host are both "real" in some registers and fundamentally different in others. They don't cancel each other out. They exist in parallel.


The Question of Authenticity


So if synthetic audio creates its own reality—if the experiences it generates are genuinely meaningful even though they don't represent physical events—what does "authenticity" even mean anymore?


Traditionally, we judged media by how faithfully it represented reality. A photograph was credible because light from a real scene caused the image to form. A radio broadcast was trustworthy because real people spoke those words into real microphones.


But synthetic audio doesn't have that indexical relationship to reality. The American philosopher Charles Sanders Peirce distinguished between different kinds of signs: icons (which resemble what they represent), symbols (which have arbitrary relationships), and indexes (which have direct, causal connections). A photograph is an index—light bounced off a scene and left a trace.


Synthetic audio is an icon. It resembles audio that was captured, but it has no causal connection to any physical sound event.


Does this make it inauthentic?


Not necessarily. What if we shift from judging authenticity by origin to judging it by effect? If an AI-generated audio drama moves you emotionally, teaches you something, creates genuine meaning in your life—is it inauthentic? The experience is real, even if the source is synthetic.


This is what I call functional authenticity: measuring media not by where it comes from, but by what it accomplishes, how it makes us feel.


Why Both Can Exist


Here's what I don't think: I don't think AI audio will replace traditional radio and audio drama any more than cinema killed theatre or television killed radio.


What happens instead is that new forms establish themselves alongside old ones. They serve different purposes, create different experiences, and appeal to different contexts and needs.


Traditional radio has qualities that synthetic audio can't replicate: the knowledge that a real human is speaking in real-time, the indexical connection to events happening in the world, and the particular intimacy of human presence and vulnerability.


Synthetic audio has qualities that traditional radio can't match: infinite scalability, perfect consistency, adaptive responsiveness, and the ability to create experiences that would be impossible or impractical with human performers.


They're not in competition. They're two different butterflies, both equally real in their own way.


What This Means for Creators


If you're someone who makes audio content—whether radio, podcasts, audio drama, or sound art—this transformation matters deeply.


You're not being replaced. But you are being invited to work with an entirely new medium, one that operates according to different rules and offers different possibilities.


The challenge is learning to think of synthetic audio not as a cheaper, easier version of traditional production, but as something categorically different. It's not about using AI to simulate what humans do. It's about exploring what becomes possible when audio can be generative, responsive, and adaptive.


The future likely isn't "human audio" or "AI audio." It's the creative interplay between the two, led by artists who understand both traditional production and synthetic generation and who can move fluidly between capturing reality and generating new worlds.


The Ethical Dimension


Of course, none of this happens in a vacuum. When synthetic audio can perfectly mimic human voices, when AI can generate performances that sound indistinguishable from recorded actors, we face serious ethical questions:


Transparency: Do listeners have a right to know when content is synthetic? I think they do. Not because synthetic is inferior, but because the ontological status matters for how we interpret and trust what we're hearing.


Consent: When AI systems are trained on recordings of human voices, did those people consent to their voices being used to generate new content? The question of training data and attribution remains largely unresolved.


Labour: What happens to voice actors, sound designers, and audio producers when AI can generate similar outputs faster and cheaper? How do we ensure that technological advancement serves human flourishing rather than just efficiency?


Authenticity in journalism: If synthetic audio can generate fake interviews or news reports that sound completely real, what happens to audio as evidence? How do we maintain trust in audio journalism?


These aren't hypothetical concerns. They're immediate, practical questions that the audio community needs to grapple with now.


Living in the Hyphen


The philosopher Bruno Latour taught us to pay attention to hybrids—things that don't fit neatly into our categories, that exist in the spaces between nature and culture, human and machine, real and artificial.


Synthetic audio is a hybrid. It's not quite radio, not quite something entirely new. It exists in the hyphen between capture and generation, between representation and creation, between indexical and iconic.


And maybe that's okay. Maybe we don't need to resolve the paradox or choose one reality over another. Maybe the future of audio is learning to navigate that hybrid space with clarity, creativity, and care.


Moving Forward


So where does this leave us?


We're living through a moment when the medium we thought was radio has split into two parallel streams. One continues the tradition of broadcast, of electromagnetic waves, of human voices captured and transmitted. The other opens up something new—synthetic, generative, creating its own realities rather than representing ours.

Both are valuable. Both are real. Neither replaces the other.


The challenge isn't to resist this transformation or cling to old definitions. It's to develop new ways of understanding, creating, and engaging with these parallel realities. It's to build the literacy and ethics that allow us to navigate synthetic spaces responsibly while preserving what's irreplaceable about human connection and creativity.


Like Narada emerging from his illusion, we might discover that the synthetic experience was real in ways we didn't expect. The life he lived wasn't "real" in the physical sense, but the meaning, emotion, and transformation were. Those were entirely genuine.


And like Zhuangzi and the butterfly, we might stop asking "which is the real radio?" and start appreciating that both realities can exist simultaneously, each valid in its own register.


The question isn't whether this is good or bad. It's already happening. The question is: how do we learn to be good creators, thoughtful listeners, and ethical participants in these new synthetic spaces?


That's the conversation I want to have. That's what this blog is about.

 

Welcome to 2026. The butterflies are multiplying, and I, for one, find that fascinating.


What's your experience been like? Have you noticed this shift? Are you creating with synthetic audio? Let me know in the comments—I'd love to hear what you're discovering.

 
 
 
