Episode #112 – Aurélien Hérault: How Deezer Is Fighting AI Spam and Rethinking Music Recommendations

Streaming platforms have become gatekeepers of global music discovery, but with that power comes urgent responsibility. In episode #112 of Sound Connections, Jakob Wredstrøm sits down with Aurélien Hérault, Chief Innovation Officer at Deezer, to explore how one of the world’s top DSPs is tackling ethical AI in music recommendation systems. The conversation moves from the technical sophistication behind personalization engines to the growing threat of synthetic music spam and royalty fraud.

What makes this episode unique isn’t just Deezer’s use of AI; it’s the company’s ethical stance. From rejecting fully AI-generated songs in recommendations to prioritizing transparency in streaming platform algorithms, Deezer is quietly setting standards other DSPs will have to follow. For founders, DSP operators, and investors, this conversation offers a blueprint for building trustworthy, human-first, and future-proof streaming infrastructure.

Ethical AI in Music Recommendation Systems

In a digital landscape dominated by personalization engines, ethical AI in music recommendation systems is no longer optional; it’s essential. Aurélien Hérault explains how Deezer has evolved from basic genre tagging into a dynamic, ethics-driven ecosystem that personalizes based on behavior, acoustics, time of day, and mood.

Deezer now builds a "galaxy" model: a space-like architecture where tracks are clustered based on shared traits. Users are positioned within this galaxy based on their behavior and context, ensuring that recommendations feel timely and relevant. This approach represents ethical AI in music recommendation systems because it doesn’t just optimize for engagement; it aims for deeper personalization that respects user intent and musical diversity.
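The podcast doesn’t go into implementation detail, but the galaxy idea can be made concrete with a minimal sketch: tracks become points in an embedding space, points are grouped into neighborhoods, and a listener is positioned among them from recent plays. Everything below (the 32-dimensional vectors, KMeans clustering, cosine ranking) is an illustrative assumption, not Deezer’s actual model.

```python
# Minimal, illustrative "galaxy"-style recommender. All model choices here
# are assumptions for demonstration; this is not Deezer's implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
track_ids = [f"track_{i}" for i in range(1000)]
track_vecs = rng.normal(size=(1000, 32))          # hypothetical track embeddings

# Group tracks into "constellations" of similar sound and usage.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(track_vecs)

def recommend(recent_idx: list[int], k: int = 10) -> list[str]:
    """Position the listener at the centroid of recently played tracks,
    then rank tracks from the nearest constellation by similarity."""
    user_point = track_vecs[recent_idx].mean(axis=0, keepdims=True)
    cluster = km.predict(user_point)[0]           # nearest neighborhood in the galaxy
    members = np.where(km.labels_ == cluster)[0]
    sims = cosine_similarity(user_point, track_vecs[members])[0]
    ranked = members[np.argsort(-sims)]
    return [track_ids[i] for i in ranked[:k] if i not in recent_idx]

# Example: recommendations seeded from three recently played tracks.
print(recommend([3, 57, 402]))
```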

This model allows for multiple algorithms tuned to different goals, such as discovery mode versus lean-back listening, each judged against different evaluation metrics (e.g. skip rate, retention, and collection rate). These methods reflect a recognition that recommendation systems carry not just technical, but cultural and ethical weight.
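To make those evaluation metrics tangible, here is a rough sketch of how skip rate, completion (a simple retention proxy), and collection rate might be computed from a play log. The event schema and thresholds are assumptions for illustration, not Deezer’s telemetry.

```python
# Illustrative computation of skip, completion, and collection rates from a
# simple play log. The schema and thresholds are assumptions, not Deezer's.
from dataclasses import dataclass

@dataclass
class PlayEvent:
    track_id: str
    seconds_played: float
    track_length: float
    added_to_collection: bool

def evaluate(events: list[PlayEvent], skip_threshold: float = 30.0) -> dict:
    n = len(events)
    skips = sum(e.seconds_played < skip_threshold for e in events)
    completed = sum(e.seconds_played >= 0.8 * e.track_length for e in events)
    collected = sum(e.added_to_collection for e in events)
    return {
        "skip_rate": skips / n,           # share of recommendations skipped early
        "completion_rate": completed / n, # rough proxy for in-session retention
        "collection_rate": collected / n, # share of recommendations saved
    }

log = [
    PlayEvent("a", 12.0, 210.0, False),   # skipped
    PlayEvent("b", 205.0, 210.0, True),   # finished and saved
    PlayEvent("c", 180.0, 200.0, False),  # finished
]
print(evaluate(log))
```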

This system design avoids the algorithmic bias that can easily arise when popular or legacy content is overexposed. Instead, Deezer uses AI ethically to balance personalization with fairness, ensuring new artists are placed properly within the recommendation system. That placement matters: a song placed incorrectly may be skipped and then deprioritized, which the platform acknowledges can unfairly suppress quality music.

This point reflects a growing industry-wide concern: when AI is not designed carefully, it can reinforce inequalities or obscure diverse voices, a pattern noted by researchers studying algorithmic amplification in music discovery.

Furthermore, Deezer collaborates with academic institutions to monitor and correct bias in its recommendation models. Aurélien emphasizes transparency and research as pillars of their R&D process.

This decision positions Deezer among a small group of music platforms taking an evidence-based approach to algorithm governance, an essential move as AI-generated music floods streaming platforms and systems grow more complex (The Verge).

Deezer AI Detection of Generative Music

Deezer has taken decisive steps in the battle against generative music spam, rolling out a sophisticated AI detection system to identify and exclude fully AI-generated tracks before they corrupt the listener experience or dilute artist payouts.

According to Deezer’s own figures, roughly 18% of all new uploads, over 20,000 tracks per day, are fully AI-generated, a sharp rise from the 10% reported just a few months earlier. In response, Deezer deployed its AI detection tool in January 2025, backed by two patent filings and capable of identifying fully synthetic music from dominant models like Suno and Udio, even when datasets are limited.

The detection system tags flagged tracks and deprioritizes or excludes them entirely from algorithm-based and editorial recommendations, safeguarding both discovery quality and artist equity.
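The tagging pipeline itself isn’t public, but the behavior described, flagged tracks removed from candidate pools before they can be ranked or placed, can be sketched as a simple filter step. The tag name and catalog structure below are assumptions made for illustration.

```python
# Sketch of excluding flagged tracks from recommendation candidates. The
# "fully_ai_generated" tag and catalog shape are illustrative assumptions,
# not Deezer's internal schema.
from typing import Iterable

def filter_candidates(candidates: Iterable[dict]) -> list[dict]:
    """Drop tracks tagged as fully AI-generated before ranking, so they
    never reach algorithmic or editorial placements."""
    return [t for t in candidates if not t.get("fully_ai_generated", False)]

catalog = [
    {"id": "t1", "title": "Human song", "fully_ai_generated": False},
    {"id": "t2", "title": "Synthetic upload", "fully_ai_generated": True},
    {"id": "t3", "title": "Untagged upload"},
]
print([t["id"] for t in filter_candidates(catalog)])  # ['t1', 't3']
```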

Deezer’s approach is significant for two reasons.

  1. It confirms that Deezer AI detection of generative music can catch fully synthesized content at scale.

  2. This effort aligns with industry-wide needs to maintain catalog integrity as AI generation accelerates.

In summary, Deezer’s proactive detection pipeline, powered by real-time scanning, model retraining, tagging, and recommendation exclusion, sets a new standard for ethical music streaming infrastructure. It confirms that DSPs can build robust defenses against synthetic noise without compromising innovation.

Combating Royalty Fraud in Music Streaming

Deezer is tackling the growing crisis of royalty fraud in music streaming by combining machine learning detection with strategic policy choices, a strategy rooted in insights from Aurélien Hérault and reinforced by peer-reviewed industry findings.

Fraudulent AI-Generated Streams

Aurélien Hérault warns that a large portion of AI-generated tracks aren't intended for actual listeners. Instead, they’re being used by bots both to upload and to simulate plays, designed purely to siphon streaming royalties. He describes this as “stream manipulation… bots generating music and listened to by other bots just to capture royalties,” emphasizing that it’s not a fringe issue, but a real danger for the industry.

Tracking plays alone is no longer sufficient to determine genuine artist engagement. Platforms like Deezer must actively analyze listening behaviors to distinguish between authentic user activity and manipulated data.
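The episode doesn’t spell out which behavioral signals Deezer analyzes, but this kind of analysis typically looks at things like how many distinct listeners a track has, how long plays last, and how regular the play timing is. The sketch below combines three such heuristics; the features and thresholds are assumptions for illustration, and a real system would rely on far richer signals and learned models.

```python
# Illustrative heuristics for spotting manipulated streams. Features and
# thresholds are assumptions for demonstration, not Deezer's detection logic.
from statistics import pstdev

def suspicious_play_pattern(listener_ids: list[str],
                            play_seconds: list[float],
                            gaps_between_plays: list[float]) -> bool:
    distinct_listeners = len(set(listener_ids))
    avg_play = sum(play_seconds) / len(play_seconds)
    gap_jitter = pstdev(gaps_between_plays) if len(gaps_between_plays) > 1 else 0.0

    few_listeners = distinct_listeners < 5   # the same accounts looping one track
    barely_played = avg_play < 35.0          # plays just long enough to count
    metronomic = gap_jitter < 1.0            # bot-like, perfectly regular timing
    return sum([few_listeners, barely_played, metronomic]) >= 2

# A track looped by two accounts every 31 seconds trips the check.
print(suspicious_play_pattern(["u1", "u2"] * 50, [31.0] * 100, [31.0] * 99))
```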

The Guardian supports this, revealing that many generative-music tracks on Deezer are artificially streamed, and the DSP is now actively preventing these from earning royalties.

Scale of AI Uploads and Implications

Aurélien also notes the rapid rise of AI-generated tracks, from roughly 10% to 18% of new uploads within a few months. That jump means almost one out of every five new uploads on Deezer is fully synthesized, produced by AI rather than by humans.

With such a high volume, platforms must scale detection and enforcement systems rapidly, or risk being overwhelmed by synthetic noise and exploitation.

Detection, Deprioritization, and Prevention

Deezer’s response is clear: they’re not banning AI-generated music outright, but they have established firm boundaries. Aurélien explains that while content remains available to users, it’s “not promoted” through recommendations or editorial playlists.

  • Why this matters: Removing these tracks from algorithmic feeds preserves the integrity of discovery tools, so genuine artists aren’t overshadowed by spam.

  • Strategic transparency: Publicizing this policy signals to artists and listeners that Deezer values fairness and quality.

Music Business Worldwide details Deezer’s tagging pipeline and confirms that flagged tracks are excluded from both editorial and algorithmic placements.

Why Deezer’s Approach Matters

  1. Preserves ecosystem fairness: By denying promotional visibility to AI-generated spam, Deezer protects emerging artists and ensures real listener behavior drives exposure metrics.

  2. Discourages malicious tactics: Fraudsters know their tracks won’t reach playlists or revenue metrics, reducing the incentive for automated content farming.

  3. Maintains user trust: Deprioritizing questionable content ensures a better listener experience, where recommendations feel high quality, not flooded with noise.

  4. Scales responsibly: As AI upload volumes surge, an automated detection-and-tagging system allows Deezer to manage fraud at scale, without resorting to absolute bans that may stifle innovation.


Human vs Algorithm Curation in Music Discovery

While algorithms excel at processing vast datasets, the debate over human vs algorithm curation in music discovery underscores the importance of expert intuition, cultural sensibility, and contextual nuance that AI alone can’t replicate. Aurélien Hérault emphasizes this in the podcast: despite Deezer’s richly layered algorithms, which blend acoustic embeddings, user behavior, and contextual metadata, human editors play a pivotal role in steering discovery where the machine falls short.

Hybrid Curation: Expertise + Automation

Aurélien notes that Deezer’s systems are crafted to “go deeper in granularity,” clustering tracks based on sound features, listening context, mood, and location. But this “deep embedding” model isn’t designed to replace humans; rather, editorial teams act as essential gatekeepers. They step in where algorithmic blind spots exist, for instance, by spotlighting local talent, curating niche genres, or responding to cultural trends in ways AI can’t perceive.

Deezer’s algorithms discover broadly, but human curators refine, contextualize, and correct.

Why Human Curation Still Matters

1. Emotional and Cultural Context

Human curators add emotional depth and cultural resonance, qualities absent from data-driven analysis alone. A playlist curated by hand can balance mood, lyrical themes, or regional flavor in ways algorithmic rules simply can’t match.

2. Mitigating Bias and Blind Spots

Even sophisticated AI can favor mainstream or popular content, inadvertently reinforcing inequalities. Aurélien highlights that Deezer collaborates with academics to cultivate balanced visibility for emerging and underrepresented artists, a task where human listening and judgment help correct algorithmic bias. Similar concerns are echoed in UK government findings, which link reduced cultural diversity to overly opaque recommendation algorithms.
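Neither the podcast nor the article names the fairness metrics Deezer tracks with its academic partners; one common, simple proxy is how concentrated recommendation exposure is across artists. The sketch below measures what share of impressions the most-played artists capture, with the log format and threshold assumed purely for illustration.

```python
# Illustrative exposure-concentration check over a batch of recommendations.
# The impression log and the head fraction are assumptions, not the metrics
# Deezer or its academic partners actually use.
from collections import Counter

def head_share(recommended_artist_ids: list[str], head_fraction: float = 0.1) -> float:
    """Fraction of all impressions captured by the top `head_fraction` of
    artists. Values near 1.0 mean a handful of artists dominate exposure."""
    counts = Counter(recommended_artist_ids)
    ranked = sorted(counts.values(), reverse=True)
    head_n = max(1, int(len(ranked) * head_fraction))
    return sum(ranked[:head_n]) / sum(ranked)

impressions = ["star"] * 800 + ["indie_a"] * 50 + ["indie_b"] * 50 + ["indie_c"] * 100
print(round(head_share(impressions), 2))  # 0.8: one artist takes 80% of exposure
```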

3. Responding to Trends

When a local hit breaks or a fresh creator emerges, human eyes and ears spot potential faster than AI models can retrain. Many streaming platforms, including Apple Music and Netflix, continue investing heavily in editorial teams because curated signals often drive trends faster and more meaningfully than algorithms alone.

Results of a Hybrid Model

Deezer’s combined strategy delivers measurable benefits:

  • Improved discovery diversity: Independent and local artists are regularly surfaced.

  • Reduced skip rates: Curated lists tuned to mood and context retain listeners better.

  • Increased trust: Transparency in curation builds confidence among users and creators.

Transparency in Streaming Platform Algorithms

Deezer recognizes that in today’s AI-driven music landscape, transparency in streaming platform algorithms is no longer optional, it’s essential for building trust. Aurélien Hérault stresses that opaque recommendation models can undermine user confidence. His insight underscores Deezer’s commitment to explainability, not just performance.

Deezer sets out to demystify why certain songs appear in Flow or suggestions, integrating contextual cues such as mood, acoustics, and listener behavior into user explanations. Aurélien explains that while their system uses deep embedding models, they are actively working on making these choices legible to users, a move that emphasizes transparency in streaming platform algorithms without compromising user experience.
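The episode doesn’t describe how those explanations are rendered to listeners; a minimal sketch of turning contextual signals into a user-facing reason string could look like the following, with every field name and phrase assumed for illustration.

```python
# Minimal sketch of a user-facing recommendation explanation. Signal names
# and wording are illustrative assumptions, not Deezer's UI copy.
def explain_recommendation(signals: dict) -> str:
    reasons = []
    if signals.get("similar_to_recent"):
        reasons.append(f"it sounds like {signals['similar_to_recent']} you played recently")
    if signals.get("mood"):
        reasons.append(f"it fits your current {signals['mood']} mood")
    if signals.get("time_of_day"):
        reasons.append(f"you often listen to this style in the {signals['time_of_day']}")
    if not reasons:
        return "Recommended for you"
    return "Recommended because " + " and ".join(reasons) + "."

print(explain_recommendation({
    "similar_to_recent": "an artist",
    "mood": "focus",
    "time_of_day": "morning",
}))
```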

Aurélien mentions that Deezer partners with universities to audit bias in recommendation outputs. This audit process is part of a broader push toward transparency in streaming platform algorithms, ensuring independent verification of fairness and diversity. According to Aurélien, such partnerships help identify blind spots and reinforce the integrity of Deezer's system.

While preserving proprietary technology, Deezer has publicly shared outlines of its detection system, for example, describing what constitutes a fully AI-generated track. Aurélien explains this contributes to transparency in streaming platform algorithms by revealing key decision criteria without compromising IP.

Deezer’s commitment to transparency in streaming platform algorithms was a central theme in the podcast:

  • Users gain clearer understanding of why tracks are recommended.

  • Third-party audits actively diagnose and correct biases.

  • Deezer is walking a line between openness and safeguarding proprietary systems.

Stream Manipulation Prevention in DSPs

Aurélien Hérault underscores a vital focus in the podcast: stream manipulation prevention in DSPs isn’t a nice-to-have; it’s essential to preserving trust and fairness. He warns that the rise of AI-generated content isn’t just a creative shift, but a playground for fraud.

Aurélien emphasizes that AI-generated songs are being used to game the system: these tracks are not for listeners, but for fraudulent plays. He states that Deezer actively detects these schemes and excludes that music from being promoted, a commitment to preventing bot-driven exploitation.

The threat isn’t theoretical; it’s happening daily, and DSPs must treat stream manipulation prevention as a top priority. The Guardian confirms this surge, reporting that Deezer found up to 70% of streams on AI-generated tracks were fraudulent, and the platform is excluding them from royalty payouts and recommendations.

Technical Solutions Scaling with the Threat

According to Aurélien, detection systems at Deezer are designed to scale as upload volume climbs, tagging non-legitimate tracks and ensuring they never surface in algorithmic or editorial feeds.

Deezer’s AI systems continuously learn and adapt, fighting fraud by disrupting promotion pipelines rather than deleting content indiscriminately. Deezer’s January 2025 release confirms its detection tool can spot AI-generated music from tools like Suno or Udio, tagging it for non-promotion.
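The retraining cadence and model aren’t public, but “fighting AI with AI” generally means refitting a detector on freshly labeled examples so it keeps pace with new generators. The sketch below is a hedged illustration of that loop; the features, labels, and classifier are assumptions, not the patented approach Deezer describes.

```python
# Hedged sketch of periodically retraining a synthetic-music detector on
# newly labeled tracks. Features, labels, and model are assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_detector(features: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """Fit a fresh detector on the latest labeled batch:
    1 = confirmed fully AI-generated, 0 = human-made."""
    model = LogisticRegression(max_iter=1000)
    model.fit(features, labels)
    return model

# On some cadence (assumed), append newly confirmed examples and refit so
# the detector adapts as generators evolve.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 16))        # hypothetical audio features
y = rng.integers(0, 2, size=500)      # hypothetical labels
detector = retrain_detector(X, y)
print(detector.predict(X[:5]))
```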

Building Trust Through Transparency & Partnerships

Aurélien highlights that fairness means more than rules; it requires knowledge. Deezer openly labels AI-generated content and ensures users are informed, building a transparent ecosystem where stream manipulation prevention in DSPs aligns with listener experience.

Transparency isn’t just ethical; it’s strategic. Labeling builds trust and educates listeners.

What This Means for the Industry

  1. Systems must block first, investigate later – By cutting off promotion rather than deleting every upload, Deezer neutralizes fraud at scale.

  2. Models must evolve with threats – As Aurélien states, they’re “fighting AI with AI,” retraining detection systems to stay ahead.

  3. Educate users while enforcing rules – Labeling informs listeners; blocking fraudulent content protects trust.

Conclusion: What Deezer’s Strategy Reveals About the Future of Ethical Music Streaming

Deezer’s handling of AI-generated content, fraud detection, and algorithmic fairness shows that DSPs must evolve beyond engagement metrics. The platform’s infrastructure is deliberately designed to protect trust by filtering synthetic content from recommendations and royalties without outright censorship. This ensures genuine artists are not buried beneath algorithmic noise.

Rather than relying on automation alone, Deezer’s model embraces human oversight. Editors complement AI by highlighting local and emerging talent, correcting bias, and maintaining cultural depth in discovery. This hybrid approach enhances both relevance and fairness, proving that human judgment remains essential in music curation.

Transparency is also core to Deezer’s design. By explaining how recommendations work and partnering with researchers to audit its systems, Deezer builds confidence with artists and listeners alike. In an industry under increasing scrutiny, openness becomes a competitive advantage.

For founders and DSP operators, the takeaway is clear: platform success now depends on integrity. Fraud detection, algorithmic explainability, and editorial investment are no longer optional, they’re foundational. Deezer shows that ethical streaming isn’t just idealistic, it’s operationally achievable, and increasingly, a strategic imperative.