
Music Was Always Supposed to Be for Everyone

By Simon Watson


A few years ago, if you loved music but couldn't play an instrument, couldn't read notation, and couldn't afford a producer, your options for making your own music were narrow to nonexistent. You could hum into your phone. You could write lyrics in a notebook and hope someone with the technical skills you didn't have would one day turn them into something real.


That gap — between people who love music and people who can make it — has been one of the great quiet exclusions of the last century. Music became something most people consume but few people produce. The means of musical creation belonged to a class of people with training, equipment, time, and access. Everyone else got to listen.


AI music generation is changing this, and the change is bigger than most people realize.


I'm not going to pretend the songs Suno and its peers produce are technically equivalent to what trained musicians create. They're not — yet. What they are is the first time in a long time that an ordinary music lover, with no formal training and no professional setup, can sit down and make something they actually want to hear. The barrier to creation has collapsed for a population the music industry has spent decades effectively telling: this is not for you.


That collapse is what makes the moment interesting. And it's also what makes the moment fragile.


There are two distinct threats to this opening up of music creation.


The first is the obvious one — the corporate and legal pressures already in motion. Major labels are filing lawsuits. Platforms are quietly modifying how user inputs get processed (Suno's recent server-side rewriting of style descriptions, which I wrote about earlier this year, is one example). Copyright frameworks designed for a different era are being applied to new technology in ways that will probably narrow what's possible over the next few years. These are real and they deserve serious attention.


The second threat is less obvious and, in some ways, more harmful: a pattern of cultural dismissal from inside music itself.


The pattern goes like this. Someone shares an AI-generated song. A trained musician or serious enthusiast responds, often with one word: "slop." That's the entire critique. No specifics about what failed, no description of what would have made it work, no engagement with the song as a piece of music. Just dismissal.


The dismissal isn't taste. Taste articulates itself. A real critic can tell you exactly why a melody falls flat, where a chord progression cheats, what the production is trying to hide. Specific critique helps people improve. The undifferentiated "slop" response does something different — it tells the person who shared the song that what they made isn't worth talking about, and it tells everyone watching that this whole category of work isn't worth talking about either.


The cost is enormous. People who were just beginning to discover that they could make music — that they had musical ideas worth expressing — pull back. They stop sharing. They stop trying. Often they stop creating altogether. The dismissal works exactly as it's designed to work, even if no one designed it.


What I'd ask of anyone reading this who finds themselves dismissing AI music generally: try saying specifically what's wrong. If you can articulate it, the articulation will help. If you can't articulate it — if "slop" is the entire critique you have — that's worth noticing.


But I don't want this article to land as another argument about who's right. The more useful thing to point out is what we already know about how openings like this play out, because we've been here before.


The clearest example is photography. For most of the 19th century, photography required expensive equipment, chemical knowledge, darkroom access, and technical training. Photographers were a small professional class. Then Kodak put cameras in ordinary people's hands, and the snapshot revolution opened the medium to everyone. Professional photographers and art critics dismissed amateur photography for decades — too easy, too imitative, lacking craft, not real photography. The dismissal didn't stop the expansion.


Then digital cameras and smartphones democratized photography to a degree the snapshot generation couldn't have imagined. Everyone became a photographer in some functional sense. Billions of photos are taken every day. And here's the part that matters for this conversation: wedding photographers still make a living, often a better one than before. Fine art photography still has galleries, collectors, and prices. Photojournalism still exists, and the masters of the medium are still studied and revered. What disappeared were specific roles — one-hour photo labs, professional film developers, certain camera manufacturers — and specific business models. What didn't disappear was photography as a craft, an art, or a profession. The democratization expanded the field. It didn't replace what came before.


There's no reason to think music will play out differently. People will still love live concerts. They'll still pay for albums by musicians whose distinct creative voice they want to hear. They'll still value virtuosity, performance, and the specific feeling of music made by a particular human with a particular vision. AI music adds a new lane. It doesn't close the existing one.


I want to acknowledge that the new lane creates real economic pressure on working musicians, particularly those who depend on session work, sync licensing, and background-music commissions. That's not nothing, and I don't want to minimize it. But the disruption is happening whether anyone approves of it or not. The question worth asking isn't whether AI music should exist — that question's already been answered by the millions of people using these tools. The question is what to do now.


For trained musicians and serious music enthusiasts, this is where I think the most interesting opportunity sits, and the one most often missed. A surprising number of music experts haven't yet realized what AI music tools could do *for them*. Most of the conversation has been about what AI music does *to* music — the threats, the displacements, the legitimacy questions. Less attention has been paid to a quieter possibility: that the people with the deepest knowledge of how music works could, for the first time, translate that knowledge into actual music with a fraction of the production overhead it used to require.


Someone who has spent decades understanding harmony, voice leading, what makes a hook stick, why certain bridges land and others don't — that person, equipped with AI music tools, can do something genuinely new. Compositional sketches that take an afternoon instead of a week. Real audible output of theoretical ideas that previously had no efficient path to being heard. Faster iteration on creative work that used to require studios and session musicians and weeks of post-production.


And here's what I think gets missed: the gap between bad AI music and good AI music is mostly bridged by the kinds of things experts know. Knowing why a particular chord change feels earned. Hearing that a melody is one note away from working. Understanding what an arrangement needs to give a section emotional weight. These contributions, applied as guidance to AI generation, are exactly what turns generic output into music that actually moves people. Experts aren't obsolete in this new world. They're disproportionately useful in it.


I came to AI music as a music lover, not a musician. My musical knowledge is broad rather than deep — I love music spanning Baroque through Romantic through Motown through glam rock through synth-pop, but I'm not credentialed in any of it. I came at AI music outputs the way the anti-snob comes at wine: if it tastes good, I drink it. I don't need a sommelier to tell me whether what I'm hearing is allowed.


What I noticed, though, was that the gap between bad AI music and good AI music is enormous, and most of that gap is prompt engineering. The same generator can produce something that sounds like generic AI slop or something that sounds like an actual song, and the difference often comes down to how carefully the prompt was constructed. Most casual users haven't learned this. They generate, they're disappointed, they assume the technology can't do better, and they leave.
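To make the "careful construction" point concrete, here is a minimal sketch of the difference between an ad hoc prompt and one assembled from explicit, labeled components. This is purely illustrative: the field names and style tags are my own, not an official Suno schema or the Spectrum product's actual internals.

```python
# Hypothetical sketch: building a structured style prompt from explicit,
# labeled components rather than a vague one-liner. Field names here are
# illustrative assumptions, not a documented Suno format.

def build_style_prompt(genre="", era="", instrumentation=None, mood="", production=""):
    """Join labeled components into a single semicolon-separated style prompt,
    dropping any fields the user left empty."""
    parts = {
        "genre": genre,
        "era": era,
        "instrumentation": ", ".join(instrumentation or []),
        "mood": mood,
        "production": production,
    }
    return "; ".join(f"{key}: {value}" for key, value in parts.items() if value)

# The casual attempt most users start (and stop) with:
vague = "a sad pop song"

# The same intent, spelled out component by component:
structured = build_style_prompt(
    genre="synth-pop",
    era="early 1980s",
    instrumentation=["analog polysynth pads", "gated reverb drums", "fretless bass"],
    mood="melancholy but danceable",
    production="warm tape saturation, wide stereo chorus",
)
print(structured)
```

The point isn't the code, which is trivial; it's the discipline. Forcing every musical decision into a named slot surfaces the choices a vague prompt leaves to the generator's defaults, and defaults are where generic output comes from.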


That's the gap I've been trying to close. SongSyntax is a brand I've built around the idea that ordinary music lovers deserve tools that help them get the best out of AI music platforms. The first product, Spectrum, is a precision prompt builder for Suno — it pairs a curated database of musical references with structured output discipline so that someone with no production training can generate prompts that consistently produce music that sounds like what they wanted to make. It's the kind of tool that should exist for the people who weren't part of the music-making world before and are showing up now.


Music was always supposed to be for everyone. We forgot that for a while. We get to remember now — and the people who know the most about how music works have a real opportunity to be part of that remembering, on terms that actually use what they know.


If any of this resonates and you'd like to learn more — as a user, a contributor, or just someone curious about where this is going — please get in touch. I'd be glad to hear from you.


---


Simon is the founder of SongSyntax, a brand of Mosswood HR Solutions LLC. SongSyntax builds curated prompt-engineering tools for AI music creators. The first tool, Spectrum, is now available.

[Image: Man conducting an orchestra on a computer screen in a home office.]

Copyright © 2026 SongSyntax - All Rights Reserved.
