
A marathon through the AiMCharts catalog — the gems, the garbage, the uncanny valley moments, and the three songs that made me feel something I wasn't ready for.
I told myself I'd be objective. Clinical, even. One hundred AI-generated songs, back to back, rated on a 10-point scale, notes taken in a spreadsheet. A controlled experiment. Music criticism as data collection.
By hour six, the spreadsheet was abandoned. By hour twelve, I was lying on my floor with the lights off, letting a song called "Velvet Meridian" by an artist I'm 90% sure doesn't exist wash over me like it was the most important piece of music ever created.
It wasn't. But at that point, I couldn't tell anymore. That's the whole story.
Let me be honest about the first 40 songs: most of them are bad. Not charmingly bad. Not interesting-bad. Just... empty. Competent chord progressions, serviceable melodies, lyrics that sound like they were generated by asking an AI to "write a song about love and loss" with zero follow-up.
I counted 11 songs in the first batch that used the phrase "dancing in the moonlight" or a close variant. Seven had a bridge that started with "but then I realized." Four were acoustically indistinguishable from each other.
This is the AI music that critics point to when they say the whole thing is a waste of time. And they're not wrong — about this specific tier. It's content. It fills playlists. It racks up streams from people who aren't really listening. It exists because it can, not because it should.
Rating average for songs 1-40: 3.2 out of 10.
Somewhere around song 45, things got weird. Not bad-weird. Disorienting-weird.
I hit a run of tracks that were technically impressive — clean production, interesting arrangements, vocals that sat perfectly in the mix. But something was off. A phrasing choice no human singer would make. A lyrical transition that felt logically correct but emotionally wrong. A chorus that peaked at exactly the right moment but didn't earn it.
This is the uncanny valley of AI music. It's close enough to pass on first listen. It's wrong enough to haunt you on the third. Your brain knows something is off but can't articulate what. It's like talking to someone who smiles at all the right times but never blinks.
I started second-guessing myself. Was that vocal run too perfect? Was that drum fill too precisely placed? I caught myself rating a song lower not because it sounded bad, but because it sounded too good in a way that made me suspicious.
That's when I realized: my standards had shifted. I wasn't evaluating music anymore. I was evaluating authenticity. And I wasn't qualified to tell the difference.
Song 67 stopped me cold.
It was a lo-fi R&B track — nothing special on paper. Simple piano loop, understated vocal, a lyric about waiting for a phone call that never comes. But the vocal performance had this crack in it, this micro-hesitation before the second verse, that felt so human I paused the track and looked up the artist.
AI-generated. Suno v5. Released three weeks ago.
I played it again. The crack was still there. It still worked. I rated it an 8.
Then I spiraled: Was I moved by the music, or by the illusion of vulnerability? If an AI produces a sound that mimics emotional fragility, and I respond to it emotionally, is my response real? Does it matter?
I don't have an answer. I'm not sure anyone does.
Between songs 60 and 85, I found five more tracks that genuinely surprised me. A gospel-influenced piece with harmonies that gave me chills. An ambient track that built tension better than most film scores I've heard this year. A punk song — an actual AI punk song — that was sloppy and loud and kind of perfect.
Rating average for songs 60-85: 6.1 out of 10.
By the final stretch, my ears were cooked. Everything blurred. I couldn't tell if song 94 was brilliant or if I'd simply lost all critical faculty. My notes devolved from structured observations to things like "this one has a flute?" and "why am I crying."
But here's what the data showed when I compiled it the next morning:
A 3% keeper rate. Out of 100 songs, three earned a spot in my actual rotation. For context, if I listened to 100 random songs on Spotify's Discover Weekly, I'd probably keep 4 or 5. The gap is narrower than you might think.
The conversation about AI music is stuck in a binary: it's either a threat to human creativity or it's a revolutionary tool. After 24 hours and 100 songs, I think both positions are lazy.
Most AI music is forgettable. That's not a condemnation — most human music is forgettable too. The difference is volume. An AI can produce forgettable music at a rate no human can match, which means the ratio of noise to signal is catastrophically worse.
But the signal exists. It's real. Song 67 was real. The gospel harmonies were real. The stupid punk song was real. Not because a human made them — but because they did something to a human who heard them.
The question isn't whether AI can make good music. After 100 songs, I can tell you definitively: it can. Rarely. Inconsistently. Surrounded by mountains of mediocrity. But it can.
The question is whether we'll build systems that help people find the 3% — or whether we'll let the other 97% drown everything in noise.
That's not a music problem. That's a discovery problem. And right now, nobody's solving it well enough.
Written by Editor