Hi
I'm Manu. I listen to a huge range of music: Kishore Kumar to Kendrick, Jimi Hendrix to Travis to Seedhe Maut. I play guitar, to the best of my ability I'll say. I DJ at my friends' house parties, messing up transitions and song switches most of the time, but I enjoy every second of it.
I am not a musician. I'm a software engineer who has spent the past few years building products in crypto, finance, and now AI. But music has been there through all of it, not as a hobby but as something I'm always in.
And right now, something massive is happening to it. AI is changing how music is made, who makes it, and how much of it exists. That part is exciting. What concerns me is what might get lost in the process. Not that AI makes bad music. But that it makes average music at infinite scale, slowly drowning out the weird, specific, human stuff that actually matters.
Before any tool, before any plugin or model or workflow, there was intent. An artist hearing something in their head and wanting to make it real. That intent, that fingerprint, is what I think we need to talk about protecting.
the fingerprint test
Okay, so for anyone who has ever been deep in a fandom, or just listened to one artist way more than the rest, try this. Close your eyes. Imagine someone plays you a song you've never heard from your favourite artist. There's a very high chance that within seconds, you know who made it. How?
That instant recognition, that's the fingerprint. It's not just talent. It's the sum of a thousand decisions made by a human who has lived a specific life and made choices that shaped their creative decision making over the years.
- The Weeknd: you hear 5 seconds and you know it. The dark, reverb drenched, 80s synth haze. The falsetto floating over something that feels like 2 AM. This isn't a production technique you can checkbox and replicate. It's Abel's inner world made audible. This identity is atmospheric, holistic. It's the vibe architecture, not any single knob or setting.
- Metro Boomin: strip the "Metro Boomin want some more" tag. Within 8 bars, you still know it's Metro. The cinematic darkness. The 808 patterns that feel like a heartbeat in a thriller. The way the beats breathe. Production fingerprints are pattern based, exactly what AI is best at replicating on the surface. But the choices, why this 808 here and silence there, that's taste. Taste isn't a parameter.
- Taylor Swift: every Swiftie can spot a Taylor lyric in the wild. The hyper specific storytelling. The bridge breakdowns that hit like plot twists. She makes the deeply personal feel universal. That's not craft alone, that's a specific emotional vocabulary built from a life actually lived.
Artist identity isn't one thing. It's sonic. It's structural. It's emotional. It's the sum of a thousand tasteful decisions. Any tool that claims to serve artists must treat that identity as sacred, not as a parameter to optimise.
So when AI enters the studio, the question isn't can it make music. It's: can it make music without erasing the soul of the artist who made it?
[Artist identity cards: The Weeknd (atmospheric · sonic), Metro Boomin (rhythmic · structural), Taylor Swift (lyrical · emotional)]
a new dawn, with real weather
The shift is undeniable. And it's not coming. It's here.
the sunrise
60 million people used AI tools to create music in 2024. Let that number sit for a second. That's not a rounding error or a Silicon Valley hype stat. That's 60 million people who opened a tool, gave it some input, and got music back. (Source: IMS Business Report 2025)
A recent LANDR study of 1,200 artists found that 87% have incorporated AI into at least one part of their process. (Source: LANDR, Oct 2025) But here's the part that matters most to me: only 13% use AI to generate a full song. 29% use it for parts: vocals, instrumentals, beats they layer into something they're already building. One respondent put it perfectly: "I use AI as if it was a band of session musicians." (Source: Hypebot)
Artists aren't handing over the wheel. They're using AI to move faster. The session musician who's available at 3 AM when the idea hits. The mixing ear that gives you a solid starting point so you can focus on the creative decisions. That's not replacement. That's democratisation done right.
the weather
But here's the other side.
Around 50,000 fully AI generated tracks are uploaded to Deezer every single day. That's 34% of all daily uploads to the platform. (Source: Deezer, Nov 2025)
In a study of 9,000 listeners across eight countries, 97% couldn't tell the difference between AI generated music and human made music. More than half said that made them uncomfortable. And 70% said they believe fully AI generated music threatens the livelihoods of artists. (Source: Deezer/Ipsos, Nov 2025)
When a team at MIT Media Lab analyzed 10,000 AI generated tracks, they found that over 70% shared nearly identical chord progressions. (Source: via Bensound) That's the homogenisation risk in hard numbers.
And yet, 74% of content creators say they prefer licensing music from identifiable human composers. (Source: SyncVault 2025) The market is already telling us something. Authenticity isn't just a philosophical value. It has economic weight.
The shift is real. The question is: can we build tools that raise the ceiling for every creator without averaging out the very thing that makes each creator unique?
[Stat callout: 97% of listeners can't tell. But 70% are worried. 87% of artists use AI; only 13% want it to make the whole song.]
what figma understood
I keep thinking about a story from a different creative industry. Not music. Design.
In 2016, if you designed interfaces for a living, you used Sketch. There wasn't much debate. Sketch held about 70% of the market. It was fast, elegant, purpose built. By 2022, Figma had flipped that to 90% in its favour. The company went from $700K in revenue in 2017 to over a billion dollars by 2025 and IPO'd at a $70 billion market cap. (Source: UX Tools survey, CNBC)
One of the fastest market share inversions in software history. And it happened not because Figma was a better design tool. It happened because Figma understood something deeper about what designers actually needed. Dylan Field, Figma's co-founder, described his vision as wanting to "do for interface design what Google Docs did for text editing." (Source: TechCrunch, 2015) The revolution wasn't about capability. Sketch was plenty capable. It was about reducing friction. Designers were already great at their jobs. The tools around them were just getting in the way.
Sketch was powerful but siloed. Mac only. File based. Single player. To get from a design to a shipped product, you needed Sketch for the design itself, InVision for prototyping, Zeplin for developer handoff, and Abstract for version control. Four separate tools to do one job. Figma collapsed all four into a single browser based surface. But here's the part that matters most: designers still did the same work. They still drew frames, built components, iterated on layouts. The craft was identical. The friction was gone.
Figma never tried to design for you. It made your process faster, more collaborative, and more fluid. The tool became invisible so the work could be visible. No installs, no file syncing, no version conflicts. You opened a link and your work was just there. And because it was that frictionless, adoption happened from the bottom up. 70% of Figma's enterprise deals originated from individual designers who started on the free tier, not from sales calls or procurement teams. One designer would use it, share a link with their team, and the file itself became the pitch. (Source: How Figma Grows)
I think about this story constantly. Because the parallels to music are hard to ignore.
A DJ making an extended intro doesn't need a new creative process. They need the existing process (find a track, separate the stems, arrange them, mix it down, export) to not require five different apps, $300 paid to a producer, and three weeks of back and forth. The intent is already there. The taste is already there. The tools just need to get out of the way.
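To make that concrete, here's a minimal sketch of that whole loop as one script. It leans on the open source demucs separator and pydub, which are my assumptions, not anyone's product; the track name, tempo, and bar counts are made up for illustration.

```python
# Rough sketch: the "find, separate, arrange, mix, export" loop in one script.
# Assumes the open source demucs separator and pydub (plus ffmpeg) are
# installed. Track name, tempo, and bar counts are hypothetical.
import subprocess
from pathlib import Path

from pydub import AudioSegment

TRACK = "my_track.mp3"  # hypothetical input file

# 1. Separate stems. The demucs CLI writes drums/bass/other/vocals wavs
#    to separated/<model>/<track name>/.
subprocess.run(["demucs", "-n", "htdemucs", TRACK], check=True)
stems = Path("separated") / "htdemucs" / Path(TRACK).stem

drums = AudioSegment.from_file(str(stems / "drums.wav"))
bass = AudioSegment.from_file(str(stems / "bass.wav"))
other = AudioSegment.from_file(str(stems / "other.wav"))
vocals = AudioSegment.from_file(str(stems / "vocals.wav"))

# 2. Arrange: loop the first 8 bars of drums + bass as an extended intro.
#    At 128 BPM a bar is ~1.875 s, so 8 bars is ~15 s (pydub slices in ms).
eight_bars = 15_000
intro = (drums[:eight_bars].overlay(bass[:eight_bars])) * 2  # play the loop twice

# 3. Mix down: intro first, then all four stems layered back together.
full_mix = drums.overlay(bass).overlay(other).overlay(vocals)
extended = intro + full_mix  # "+" concatenates segments in pydub

# 4. Export.
extended.export("my_track_extended_intro.wav", format="wav")
```

That's the entire five-app pipeline in about thirty lines, which is roughly the point.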
Figma didn't make designers obsolete. It made them faster, freer, and more collaborative. That's the blueprint. The question is: can someone do the same for music without killing the soul of the people creating it?
what I believe
I've spent the last few months not just reading about this shift, but sitting with it. Thinking about what kind of tool I'd actually want to exist. And I keep coming back to two things that I believe are non negotiable.
belief 1: the fingerprint must survive
If you hear a song made with my tool and can't tell whose creative vision it carries, I've failed. A Weeknd song made with AI assisted tools should still be unmistakably, irreplaceably Abel's. AI helps with the how. The artist owns the what and the why.
belief 2: AI is an upgrade, not a replacement
All creative decisions stay with the artist. AI handles the tedious stuff: stem separation, mixing starting points, arrangement scaffolding. The taste is always human. Creators already think in drops, builds, intros, energy curves. The tool's job is to speak their language, remove the friction between intent and output, and make it easy to create together. Don't change the process. Just make it better.
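To gesture at what "speaking their language" might look like, here's a purely illustrative data model. Nothing in it is a real API; every name is hypothetical.

```python
# Purely illustrative: an arrangement described in the words a DJ actually
# uses. Every name here is hypothetical, not a real API.
from dataclasses import dataclass
from typing import Literal

Section = Literal["intro", "build", "drop", "breakdown", "outro"]

@dataclass
class Block:
    section: Section  # the creator's vocabulary, not DAW timeline jargon
    bars: int         # length in bars, because that's how creators count
    energy: float     # 0.0 to 1.0, the curve they already hear in their head

# The artist states intent; the tool maps it onto stems and timing.
extended_intro = [
    Block("intro", bars=16, energy=0.2),
    Block("build", bars=8, energy=0.6),
    Block("drop", bars=16, energy=1.0),
]
```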
That's the tool I'm trying to build. I'm calling it Recut.
I'm still researching, talking to artists, and building the first version. I'm not ready to show you the product yet. But I wanted to say all of this out loud because I believe the conversation matters as much as the tool. Maybe more, at this stage.
If any of this resonates, if you're a creator who has been thinking about these same questions, I'd genuinely love to hear from you.
— Manu
