
As a musician, I like to build community with fellow artists and grow our audiences together through playlists. I maintain a submission form where musicians can share tracks for playlist consideration. One such submission — the track “All I’m After” by DarkHome — caught me completely off guard.
It sounded like it could have come out of a multimillion-dollar studio, with a female lead vocal worthy of the Billboard Top 10. That vocal, as it turns out, was AI-generated, while the human behind DarkHome handled the other creative aspects of this striking indie anthem.
DarkHome’s submission inspired me to create the Spotify playlist Man + Machine: Collabs with AI.
Naturally, I was curious about how this artist brought it all together, so I had some questions for the human half of this collaboration.
You were an experienced recording artist long before this current wave of AI tools. Can you tell me more about your earlier (all-human) musical work?
I grew up in the New England Emo Punk Rock and Hardcore music scene in the early ’90s. As a kid, attending shows and feeling that energy, I knew I had to pursue music. I started playing guitar, and as a “cursed lefty,” I had to flip my guitars upside down like Hendrix because no stores carried left-handed models back then. As my love for music grew, I kept trying new instruments and eventually became a multi-instrumentalist. Once I had a few “chops,” as the kids say, I began writing songs and played in a few bands that meant a lot to me—Passenger Westbound and High School Sweethearts. In these bands, I played guitar and sang, but in High School Sweethearts, I was a co-vocalist and loved every show we played. During these times, I developed a passion for creating and writing my own music, and I realized that music was the ultimate release. I’ve been hooked ever since.
Is that earlier music available for streaming? If so, where can listeners find it?
Unfortunately, no, we didn’t release anything on streaming platforms, but that’s a great idea. At the very least, we could upload the songs to YouTube. When I do, I’ll let you know so we can provide an update to your followers!
What inspired you to involve AI in the recording process for “All I’m After”?
Great question. I decided to experiment with AI because some of the songs I write evoke a certain feeling in my mind, like my other song “One Last Time.” I experimented with my own vocals but didn’t think they fit the atmosphere I was trying to create. So I tried a female vocalist and immediately knew it was the right choice. I had the same feeling with “All I’m After.” Personally, that’s how my ear works: I’m a self-taught musician, I play by ear, and I trust my gut when it comes to what I like. When everything falls into place, I stop tinkering and follow that feeling until the song is complete. AI vocals have removed limitations for me and opened up more possibilities for how I can create.
Which AI music generation platform(s) did you use for this project?
I use Suno and Suno Studio. These products have many useful features that help you shape a scratch track or demo and inspire you with musical ideas you might not have considered. It’s like having a co-producer at a fraction of the cost. Traditional production tools such as loop libraries and samples already aid song creation with a click, and the song parts you can use in Suno are no different. AI tools for music production are a great way for anyone to create. While I personally don’t just prompt and post, I don’t judge those who do; it’s a form of expression. For people who can’t play an instrument, whether due to lack of ability or a disability, AI allows them to create something meaningful with whatever human input they can provide. There are far worse things people could be doing than creating music they love.
Did you use a DAW (Digital Audio Workstation) like Pro Tools, Logic, or another platform to integrate the human and AI elements?
Yes, I use PreSonus Studio One Pro 7 as my DAW and EZdrummer 3 by Toontrack for drums. I have a small home studio where I do all my recording and play my instruments, mainly electric and acoustic guitar, bass, piano/keyboard, and vocals. This setup serves as my launchpad, helping the AI understand the cadence and vocal tone I’m aiming for in my music. Studio One also includes tons of effects, such as drones and pads, for creating atmospheric sounds.
How was the creative workload divided between you and the AI tools on this track?
I like to record a scratch track — vocals and acoustic guitar — into Studio One. Then I upload that into Suno, which creates a demo version with whatever instruments I’d like to experiment with. Sometimes I get a cool synth, piano lead, or even drums added to my track. This inspires me because my song starts to take shape, and I can hear different possibilities for where it might go. If I find parts I like, I’ll split the stems (individual instrument tracks) in Suno and export them to Studio One. There, I use some stems as references and provide human input to create the final track. For example, I’ll listen to the bass, add my own style, and re-record the part myself, doing the same for guitar and drums. Once I reach a sound I like, I add compression and EQ, mix down, and prepare to publish. Before releasing, I add my mixed song back into Suno, provide my own lyrics — which I take great pride in writing — and specify the vocal style I want for the track, for instance, raspy female vocals with heartache you can feel. Suno also lets you use “Personas,” so the same vocalist can carry across tracks. This brings the song to life and sounds incredibly realistic. Again, some people who use AI just prompt and post, but I take great pride in using the inspiration (and vocals) AI provides while still putting in as much human input as possible to call this track my own work.
What surprised you the most about collaborating with AI?
How much fun it is. As a musician, you can get stuck in ruts: you have chords or lyrics and know you have something, but you’re not sure where to take it. Without spending a ton of money (and I know some people won’t like this), you can get an unbelievable co-producer that opens your mind and helps shape a song quickly, without the time it would take to work with a producer. As I mentioned earlier, professional musicians already use many tools to move from point A to Z quickly; autotune and other AI-assisted tools are everywhere. What I think has the industry a little rattled is that these tools are now in the hands of real musicians, too. We can create something amazing without a contract, a label demanding a certain sound, or a big-budget studio, and still make something that impacts listeners and keeps them coming back and sharing your music with others.
How have listeners reacted so far to your human–AI collaborations?
My first song, “My Anchor,” was released on October 16th, and so far I’ve received a ton of support from my old bandmates, family (of course), and friends. My other song, “One Last Time,” got just under 3,000 views on TikTok, which was amazing. More songs are ready for release, and I’m now on all the streaming and social media platforms. It’s been under a month, and I’m lucky and thankful to be interviewed by you, Jason. I have a lot more to say and create, and I feel like this is just the beginning. The songs I create are resonating, and that’s the best feeling in the world.
Thanks again for sharing your perspective on this new creative frontier. I think it’s important for listeners — and fellow musicians — to understand both the potential and the boundaries of AI-assisted music, straight from the artists experimenting with it.
DarkHome’s “All I’m After” is featured on my Spotify playlist “Man + Machine: Collabs with AI”:
▶️ Stream, save and share the playlist here.
You can find “All I'm After” on the streaming platform of your choice at: