Music is an essential part of everyday life. Millions of people worldwide use music apps to listen at home, in the office, at the gym, or anywhere else. But sometimes, someone walks into a store and hears a new song they like but know nothing about. How does that person go about identifying a new beat?
By integrating a song identification feature into Spotify, users will have access to a more complete listening experience without needing to leave the platform. They can identify a song and immediately add it to a playlist.
Today, Spotify users must leave Spotify when they hear an unknown song, use other platforms to identify it, and, if the identification is successful, return to Spotify and manually add it to their playlist.
I conducted comparative and competitive research and user interviews to understand how people identify new songs, what challenges they face, and how they integrate newly discovered music into their lives. The stories I collected became a strong foundation for the insights below.
Method: Zoom calls
These were some of the questions asked during the interviews:
Below you can find some of the key insights uncovered during the interviews:
Help music lovers identify unknown songs through a streamlined process, increasing user engagement and the number of new users joining Spotify. This involves building a song detection feature that will improve the user experience on Spotify.
Adding a song detection feature to Spotify means users never have to leave the app when they hear an unknown song they like and want to identify. They can add it directly to their playlists and integrate it into their day.
Creating a user flow not only helped me understand the exact steps of the user's journey when performing a task like identifying a song, but it also pushed me to think about negative outcomes.
What kinds of errors could users encounter? What could cause these errors, and how can we mitigate that risk? If they do come up, how do we ensure users get proper feedback and alternative pathways?
To deliver on these goals, I created a user flow that starts with the home screen, moves through identifying a song, and ends with the final screen: adding the new song to a playlist. I also created a secondary flow that targets potential errors, showing what those errors could be and how they would appear to users.
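To make the flows concrete, here is a minimal sketch of how the main path and the error paths could be modeled as explicit states. Everything in it (the SearchState type, the screen names) is an illustrative assumption of mine, not Spotify's actual code or API.

```typescript
// Hypothetical state model for the identification flows; every name here
// is an illustrative assumption, not part of Spotify's codebase.
type SearchState =
  | { kind: "idle" }                             // home screen, feature icon in the nav bar
  | { kind: "listening" }                        // big button tapped, microphone active
  | { kind: "match"; songId: string }            // happy path: show song and artist details
  | { kind: "variants"; candidates: string[] }   // several likely matches to choose from
  | { kind: "noMatch" }                          // search finished without a result
  | { kind: "noMusicDetected" };                 // background noise drowned out the signal

// Each state maps to a dedicated screen, so every error gets its own
// feedback and an alternative pathway instead of a dead end.
function screenFor(state: SearchState): string {
  switch (state.kind) {
    case "idle":            return "HomeScreen";
    case "listening":       return "ListeningScreen";
    case "match":           return "SongDetailsScreen"; // play, favorite, add to playlist, report
    case "variants":        return "PickVariantScreen";
    case "noMatch":         return "NoMatchScreen";     // offers a Try Again button
    case "noMusicDetected": return "NoMusicScreen";     // explains the error and what to do next
  }
}
```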
Once I understood the essential design elements, I incorporated them into detailed wireframes and iterated through multiple design versions in Figma based on feedback collected through testing.
The new feature comprises 29 screens in total: the main flow (identifying a song and adding it to a playlist) features 11 screens; the remaining 18 display the different variants of the error flows.
Flow 1: Identify a song and add it to a playlist
This new feature uses Spotify’s UI and takes the user through a straightforward song identification process. The feature icon is located in the navigation bar at the bottom of the home screen. Upon tapping the icon, the user can start identifying a new song.
The first screen displays a simple UI with a single large button at the center and no other distractions. Depending on how close the user is to the sound source and how quickly the search finishes, the app displays different screens during the search process. This provides constant feedback to users and prevents them from closing the app.
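As a rough illustration of this progressive feedback, the listening screen could rotate through status messages on a timer. The copy and timings below are placeholders I invented for the sketch; the real values would come from the design system and the actual recognition latency.

```typescript
// Placeholder messages and thresholds, invented for illustration.
const listeningFeedback: { afterMs: number; message: string }[] = [
  { afterMs: 0,     message: "Listening..." },
  { afterMs: 4000,  message: "Still listening... try moving closer to the sound" },
  { afterMs: 8000,  message: "Almost there..." },
  { afterMs: 12000, message: "This one is tricky, we may not find a match" }, // primes the no-match screen
];

function messageAt(elapsedMs: number): string {
  // Show the latest message whose threshold has passed, so the user always
  // sees fresh feedback instead of a frozen screen.
  const current = listeningFeedback.filter((f) => f.afterMs <= elapsedMs).pop();
  return current ? current.message : listeningFeedback[0].message;
}
```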
Once the song is identified, the user sees the song and artist details, along with the ability to play the music, add it to favorites, or add it to a separate playlist. If the app shows the wrong song, the user can report this error by tapping the “Report” button on the same screen.
Flow 2: Error - no match found
If the app cannot find a match, it displays an error accordingly. The wording of the search screens gives the user an early indication of a potential negative outcome, and the last screen confirms it. To keep users from dropping off, I added a Try Again button, which can be tapped to rerun the search.
Flow 3: Error - no music detected
During the interviews, people often said that one of the most common issues they encounter is that the app they use cannot detect music. This usually happens when they're out and about, and there's a loud noise in the background.
To mitigate this risk, improving sound detection accuracy is an ongoing technical effort for this project. However, such errors can still occur, and when they do, the user is shown a screen at the end of the search that explains the error and tells them what to do.
Flow 4: Error - identifying potential variants
If the app cannot offer a definitive match, it may show users several variants from which they can choose. The probability of this happening should be low, but it should be covered nonetheless.
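As a rough sketch of how the app could decide between a definitive match, a list of variants, and no match at all, confidence thresholds could gate each outcome. The Candidate shape and the threshold values are invented for illustration (this reuses the SearchState type from the earlier sketch) and would need tuning against real recognition data.

```typescript
// Hypothetical recognition result; confidence is assumed to be in [0, 1].
interface Candidate { songId: string; confidence: number }

function resolveResult(candidates: Candidate[]): SearchState {
  // "noMusicDetected" is decided earlier, from the audio signal level,
  // before any candidates exist; here we only rank what came back.
  if (candidates.length === 0) return { kind: "noMatch" };
  const sorted = [...candidates].sort((a, b) => b.confidence - a.confidence);
  const best = sorted[0];
  // A clear winner goes straight to the song details screen.
  if (best.confidence >= 0.9) return { kind: "match", songId: best.songId };
  // Several plausible candidates: let the user pick instead of guessing.
  const plausible = sorted.filter((c) => c.confidence >= 0.5);
  if (plausible.length > 1) {
    return { kind: "variants", candidates: plausible.map((c) => c.songId) };
  }
  return plausible.length === 1
    ? { kind: "match", songId: plausible[0].songId }
    : { kind: "noMatch" };
}
```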
To understand how valuable potential users will find this feature, I asked testers to perform the following tasks:
I chose these particular screens for testing because they are central to illustrating how the new feature works and how it integrates into the platform. After receiving feedback on all my designs, I iterated based on the user testing findings.
1. Text feedback
While searching for the song's name, the feature initially showed multiple feedback screens meant to keep the user from leaving the platform. However, the wording in some of them sounded discouraging, creating confusion as to whether the task had been completed.
To minimize confusion, I moved some of these screens into the error flows, making the initial flow lighter and easier to understand.
2. Main feature button
I wasn’t sure which button to tap to identify a song.
One user needed clarification about which button to press when identifying a new song. The confusion happened because the initial indication, "Tap to detect new song," was placed on the top left side of the screen near the Go Back icon. The user thought they needed to go back to detect the new song. To minimize confusion, I moved the text lower on the screen, closer to the large detection button.
3. Identical imagery
For a second there, I thought I was on the album page, not the song page.
Using the same image for the album and the song led testers to believe that they were looking at the artist's album rather than the song they might be interested in. I changed the image to remove any potential confusion. Additionally, I removed the Popular songs section and replaced it with an essential section in which users can report incorrect search results.
Below, you can try out the final version of this project's interactive prototype.
Integrating a feature into an existing product is a complex process. It requires thinking about where the feature will be placed, what it will look like, how it will be introduced to users, and many other questions. While I didn’t have time to work on introducing the feature through Spotify’s onboarding process, I took the time to assess all of the other questions properly.
Following existing product guidelines ensures that the new feature flows seamlessly. Taking the time to outline the end goal while staying aligned with user needs is equally important.
However, my most important lesson was recognizing when less is more. I was often tempted to add as many elements to my screens as possible, but this would only overwhelm users, who would end up distracted and unable to complete their tasks. Minimizing the content on each screen was the right decision.
From a technical point of view, this type of feature requires a lot of work. Reaching the desired recognition accuracy, regardless of how much background noise there is, and maintaining that capability over time requires plenty of resources. This is why it’s imperative to stay in close contact with the engineering team at all times and design within the agreed parameters.
What should be improved?