Spotify: Identify new songs

Role: Concept, Research, UX, UI, Testing
Platforms: Mobile
Industry: Entertainment
Tools: Figma
Duration: 6 weeks
Background

Music is an essential part of everyday life. Millions of people worldwide use various music apps to listen to music at home, in the office, at the gym, or anywhere else. But sometimes someone walks into a store and hears a new song they like but know nothing about. How does that person go about identifying it?

By integrating the song identification feature into Spotify, users will have access to a more rounded listening experience without needing to leave the platform. They can identify a song and immediately add it to a playlist.

What I did
    1. User research - understanding how people identify unknown songs and how they then integrate these into their existing playlists.
    2. Competitive research - I analyzed how top competitors such as Shazam, SoundHound, and Genius perform and what they offer their users.
    3. App navigation - visualizing how to efficiently integrate a new feature into an existing product, Spotify, by creating user flows and task flows.
    4. Wireframing - showcasing the new feature flow and how it integrates into the app through low-, mid-, and high-fidelity wireframes.
    5. Interactive prototype - I created an interactive prototype for the new feature flow.
    6. Usability testing - understanding how useful and easy to navigate the new feature is and iterating based on feedback.
Project goals
  This project aims to offer a unified listening experience to Spotify users by:
    • Streamlining the process of identifying unknown songs
    • Diversifying playlists
    • Helping users discover new artists
  At a business level, the new feature would:
    • Strengthen Spotify's competitive position against Apple, which acquired Shazam - a service with 450M active users and 20M song identifications per day
    • Increase engagement among existing users
    • Grow Spotify's user base
  Technical considerations
    • Use the existing song database and an algorithm that distils samples of a song into audio fingerprints and matches them against the fingerprints of known songs, taking into account their timing relative to each other within a song (see the sketch after this list)
    • Ongoing work on improving identification accuracy in loud public environments - identifying error scenarios and designing specific solutions that address them
    • User feedback is essential - it will help the product evolve and be tailored to users' needs
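
The matching step could work roughly like the peak-pair ("constellation") approach popularised by Shazam: hash pairs of spectrogram peaks, then count how many query hashes line up with a known track at a consistent time offset. The sketch below only illustrates that offset-voting idea, assuming fingerprints have already been extracted; the hash format, index layout, and threshold are hypothetical, not a description of Spotify's actual systems.

```python
from collections import Counter

# Hypothetical fingerprint index: hash -> list of (track_id, time of that hash
# within the known track, in seconds). Built offline from the song database.
FingerprintIndex = dict[int, list[tuple[str, float]]]


def best_match(query_fingerprints: list[tuple[int, float]],
               index: FingerprintIndex,
               min_votes: int = 5) -> str | None:
    """Return the track whose known fingerprints align with the query at a
    consistent relative time offset, or None if no track gathers enough
    aligned matches to be convincing."""
    votes: Counter = Counter()
    for query_hash, query_time in query_fingerprints:
        for track_id, track_time in index.get(query_hash, []):
            # Quantise the offset so small timing jitter still lands in one bin.
            offset_bin = round(track_time - query_time)
            votes[(track_id, offset_bin)] += 1

    if not votes:
        return None
    (track_id, _), count = votes.most_common(1)[0]
    return track_id if count >= min_votes else None
```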

The problem

Spotify users must leave Spotify when they hear an unknown song, use another platform to identify it, and, if the identification is successful, return to Spotify and manually add the song to a playlist.

Research

I conducted comparative and competitive research, as well as user interviews, to understand how people identify new songs, the challenges they face, and how they integrate these new finds into their lives. The stories I collected became a strong foundation for the insights below.

Interviews

Method: Zoom calls

These were some of the questions asked during the interviews:

    • When do you listen to Spotify and how often?
    • What is your favourite Spotify feature?
    • If you could choose the next feature that Spotify releases what would that be?
    • Imagine walking into a store and hearing a song that catches your attention. You like it but don’t know anything about it - title or artist. How do you find out the name of the song or artist?
Insights

Below you can find some of the key insights uncovered during the interviews:

Defining the problem

Help music lovers identify unknown songs through a streamlined process, increasing user engagement and the number of new users joining Spotify. This involves building a song detection feature that improves the user experience on Spotify.

  • How might we enhance the listening experience to provide accurate song detection that allows users to discover new artists, diversify playlists and increase time spent on the platform?
The solution

Adding a song detection feature to Spotify means users no longer have to leave the app when they hear an unknown song they like and want to identify. They can add it directly to their playlists and integrate it into their day.

User flow

Creating a user flow not only helped me understand the exact steps of a user's journey when performing a task like identifying a song, but it also pushed me to think about negative outcomes.

What kind of errors could users get? What could cause these errors and how can we mitigate this risk? If they do come up, how do we ensure users get proper feedback and alternative pathways?

Initial design

With the above goals as the main value to deliver to users, I created the user flow starting with the home screen, moving through identifying a song, and ending with the final screen: adding the new song to a playlist. I also created a secondary flow that targets potential errors, showcasing what those errors could be and how they would appear to users.

Once I understood the essential design elements, I incorporated them into detailed wireframes and iterated through multiple design versions in Figma based on feedback collected through testing.

Final design

The new feature comprises 29 screens in total: the main flow (identifying a song and adding it to a playlist) features 11 screens; the remaining screens display the different error-flow variants.

Flow 1: Identify a song and add it to playlist

This new feature uses Spotify’s UI and takes the user through a straightforward song identification process. The feature icon is located in the navigation bar at the bottom of the home screen. Upon tapping the icon, the user can start identifying a new song. 

The first screen displays a simple UI with one big button at the center and no other distractions. Depending on how close the user is to the sound source and how long the search takes, the app shows different screens during the search process, as sketched below. This constant feedback keeps users informed and prevents them from closing the app.
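
One lightweight way to realise that constant feedback, shown here purely as an illustration (the timings and copy are placeholders, not final UI text), is to switch the status message based on how long the search has been running:

```python
def search_status_copy(elapsed_seconds: float) -> str:
    """Choose the status copy shown while identification is still running.
    Thresholds and wording are illustrative placeholders only."""
    if elapsed_seconds < 3:
        return "Listening..."
    if elapsed_seconds < 8:
        return "Still listening - try moving closer to the music"
    return "Almost there - hold your phone steady for a few more seconds"
```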

Once the song is identified, the user sees the song and artist details, along with the ability to play the music, add it to favorites, or add it to a separate playlist. If the app shows the wrong song, the user can report this error by tapping the “Report” button on the same screen.

Flow 2: Error - no match found

If the app cannot find a match, it displays an error screen. The wording on the search screens already hints at a possible negative outcome, and the final screen confirms it. To keep the user from dropping off, I added a Try Again button that reruns the search.

Flow 3: Error - no music detected

During the interviews, people often said that one of the most common issues they encounter is that the app they use cannot detect music. This usually happens when they're out and about, and there's a loud noise in the background. 

To mitigate this risk, improving detection accuracy in noisy environments is an ongoing technical effort for this project. Such errors can still occur, however, and when they do, the user is shown a screen at the end of the search that explains the error and tells them what to do next.

Flow 4: Error - identifying potential variants

If the app cannot offer a definitive match, it can show users several likely variants to choose from. The probability of this happening should be low, but it needs to be covered nonetheless.
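
Taken together, flows 1-4 describe the four possible outcomes of a single identification attempt. Purely as a sketch of that design logic (the type and field names below are mine, not Spotify's), the outcomes can be modelled as a small tagged result type the UI switches on:

```python
from dataclasses import dataclass


@dataclass
class Song:
    title: str
    artist: str


@dataclass
class MatchFound:            # Flow 1: show details; play, save, or report the song
    song: Song


@dataclass
class NoMatchFound:          # Flow 2: explain the miss and offer "Try Again"
    pass


@dataclass
class NoMusicDetected:       # Flow 3: likely background noise; suggest retrying closer
    pass


@dataclass
class MultipleCandidates:    # Flow 4: let the user choose between likely variants
    candidates: list[Song]


# Each search resolves to exactly one of these, and each maps to one flow's screens.
IdentificationResult = MatchFound | NoMatchFound | NoMusicDetected | MultipleCandidates
```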

Usability testing

To understand how valuable potential users will find this feature, I asked testers to perform the following tasks: 

  1. Identify a new song.
  2. Add it to your playlist.
  3. Share the song with someone.

I chose these particular screens for testing because they are central to illustrating how the new feature works and how it integrates into the platform. After receiving feedback on all my designs, I iterated based on the usability testing findings.

1. Text feedback

While searching for the song's name, the feature initially showed multiple feedback screens meant to keep the user from leaving the platform. However, the wording on some of them was discouraging, creating confusion about whether the task had been completed.

To minimize confusion, I moved some of these screens to the error-prone flows, making the initial flow lighter and easier to understand.

2. Main feature button

“I wasn’t sure which button to tap to identify a song.”

One user needed clarification about which button to press when identifying a new song. The confusion happened because the initial indication, "Tap to detect new song," was placed on the top left side of the screen near the Go Back icon. The user thought they needed to go back to detect the new song. To minimize confusion, I moved the text lower on the screen, closer to the large detection button.

3. Identical imagery

“For a second there, I thought I was on the album page, not the song page.”

Using the same image for the album and the song led testers to believe they were looking at the artist's album rather than the song they might be interested in. I changed the image to remove any potential confusion. Additionally, I removed the Popular songs section and replaced it with an essential section where users can report incorrect search results.

Interactive Prototype

Below, you can try out the final version of this project's interactive prototype.

Conclusion

Integrating a feature into an existing product is a complex process. It requires thinking about where the feature will be placed, what it will look like, how it will be introduced to users, and many other questions. While I didn't have time to work on introducing the feature through Spotify's onboarding process, I took the time to assess all of the other questions properly.

Following existing product guidelines ensures that the new feature flows seamlessly. Taking the time to outline the end goal while staying aligned with user needs is equally important. 

However, my most important lesson was recognizing when less is more. I was often tempted to add as many things to my screens as possible, but this would only overwhelm users, who would get distracted by other elements and end up not completing their tasks. As a result, paring down what each screen offers was the right decision.

From a technical point of view, this type of feature requires a lot of work. Reaching the desired identification accuracy, regardless of how much background noise there is, and maintaining that capability over time requires plenty of resources. This is why it's imperative to stay in close contact with the engineering side at all times and design within the agreed parameters.

What should be improved?

  • I suggest improving the detection button by animating it while it searches for songs; combined with ongoing text feedback, this would keep users even more engaged.
Next steps
  • Coming up with a strategy to introduce the new feature to users following Spotify’s guidelines.
  • Continuously improving the song detection accuracy from a technical point of view.
  • Collecting further feedback from users to understand what needs to be further improved.