From the Proto.io Blog
Proto.io lets you easily create fully interactive audio-enabled mobile app prototypes that you can download to your devices and test on the go. The best part? No coding required. Check out some inspirational works over at Proto.io Spaces to see what you can do with our powerful tool. Already convinced? Start your free trial today.
Remember back when the Web was essentially a collection of glittering ‘About me’ sites built by 6th graders using Geocities and AOL telling you, once again, that you’ve got mail? Yes, just a couple of decades ago, going online was slow, shiny and loud. Very loud.
Source: 1990’s Gifs
Compared to those times, the online experience these days is relatively quiet. We’ve come a long way from unabashed MIDI tunes blaring out abruptly whenever we opened a new website. Unexpected and uninvited sound is now considered annoying for users.
Nonetheless, there are instances when designing with audio is deemed necessary, such as in mobile games. Audio can also provide useful feedback: a fitness app, for example, might use sound to quickly confirm an action to the user. Considering the UX of sound and the value it can provide, we look at the arguments for, and possibilities of, designing with audio in the mobile era.
The UX Of Sound
Have you ever hit ‘Play’ on Spotify only to realise that you have yet to successfully plug in your headphones and everyone on the subway is now looking your way with a death stare?
Painfully awkward, isn’t it?
It’s difficult to advocate designing with audio without first understanding the problems that many users face when encountering sound in their web and mobile experiences. Disruption and annoyance, like in the above subway scenario, are some of the primary negative outcomes we commonly associate with audio content. This is because sound is difficult to dismiss. It takes users out of their current experiences and demands immediate attention. For this reason alone, sound isn’t a popular option when designing user interfaces, given what we know about mobile user behaviour.
Our typical user is already over-distracted, probably on multiple devices, and has more than enough to deal with. Furthermore, users are now connected everywhere. They could be listening to music, driving, or simply hanging out in the middle of nowhere. An unexpected introduction of sound will definitely disrupt, and could very well ruin the moment for them.
But sound can also be a very powerful and useful resource as a feedback mechanism, an informant or a humanising factor when applied appropriately. For instance, if your iPhone is about to blow up, you would definitely prefer it to warn you with some kind of alarming sound rather than just flash a red alert screen while it sits snugly in your pocket.
The design industry has always focused primarily on the visual experience, but we tend to forget that the auditory experience is just as important to the user experience. After all, we do not experience the world merely by seeing. Designing with audio potentially makes for a richer, more complete human experience.
Our world is already a noisy one, if we think about it. We tend not to notice because our minds have become accustomed to the sounds we’re now familiar with. The sound your computer makes every time you start it up. The tapping keys you hear whenever your co-worker, who doesn’t believe in ‘Silent Mode’, sends a text message.
But many of these sounds were new, and I’m sure considered quite annoying, only some years ago. They probably still are to many users. Just as we got used to seeing green confirmation buttons and red cancellation buttons, we got used to new beeps and tones. And we will get used to new sounds, provided they are meaningful, applied in the right context, and add value to our interaction with modern devices.
Going Beyond The GUI
I’m not the worst driver in the world but every time I park, I bless technology for the sensor that beeps at an increasing rate in relation to the proximity of my vehicle to the curb. Without it, I would have to crane my neck to get a better view and use my instincts to estimate the distance to the curb.
Cars can teach us a couple of basic things about designing with audio for better user experiences. The first is that user experience design should not be limited to the usual graphical user interface (GUI). The car has no means to visually inform me how far I can still go without hitting the curb. The second is that sound tends to be very useful when we go beyond the GUI. Without the audio feedback, my car might end up in the garage more often than I would like.
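The parking sensor’s behaviour can be sketched as a simple mapping from measured distance to the pause between beeps. This is a minimal illustration of the idea; the function name, sensing range and timing values below are assumptions for the sketch, not any real sensor’s specification:

```python
def beep_interval(distance_cm, max_range_cm=150.0,
                  slowest=1.0, fastest=0.1):
    """Map the distance to an obstacle to the pause (in seconds)
    between beeps: the closer the obstacle, the faster the beeping."""
    if distance_cm >= max_range_cm:
        return None  # obstacle out of range: stay silent
    # Linear interpolation between the slowest and fastest pause
    ratio = max(distance_cm, 0) / max_range_cm
    return fastest + (slowest - fastest) * ratio
```

Far from the curb, the sensor is silent; at half range it beeps at a moderate pace, and the interval shrinks towards a rapid, continuous-sounding beep as the distance approaches zero.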
Given that we’re moving into a world of less or no UI, it’s important for experience designers to consider designing with audio. There is great potential for sound in designing for wearables and for the Internet of Things. Given the limited GUI in wearables and other connected devices such as the Nest thermostat, it might not be the best option to rely on visual feedback as the main feedback mechanism.
One industry wearables are moving into is healthcare. The market offers a wide range of devices and apps that track every detail of our health status. Sound can be valuable in reminding users to perform daily tasks like recording their blood pressure or temperature.
Of course, the technology is currently limited to tracking and it might not seem like such a big deal. But compared to paper-based medical history files that we still use today, it is a huge leap forward. As the technology advances, the potential for sound becomes even greater.
Here’s an example. What if a tracking device could predict when a person is about to go into cardiac arrest, call an ambulance on their behalf, alert bystanders to the emergency and recite instructions on how to perform CPR? That would be groundbreaking and such an interface design would require the insights of experience designers who are adept at designing with audio.
Voice Interfaces: Content And Personality
We don’t have to wait for the future to start designing with audio. Many devices with no GUI exist today. When my Bluetooth speaker runs low on battery, it interrupts B.B. King to politely remind me to charge it. Perhaps it could be done in a less disruptive manner, but it is surely less annoying than suddenly having a dead speaker just because I wasn’t paying enough attention to notice the little flashing red light at the back.
Designing with audio therefore brings up two important considerations. What kind of sound should we choose and how should we present it?
Selecting Sounds and Mapping Meanings
Let’s return to the example of the Bluetooth speaker. If it had warned of the low battery with a beeping tone instead of a spoken announcement, would it be more ambiguous? No doubt. I would be on Google in an instant, trying to describe the sound in a search query to figure out what it means.
How do we then decide what sounds to use? Different types of sounds evoke different emotional responses and can be symbolically mapped to shared meanings that are easily identified. Called earcons, they are like visual icons: they acquire meaning through time and exposure. The creation of sounds is ultimately the work of audio designers, who consider factors such as pitch, volume, and direction. In designing with audio, what experience designers need to consider is the user’s reaction to and understanding of these sounds, and how sound can therefore add value for the user.
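One way to keep earcons consistent across a product is to maintain a single palette that maps interface events to sound descriptors, so the same event always triggers the same cue. A minimal sketch, with hypothetical event names and sound parameters:

```python
# A hypothetical earcon palette: each interface event maps to one
# short, symbolically distinct sound. The names and parameters are
# illustrative, not taken from any real product.
EARCONS = {
    "success":     {"pitch_hz": 880, "pattern": "single-rising"},
    "warning":     {"pitch_hz": 440, "pattern": "double-beep"},
    "error":       {"pitch_hz": 220, "pattern": "low-buzz"},
    "battery_low": {"pitch_hz": 330, "pattern": "triple-beep"},
}

def earcon_for(event, default="warning"):
    """Resolve an event to its earcon, falling back to a generic cue
    so an unhandled event is never silently dropped."""
    return EARCONS.get(event, EARCONS[default])
```

Centralising the mapping like this is what lets users build the learned associations described above: a “double-beep” can come to mean “warning” only if the product never reuses it for anything else.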
Products With Personality
My Bluetooth speaker announces its battery status in a rather monotonous voice. It’s cold, professional, and informative. No surprise there. That is probably the standard and risk-free way to present a product. If it had spoken to me in a low, husky male voice with flirtatious overtones, it would be downright creepy.
Designing with audio allows for greater personalisation of a product. Not only does it give the product a human touch, it can convey a brand’s personality in ways that visual cues are not able to. Of course, personality is not just determined by how certain words are said but also by what is actually said. It is the interplay of content and delivery that constitutes the personality behind the voice.
Looking at real-world examples, the four big players in the tech industry have each developed their own voice assistants. Apple’s Siri, Google Now, Amazon’s Alexa and Microsoft’s Cortana are getting better at understanding voice commands and performing tasks at the user’s request. But software capability is only one part of the equation. These companies are also investing heavily in building more personality into their voice assistants.
The best example would be Siri’s witty responses to users’ questions. As frustrated as we might be with Siri for hearing ‘pizza’ instead of ‘Pisa’, its in-built sense of humour makes it a more endearing entity to us. Microsoft, too, is developing Cortana to better simulate a human personality.
A human personality is a major improvement from the cold professional bot. But ‘human’ is still rather generic. Not all apps have such a broad range of functions as voice assistants do. Some product personalities can be designed to better connect with real users and to engage them through voice interaction.
For instance, if I had a medical app that informs me about my health status, I would expect a calm and reassuring personality that speaks to me in a gentle but firm tone. But if the app had a Dr House personality dishing out caustic remarks every so often, it wouldn’t last very long on my device’s home screen.
Knowing how to present audio content the right way to specific groups of users requires an intimate understanding of the target audience, their expectations and their needs. It will be up to experience designers to apply their insights when designing with audio in order to shape a product’s personality and to ensure consistency throughout the user’s interactive journey with a product.
Audio And Accessibility
We have previously established that accessible design is good design. Putting the users first makes you a good designer. Choosing to not exclude the group of users who have difficulties accessing your product makes you an even better designer.
The problem of focusing predominantly on the visual experience is that users who have reading disabilities such as dyslexia or visual impairment are inadvertently left out. This is when designing with audio becomes a necessity, not just an afterthought. Besides following the accessibility guidelines for the various platforms, we have to start thinking about how designing with audio can intentionally include these marginalised users.
In a recent Listserve email, a middle-aged blind lady describes how she used the VoiceOver feature on her phone to find out that the bus stop she was at was temporarily closed. She uses apps to help her with everyday life, whether it’s grocery shopping or reading the papers. Technology has drastically changed her life experience, and yet she still comes across websites, mobile apps and public spaces that are not accessible to her.
This brings us to an important question: how often do experience designers consider accessibility? We often differentiate novice users and expert users but what about elderly users who are slower to adapt to new technology or users with dyslexia who can’t read easily?
Perhaps designers should spend some days using only the accessibility features on their mobile devices so as to better understand the experiences of users who do not rely on sight as their primary sense. In this way, we might come up with more creative ideas for designing with audio to improve the user experience for all.
We do not yet have all the answers as the UX of sound is a growing field. But we’re convinced that designing with audio is an important consideration for experience designers, especially in the mobile era. If you have further insights you’d like to share on how to design with audio to add value for users, feel free to reach out to us @protoio.