Let’s say you have quite a firm opinion about how you want your music to sound. You want it to sound clean and tonally well balanced with just enough bass—not too much. You want the center image to be right there, dead in the center no matter how you move or where you sit, and you want the sound stage to stay symmetrically within 30 degrees to each side of the center image.
Over the last few decades, a lot of research has been devoted to figuring out how to record a sound event and exactly reproduce the original sound field in a different location. So far, this has not been achievable in a consumer environment with a reasonable number of loudspeakers. However, we have recently seen a lot of innovation in spatial sound reproduction, with applications ranging from TV and cinema to games and VR.
In today’s global business environment, the ability to work remotely offers a huge work-life improvement and ultimately saves on travel and environmental costs. One thing still holding us back, however, is the lack of proper teleconferencing systems. Despite the various solutions available, attempting to work remotely and conduct conference calls with 5-10 people located in various places across the globe is a hugely disappointing and unproductive experience.
It wasn’t too long ago that portable audio meant muddy playback on a Walkman, and portable movies were, of course, completely unthinkable. Yet today we consume portable audio in all kinds of environments and for a range of different purposes—for music, games, audiobooks, GPS systems, audio assistants like Siri… the list goes on. And it's all delivered in the convenient package of a smartphone. For the longest time portable audio also necessitated the use of headphones, and this is still very much the case. The tiny speakers in smartphones, while significantly better than ten years ago, can only do so much. Sure, advanced signal processing has enabled improvements in output level and sound quality over the past few years, but watching a movie on a smartphone without headphones remains a less than immersive audio experience. What’s the problem?
The Mobile World Congress (MWC) takes place in Barcelona every year. With over 100,000 exhibitors and visitors, it's by far the most important event in the industry. It also offers a key to understanding the trends of the future—not only in the mobile industry but in relation to how we live our lives. Mobile goes beyond mobile phones: everything you can think of is getting connected, including some things you might not even imagine.
Background noise is an unavoidable nuisance. Whether it's produced by roaring excavators or hurried crowds at a train station, and whether it’s obscuring a voice message from your boss or a listening session with a Tchaikovsky waltz, it's always annoying. Given urbanization trends, our environments will likely only get noisier—with more people living closer together, and the proliferation of mobile sound systems providing a whole new palette of noises and disturbances which were not present in the listening scenarios of yesteryear.
Our flagship automotive technology Dirac Unison uses our most advanced signal processing methods, enabling speakers within a system to work together to optimally reproduce each input channel–something which was previously impossible in digital room correction. While it’s a product we’ve always been proud of, up until January this year, the prototype tuning tool that came with it was not. It definitely worked, and offered all the required settings. But it was hard to maintain, and far from user-friendly.
It’s hard not to write something after returning from an event like CES, which leaves you with so many inputs and impressions that it nearly blows your mind. The entire place is so packed with innovations and ideas, it's as if the whole world waited all year just to reveal what they've been busy hatching in the seclusion of their basements.
As I write this, I'm listening to a recording of Joss Stone. Her voice sounds completely natural, hovering in the air just a few meters in front of me, placed distinctly at the center of my sound system, remaining there regardless of how I move my head. I can almost touch the ambience of the recording. The low frequency extension is great, and the room modes are extremely well controlled. The listening room is remarkably well treated, with just the right amount of air and sense of space, and without the annoyance of comb filters or spectral coloration. It’s treated so well, I don’t need digital room correction. This is an experience you can’t get without a HiFi and room treatment budget of at least $100,000 USD. The funny thing is this: I’m getting this experience with a pair of headphones. And the sound system I’m referring to? It’s a virtual one.
Have you ever wondered why music sounds so different on headphones compared to loudspeakers? It’s because, by design, headphones are not technically compatible with the stereophonic system. That isn’t to say you can’t still get great sound from headphones. Otherwise we wouldn’t be seeing the boom in headphone sales that we've been seeing the past few years (although it’s worth pointing out that some retail stores keep mirrors next to the headphone displays for customers who care more about looks than sound). In this post, I’ll be examining why music sounds different on headphones, and look into a technology that can upgrade headphone sound quality by several notches.
Ever since humankind started creating music, the means, or equipment, for doing so have often been prohibitively bulky. True, there exist many small musical instruments, but the ones that can produce sound of sufficient strength and volume are typically really, really large, and an ensemble playing multiple instruments at once requires a lot of space indeed.
This year I returned to home audio after over a decade working in other industries, and I was shocked to find that very little had changed since I’d been away. Generally speaking, progress has been relatively slow. In the '90s there was almost no home-install industry, and people tried their best to create a good listening environment, sometimes with good results, sometimes not. But last month I discovered a very different picture of progress. I attended the CEDIA expo for home technology in Dallas, and it was undeniable that technology is finally catching up with people's imaginations. Here are a few examples...
When it comes to the reproduction of sound, the quest for “perfection” can seem like a pointless task when perfection itself is so difficult to define—even, and especially, for a musician. But the truth is our brains can tell the difference, and the closer a sound system comes to recreating a performance, the more relaxed and detail-rich our listening experience becomes.
As the listening habits of consumers shift towards ever smaller and more compact playback devices, the sole application of linear and time-invariant processing methods (such as sophisticated equalization technologies) is not necessarily sufficient to meet the desired and often conflicting requirements of audibility, low distortion, tonal balance, bass response, loudness, and so on. DSP system design for small loudspeakers is inherently a compromise, but by using cutting-edge digital technologies we can achieve results which invariably take any micro-speaker to the next level of performance.
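As a concrete instance of the kind of linear, time-invariant equalization mentioned above, here is a standard peaking biquad (coefficients from the widely used RBJ “Audio EQ Cookbook”; the 100 Hz / +6 dB example values are purely illustrative, not from this post):

```python
import cmath
import math

def peaking_eq(f0, fs, gain_db, q):
    """RBJ 'Audio EQ Cookbook' peaking-filter coefficients,
    a textbook form of LTI equalization."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    # Normalize so a[0] == 1.
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_at(f, fs, b, a):
    # Evaluate the biquad's magnitude response |H(e^{jw})| at f, in dB.
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

# A +6 dB bass boost at 100 Hz: the filter delivers the boost exactly,
# but it also demands more cone excursion from a tiny driver at all
# playback levels -- the fixed trade-off an LTI filter cannot escape.
b, a = peaking_eq(100, 48000, 6.0, 0.7)
print(round(gain_at(100, 48000, b, a), 1))  # 6.0
```

The point of the sketch is that an LTI filter applies the same boost regardless of signal level, which is exactly why it alone cannot reconcile the conflicting loudness and distortion requirements of a micro-speaker.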
Dirac Research has been heavily involved in the automotive industry since the early days. As a research and software company, our mission is to deliver outstanding tuning algorithms and tools that are second to none. In order to live up to this goal, it’s of utmost importance that we reflect on and understand the role software plays in the tuning process: What are the expectations for a tuning tool? And who will use it?
When it comes to digital signal processing, there’s one puzzle that remains even once processing is complete. How do you fit the processed signal back inside the permissible limits of the digital number format? This post describes the “why” and the “how” of two different approaches you can take to get around this obstacle and finish the operation.
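The post doesn't name its two approaches here, but two common candidates for fitting an over-range signal back into the format's limits are hard clipping and rescaling the whole signal; a minimal sketch (NumPy for illustration, function names my own):

```python
import numpy as np

def hard_clip(x, limit=1.0):
    # Approach 1: clamp every sample that exceeds the representable
    # range. Fast and local, but it distorts the clipped waveform.
    return np.clip(x, -limit, limit)

def normalize(x, limit=1.0):
    # Approach 2: rescale the whole signal so its peak just fits.
    # Waveform-preserving, but it lowers the overall level.
    peak = np.max(np.abs(x))
    return x if peak <= limit else x * (limit / peak)

x = np.array([0.5, 1.4, -1.2, 0.9])
print(hard_clip(x))   # [ 0.5  1.  -1.   0.9]
print(normalize(x))   # every sample scaled by 1/1.4
```

Real products often use smoother soft limiters somewhere between these two extremes, trading a little of each drawback.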
“If the only tool you have is a hammer, you tend to see every problem as a nail.” This well-known quote is attributed to psychologist Abraham Maslow, who observed that the accessibility of a given tool tends to influence the type of approach humans take when solving a problem. As engineers and researchers, we are not spared from the phenomenon of Maslow’s hammer, although many of us might like to think otherwise...
Is there any real physical sound experience that can be exactly replicated through a stereo system? Probably not. Why? Because the sound engineer who’s making the recording is limited to coding a complex three-dimensional sound field using only two channels, which are then played back from two distinct locations in a probably less than perfect listening room.
The two weakest components of a HiFi system are typically the loudspeaker and the room the music is playing in—the second of which is most often overlooked. Even if you’ve invested in a best-in-class HiFi system, the listening room can still have a tremendous effect on the overall sound experience. Both a sound system’s frequency response* and impulse response** are profoundly altered by everything from standing wave patterns to wall reflections.
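As a toy illustration of how even a single wall reflection reshapes both responses (the 48 kHz rate, 5 ms delay, and half-amplitude reflection are assumed purely for illustration, not taken from the post):

```python
import numpy as np

fs = 48000  # assumed sample rate

# Toy impulse response: the direct sound, plus one wall reflection
# arriving 5 ms later at half amplitude (a deliberately crude room model).
h = np.zeros(1024)
h[0] = 1.0
h[int(0.005 * fs)] = 0.5

# The frequency response is the Fourier transform of the impulse
# response; the reflection carves periodic comb-filter ripple into it.
mag_db = 20 * np.log10(np.abs(np.fft.rfft(h)) + 1e-12)
print(round(mag_db.max() - mag_db.min(), 1))  # ripple depth: 9.5 dB
```

Even this one reflection swings the response over a 9.5 dB range (peaks of 1 + 0.5, notches of 1 − 0.5), repeating every 200 Hz; a real room adds hundreds of such reflections plus standing waves.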
When you’re listening to music and something feels off, it can usually be attributed to at least one of two factors. Either something is out of key—for instance, an instrument isn’t tuned properly or a singer can’t sing. Or someone is missing a beat. If each musician in an orchestra were to play at their own tempo, it would sound different than intended, and likely pretty bad. The first of these factors is a question of frequency for a single sinusoid (does each note sound like it should?). The second is a property of time (does each note arrive when it should?).