The 2018 Mobile World Congress has come and gone. Dirac was in attendance once again this year, and we left the event feeling very optimistic. Not thanks to the weather—which turned out to be the worst since we started attending the event back in 2006—but thanks to all the great people we had the pleasure of meeting, along with the equally great meetings we held with partners, journalists, and potential customers.
It's been more than two weeks since CES in Las Vegas, and one thing remains on my mind: cars.
So, you’re going to buy a new smartphone. What will you go for? A year ago, the must-have features on your list might have included: thin, lightweight, good camera, powerful chip, decent storage and RAM, and great sound. Today, you would probably add a dual rear camera and a large screen-to-body ratio. Yes, dual rear cameras and large screen-to-body ratios are among the hot trends—along with face recognition, augmented reality, and fingerprint sensors—which, according to experts, will sweep over the smartphone industry in 2018.
Designing a sound system for a car has a different success formula than designing a sound system for a living room. In a car, neither the loudspeakers nor the people listening to them can be placed precisely according to the stereo standard. Consequently, when listening to a recording, the stereo information is likely to be lost or, at least, severely distorted.
Here we will discuss the motivation and basic principles behind a Dirac technology called Dirac Virtual Center, which was developed to solve one of the classic problems in automotive sound system tuning: the near-side bias problem.
Let’s say you have quite a firm opinion about how you want your music to sound. You want it to sound clean and tonally well balanced with just enough bass—not too much. You want the center image to be right there, dead in the center no matter how you move or where you sit, and you want the sound stage to stay symmetrically within 30 degrees to each side of the center image.
Over the past few decades, a lot of research has been devoted to figuring out how to record a sound event and exactly reproduce the original sound field in a different location. So far, this has not been achievable for sound reproduction in a consumer environment with a reasonable number of loudspeakers. However, we have recently seen a lot of innovation in spatial sound reproduction, with applications ranging from TV and cinema to games and VR.
In today’s global business environment, the ability to work remotely offers a huge work-life improvement and ultimately saves on travel and environmental costs. One thing still holding us back, however, is the lack of proper teleconferencing systems. Despite the various solutions available, attempting to work remotely and conduct conference calls with 5–10 people located in various places across the globe is a hugely disappointing and unproductive experience.
It wasn’t too long ago that portable audio meant blurry playback on a Walkman, and portable movies were, of course, completely unthinkable. Yet today we consume portable audio in all kinds of environments and for a range of different purposes—for music, games, audiobooks, GPS systems, audio assistants like Siri… the list goes on. And it's all delivered in the convenient package of a smartphone. For the longest time, portable audio also necessitated the use of headphones, and this is still very much the case. The tiny speakers in smartphones, while significantly better than ten years ago, can only do so much. Sure, advanced signal processing has enabled improvements in output level and sound quality over the past few years, but watching a movie on a smartphone without headphones remains a less than massive audio experience. What’s the problem?
The Mobile World Congress (MWC) takes place in Barcelona every year. With over 100,000 exhibitors and visitors, it's by far the most important event in the industry. It also offers a key to understanding the trends of the future—not only in the mobile industry but in relation to how we live our lives. Mobile goes beyond mobile phones and everything you can think of is getting connected. Including some things you might not even imagine.
Background noise is an unavoidable nuisance. Whether it's produced by roaring excavators or hurried crowds at a train station, and whether it’s obscuring a voice message from your boss or a listening session with a Tchaikovsky waltz, it's always annoying. Given urbanization trends, our environments will likely only get noisier—with more people living closer together, and the proliferation of mobile sound systems providing a whole new palette of noises and disturbances which were not present in the listening scenarios of yesteryear.
Our flagship automotive technology Dirac Unison uses our most advanced signal processing methods, enabling the speakers within a system to work together to optimally reproduce each input channel—something that was previously impossible in digital room correction. While it’s a product we’ve always been proud of, the prototype tuning tool that came with it—up until January this year—was not. It definitely worked, and offered all the required settings. But it was hard to maintain, and far from user-friendly.
It’s hard not to write something after returning from an event like CES, which leaves you with so many inputs and impressions that it nearly blows your mind. The entire place is so packed with innovations and ideas, it's as if the entire world has waited all year just to reveal what they've been busy hatching in the seclusion of their basements.
As I write this, I'm listening to a recording of Joss Stone. Her voice sounds completely natural, hovering in the air just a few meters in front of me, placed distinctly at the center of my sound system, remaining there regardless of how I move my head. I can almost touch the ambience of the recording. The low frequency extension is great, the room modes are extremely well controlled. The listening room is remarkably well treated, with just the right amount of air and sense of space, and without the annoyance of comb filters or spectral coloration. It’s treated so well that I don’t need digital room correction. This is an experience you can’t get without a HiFi and room treatment budget of at least $100,000 USD. The funny thing is this: I’m getting this experience with a pair of headphones. And the sound system I’m referring to? It’s a virtual one.
Have you ever wondered why music sounds so different on headphones compared to loudspeakers? It’s because, by design, headphones are not technically compatible with the stereophonic system. That isn’t to say you can’t still get great sound from headphones. Otherwise we wouldn’t be seeing the boom in headphone sales of the past few years (although it’s worth pointing out that some retail stores keep mirrors next to the headphone displays for customers who care more about looks than sound). In this post, I’ll examine why music sounds different on headphones and look into a technology that can upgrade headphone sound quality by several notches.
When it comes to the reproduction of sound, the quest for “perfection” can seem like a pointless task when perfection itself is so difficult to define—even, and especially, for a musician. But the truth is our brains can tell the difference: the closer a sound system comes to recreating a performance, the more relaxed and detail-rich our listening experience becomes.
As the listening habits of consumers shift towards ever smaller and more compact playback devices, linear and time-invariant processing methods alone (such as sophisticated equalization technologies) are not necessarily sufficient to meet the desired—and often conflicting—requirements of audibility, low distortion, tonal balance, bass response, loudness, and so on. DSP system design for small loudspeakers is inherently a compromise, but cutting-edge digital technologies can take any micro-speaker to the next level of performance.
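As a concrete illustration of the linear, time-invariant equalization mentioned above, here is a minimal peaking-EQ biquad sketch in Python. The coefficient formulas follow the widely used RBJ Audio EQ Cookbook; the function names and parameters are purely illustrative and not part of any Dirac product.

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking EQ (RBJ Audio EQ Cookbook).
    A linear, time-invariant filter that boosts or cuts a band around f0."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha / A
    # Coefficients normalized so that a[0] == 1
    b = [(1 + alpha * A) / a0, -2 * math.cos(w0) / a0, (1 - alpha * A) / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha / A) / a0]
    return b, a

def biquad(x, b, a):
    """Direct-form I difference equation:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = b[0]*xn + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y
```

A peaking EQ leaves DC and Nyquist untouched (so `sum(b) == sum(a)`) and applies the full boost or cut only around the center frequency `f0`—one small building block of the kind of equalization a full tuning system chains together.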
Dirac Research has been heavily involved in the automotive industry since the early days. As a research and software company, our mission is to deliver outstanding tuning algorithms and tools that are second to none. In order to live up to this goal, it’s of the utmost importance that we reflect on and understand the role software plays in the tuning process: What are the expectations for a tuning tool? And who will use it?
When it comes to digital signal processing, there’s one puzzle that remains even once processing is complete. How do you fit the processed signal back inside the permissible limits of the digital number format? This post describes the “why” and the “how” of two different approaches you can take to get around this obstacle and finish the operation.
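To make the two approaches concrete, here is a toy Python sketch, assuming floating-point samples with a permissible range of [-1, 1] (the function names are illustrative, not from any specific library): hard clipping saturates the offending peaks in place, while normalization rescales the entire signal so its largest peak just fits.

```python
import numpy as np

def hard_clip(x, limit=1.0):
    """Saturate any samples that exceed the permissible range.
    Cheap and works sample-by-sample, but distorts the peaks."""
    return np.clip(x, -limit, limit)

def normalize(x, limit=1.0):
    """Scale the whole signal so its peak fits the range.
    Distortion-free, but lowers the overall level and needs the
    full signal (or a known peak value) in advance."""
    peak = np.max(np.abs(x))
    return x if peak <= limit else x * (limit / peak)

# A processed signal whose peaks exceed the [-1, 1] range
x = np.array([0.5, 1.6, -2.0, 0.8])
print(hard_clip(x))   # [ 0.5   1.   -1.    0.8 ]
print(normalize(x))   # [ 0.25  0.8  -1.    0.4 ]
```

The trade-off is visible in the output: clipping preserves the level of the in-range samples but flattens the peaks, while normalization preserves the waveform shape at the cost of a quieter signal overall.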
“If the only tool you have is a hammer, you tend to see every problem as a nail.” This well-known quote is attributed to psychologist Abraham Maslow, who observed that the accessibility of a given tool tends to influence the type of approach humans take when solving a problem. As engineers and researchers, we are not spared from the phenomenon of Maslow’s hammer, although many of us might like to think otherwise...
Is there any real physical sound experience that can be exactly replicated through a stereo system? Probably not. Why? Because the sound engineer who’s making the recording is limited to encoding a complex three-dimensional sound field using only two channels, which are then played back from two distinct locations in what is probably a less-than-perfect listening room.