Background noise is an unavoidable nuisance. Whether it’s produced by roaring excavators or hurried crowds at a train station, and whether it’s obscuring a voice message from your boss or a listening session with a Tchaikovsky waltz, it’s always annoying. Given urbanization trends, our environments will likely only get noisier: more people living closer together, and a proliferation of mobile sound systems supplying a whole new palette of noises and disturbances that simply didn’t exist in the listening scenarios of yesteryear.
Our flagship automotive technology, Dirac Unison, uses our most advanced signal processing methods, enabling the speakers in a system to work together to optimally reproduce each input channel, something that was previously impossible in digital room correction. While it’s a product we’ve always been proud of, until January this year the prototype tuning tool that came with it was not. It certainly worked, and offered all the required settings. But it was hard to maintain, and far from user-friendly.
It’s hard not to write something after returning from an event like CES, which leaves you with so many inputs and impressions that it nearly blows your mind. The place is so packed with innovations and ideas, it’s as if everyone has waited all year just to reveal what they’ve been hatching in the seclusion of their basements.
As I write this, I’m listening to a recording of Joss Stone. Her voice sounds completely natural, hovering in the air just a few meters in front of me, placed distinctly at the center of my sound system, and remaining there regardless of how I move my head. I can almost touch the ambience of the recording. The low-frequency extension is great, and the room modes are extremely well controlled. The listening room is remarkably well treated, with just the right amount of air and sense of space, and without the annoyance of comb filters or spectral coloration. It’s treated so well, in fact, that I don’t need digital room correction. This is an experience you can’t get without a HiFi and room-treatment budget of at least $100,000. The funny thing is this: I’m getting this experience with a pair of headphones. And the sound system I’m referring to? It’s a virtual one.
Have you ever wondered why music sounds so different on headphones compared to loudspeakers? It’s because, by design, headphones are not technically compatible with the stereophonic system. That isn’t to say you can’t still get great sound from headphones; otherwise we wouldn’t be seeing the boom in headphone sales of the past few years (although it’s worth pointing out that some retail stores keep mirrors next to the headphone displays for customers who care more about looks than sound). In this post, I’ll examine why music sounds different on headphones and look into a technology that can upgrade headphone sound quality by several notches.
Ever since humankind started creating music, the means, or equipment, for doing so have often been prohibitively bulky. True, there exist many small musical instruments, but the ones that can produce sound of sufficient strength and volume are typically really, really large; and an ensemble playing multiple instruments at once requires a lot of space indeed.
This year I returned to home audio after more than a decade working in other industries, and I was shocked to find that very little had changed since I’d been away. Generally speaking, progress has been relatively slow. In the ’90s there was almost no home-install market, and people tried their best to create a good listening environment, sometimes with good results, sometimes not. But last month I discovered a very different picture of progress. I attended the CEDIA expo for home technology in Dallas, and it was undeniable that technology is finally catching up with people’s imaginations. Here are a few examples...
When it comes to the reproduction of sound, the quest for “perfection” can seem pointless when perfection itself is so difficult to define, even, and especially, for a musician. But the truth is that our brains can tell the difference, and the closer a sound system comes to recreating a performance, the more relaxed and detail-rich our listening experience becomes.
As consumer listening habits shift toward ever smaller and more compact playback devices, linear, time-invariant processing alone (such as sophisticated equalization) is no longer sufficient to meet the desired, and often conflicting, requirements of audibility, low distortion, tonal balance, bass response, loudness, and so on. DSP system design for small loudspeakers is inherently a compromise, but cutting-edge digital technologies can push a micro-speaker well beyond what linear processing alone can deliver.
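The specific techniques behind this claim aren’t spelled out here, but a minimal sketch can show why going beyond linear, time-invariant processing matters. An LTI system (any equalizer) can only reweight frequencies already present in its input, while a nonlinear stage, illustrated below with an assumed tanh waveshaper of the kind used in soft limiting and psychoacoustic bass enhancement, creates new harmonics a tiny driver could not otherwise convey. All numbers are illustrative, not taken from any real product.

```python
import math

N = 1024
f0 = 8  # tone at DFT bin 8: a low "bass" note
x = [math.sin(2 * math.pi * f0 * n / N) for n in range(N)]

# Memoryless nonlinearity (waveshaper): tanh soft saturation.
y = [math.tanh(3.0 * s) for s in x]

def bin_mag(sig, k):
    """Magnitude of DFT bin k, normalized by N."""
    re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(sig))
    return math.hypot(re, im) / N

# An LTI system can never create energy at a frequency absent from its
# input; the waveshaper does, at odd harmonics of the tone:
print(bin_mag(x, 3 * f0))  # essentially zero for the pure tone
print(bin_mag(y, 3 * f0))  # clearly non-zero: a newly created third harmonic
```

The point is not that saturation sounds good in itself, but that nonlinear, signal-dependent stages give the designer degrees of freedom that no equalizer, however sophisticated, can reach.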
Dirac Research has been heavily involved in the automotive industry since the early days. As a research and software company, our mission is to deliver tuning algorithms and tools that are second to none. To live up to this goal, it’s of the utmost importance that we reflect on and understand the role software plays in the tuning process: What are the expectations for a tuning tool? And who will use it?
When it comes to digital signal processing, there’s one puzzle that remains even once processing is complete. How do you fit the processed signal back inside the permissible limits of the digital number format? This post describes the “why” and the “how” of two different approaches you can take to get around this obstacle and finish the operation.
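The excerpt doesn’t name the two approaches, but a common pair, given here purely as an assumed illustration, is saturation (hard clipping of the overshooting samples) versus rescaling (normalizing the whole signal so the peaks fit). A minimal Python sketch:

```python
def saturate(x, limit=1.0):
    """Hard-clip every sample that falls outside [-limit, limit].
    Preserves the overall level, but distorts the clipped peaks."""
    return [max(-limit, min(limit, s)) for s in x]

def rescale(x, limit=1.0):
    """Scale the whole signal so its largest peak just fits.
    Distortion-free, but lowers the playback level of everything."""
    peak = max(abs(s) for s in x)
    return list(x) if peak <= limit else [s * limit / peak for s in x]

# A processed block of samples whose peaks overshoot the [-1, 1] range:
x = [0.5, 1.6, -2.0, 0.25]
print(saturate(x))  # [0.5, 1.0, -1.0, 0.25]  -- peaks flattened
print(rescale(x))   # [0.25, 0.8, -1.0, 0.125] -- everything halved
```

The trade-off is the interesting part: saturation keeps loudness at the cost of distortion on peaks, while rescaling is distortion-free but quieter overall, which is why practical systems often blend the two, for example with a soft-knee limiter.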
“If the only tool you have is a hammer, you tend to see every problem as a nail.” This well-known quote is attributed to psychologist Abraham Maslow, who observed that the accessibility of a given tool tends to influence the approach humans take when solving a problem. As engineers and researchers, we are not spared from the phenomenon of Maslow’s hammer, although many of us might like to think otherwise...
Is there any real physical sound experience that can be exactly replicated through a stereo system? Probably not. Why? Because the sound engineer making the recording is limited to encoding a complex three-dimensional sound field into only two channels, which are then played back from two distinct locations in a probably less-than-perfect listening room.
The two weakest components of a HiFi system are typically the loudspeaker and the room the music is playing in, and the second of these is most often overlooked. Even if you’ve invested in a best-in-class HiFi system, the listening room can still have a tremendous effect on the overall sound experience. Both a sound system’s frequency response and impulse response are profoundly altered by everything from standing-wave patterns to wall reflections.
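As a small worked illustration of how a single wall reflection alters the frequency response (the path lengths are assumed for the example, not measured from any real room): a reflection that travels a bit farther than the direct sound interferes with it, and destructive interference occurs wherever the path difference equals an odd number of half wavelengths.

```python
# Notch frequencies of the comb filter caused by one reflection:
#     f_notch = (2k + 1) * c / (2 * extra_path)
# where c is the speed of sound and extra_path is how much farther
# the reflected sound travels. Illustrative numbers only.
speed_of_sound = 343.0   # m/s, at room temperature
extra_path = 1.0         # assume the reflection path is 1 m longer
notches = [(2 * k + 1) * speed_of_sound / (2 * extra_path) for k in range(4)]
print(notches)  # [171.5, 514.5, 857.5, 1200.5] Hz
```

Even this toy case puts audible dips right in the midrange, which is why room treatment and room correction matter so much in practice.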
When you’re listening to music and something feels off, it can usually be attributed to at least one of two factors. Either something is out of key, for instance an instrument isn’t tuned properly or a singer can’t sing; or someone is missing a beat. If each musician in an orchestra played at their own tempo, it would sound different from what was intended, and likely pretty bad. The first of these factors is a question of frequency for a single sinusoid (does each note sound like it should?). The second is a property of time (does each note arrive when it should?).
As a former HiFi and car-stereo dealer, I know people spend a lot of money trying to get the perfect sound. To be honest, I hadn’t looked into the business much since the ’90s, but after a few months back in the segment, and a few exhibitions later, I see that little has changed since then. People are still spending just as much money on cables, connectors, racks, and turntable weights as ever.