MQA – It’s about time!

Last week, Meridian Audio announced a wholly new music format, Master Quality Authenticated, or MQA. This is not one of those ‘here is a new toy truck, go play with it!’ press launches; instead this is one of those concepts best digested and ruminated upon for a while.

Direct information on MQA is scant. It’s an encode-decode process that samples the original master, can be streamed or downloaded ‘in any format’, and is then decoded in the player, be that player a DAC, software, or an app. The results are claimed to deliver ‘in the studio’ performance squeezed into a roughly one megabit per second stream, a bit rate that would normally spell a potentially mediocre MP3 track. Listening at London’s Shard tower, both through a large Meridian active system and in a noise-polluted headphone setting, MQA seems, initially at least, to live up to that high-performance audio claim.
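To put that one-megabit-per-second figure in context, here is a quick back-of-envelope comparison against raw PCM bit rates. This is my own arithmetic rather than anything from Meridian, sketched as a small Python calculation; the MQA line simply takes the claimed figure at face value.

```python
# Back-of-envelope bit-rate comparison (my own arithmetic, not Meridian's figures):
# how a ~1 Mbit/s MQA stream compares with raw, uncompressed PCM.

def pcm_bitrate_kbps(sample_rate_hz, bit_depth, channels=2):
    """Raw (uncompressed) PCM bit rate in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

cd        = pcm_bitrate_kbps(44_100, 16)   # Red Book CD
hires_96  = pcm_bitrate_kbps(96_000, 24)   # 24-bit/96 kHz download
hires_192 = pcm_bitrate_kbps(192_000, 24)  # 24-bit/192 kHz download

print(f"CD quality:       {cd:7.1f} kbps")
print(f"24-bit / 96 kHz:  {hires_96:7.1f} kbps")
print(f"24-bit / 192 kHz: {hires_192:7.1f} kbps")
print( "MQA stream:       ~1000.0 kbps (claimed)")
```

Even taken loosely, the claim is that something approaching studio-master quality fits into less bandwidth than Red Book CD, which is why the streaming angle matters so much.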

MQA pioneer and Meridian Audio founder Bob Stuart suggested: “Music lovers need no longer be short-changed; finally we can all hear exactly what the musicians recorded. MQA gives a clear, accurate and authentic path from the recording studio all the way to any listening environment – at home, in the car or on the go. And we didn’t sacrifice convenience.” Convenience and authenticity seem to be watchwords for this format; Stuart pointed out that sound quality has frequently been inversely proportional to convenience (citing the success of compact cassette, and its comparatively poor sound quality, as a pre-digital example), while the goal of MQA is to bring a more authentic view of what happened in the studio to the home.

In discussing how MQA came about, Meridian supremo Bob Stuart points to nature and neuroscience. The science of psychoacoustics (how we model our hearing) was all but cemented in place when neuroimaging was in its infancy, and the models created reflect this. Psychoacoustics is perhaps oddly named, as it largely skips the ‘psycho-’ part, treating the brain as a passive receptor, rather than an active participant, in the listening process. This was a scientifically sound line of reasoning at the time, because there was no robust method of looking inside the wetware between the ears, so the model was built on a foundation of audiology tests and test tones to describe how the hearing mechanism works. Even technologies such as lossy compression systems were initially developed with no useful method of determining how the brain processes the signals it receives.

 

Over the last 25 years or so, there has been a significant change in the way scientists can view how the brain processes its sensory input, thanks to increasingly refined magnetic resonance scanners, and we can now better model how the brain interacts with the auditory mechanism and how it processes sound. The results are fascinating. Our brains, it seems, are wired more toward temporal and spatial cues than timbral ones. This may seem obvious on reflection: if you hear a twig crack at night in the woods, you are alerted first to where that ‘crack’ came from, and only after that do you fill in the blanks (‘sounded like a twig cracking’). While one follows the other almost instantaneously, one still follows the other. Neuroimaging tests show activity in different parts of the brain: transients trigger activity in Wernicke’s Area (on the boundary between the parietal and temporal lobes), while melody and timbre are associated with activity in the right frontal part of the brain, around the region known as Broca’s Area. The distance between the primary auditory cortex and Broca’s Area is far greater than that between the primary auditory cortex and Wernicke’s Area; the difference is not long enough for us to hear the attack of a sound and its timbral properties as separate events, but it is long enough for us to process that temporal and spatial information faster than the timbral, and, it seems, to weight it more heavily.

Once again, this appears to be nothing necessarily new; we’ve known for years the importance of an instrument recorded in its own space (this forms the basis of ‘the absolute sound’ ethos), and we’ve had many years of people from the UK pushing the idea of ‘Pace, Rhythm, and Timing’, but now we can see that these homespun ideas point to a potentially less sonically damaging way of transferring recorded music to domestic listeners.

Meridian’s bright idea here was to take this comparatively recent neuroimaging data and see how it applies to digital audio. In overly simplistic terms, when we sample a piece of music in PCM, we work to the frequency domain and bring the time domain along for the ride. Traditionally this has been no problem, because the response time for a human brain to process tones is slower than any potential inter-sample timing errors. However, what we were measuring was the time taken for Broca’s Area to process steady-state test tones; once we look at the speed at which temporal cues are processed in Wernicke’s Area, things are not so clear-cut, and our response to such cues approaches the timing errors inherent in current digital audio systems. MQA is designed as a system that takes a more balanced time/frequency approach, rather than the current frequency-dominant sampling method, thereby addressing the speed at which we process temporal cues.
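Meridian has not published MQA’s actual filters, so the following is only a rough, self-contained illustration of the tension described above: a conventional steep (‘brickwall’) anti-alias filter is exemplary in the frequency domain but spreads a single impulse across a surprisingly long window of time. The filter length and cutoff below are arbitrary example values of my own, chosen purely to make the point measurable.

```python
# Illustration only: how long a conventional steep (brickwall-style) anti-alias
# filter smears an impulse in time. Filter length and cutoff are arbitrary
# example values, not anything specified by MQA.
import numpy as np
from scipy.signal import firwin

fs = 44_100                              # sample rate in Hz
taps = firwin(1023, 20_000 / (fs / 2))   # linear-phase FIR lowpass, ~20 kHz cutoff

# Measure how long the impulse response stays above -60 dB relative to its peak.
mag = np.abs(taps) / np.abs(taps).max()
above = np.nonzero(mag > 10 ** (-60 / 20))[0]
smear_samples = int(above[-1] - above[0])
smear_ms = 1000 * smear_samples / fs

print(f"Impulse response spans ~{smear_samples} samples (~{smear_ms:.2f} ms) above -60 dB")
```

A gentler, shorter filter trades some of that frequency-domain steepness for a much tighter spread in time; balancing that trade appears to be the territory MQA is working in.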

Remember that we’re not in Kansas any more, Toto. This discussion of auditory neuroscience is very new territory for audio, and we’re not neuroscientists. A lot of this involves ‘huh… well, OK’ nodding along to ideas that sit far outside our normal comfort zones, and doubtless there will be genuine neuroscientists springing up to pass comment on the above. There will doubtless also be armchair neuroscientists with even less understanding of the topic than me (hey… at least I’ve got a book on the subject) willing to pour scorn. Somewhere between those two poles lies the interesting ground.

Neuroscience notwithstanding, the problem is that there is scant information about how MQA actually works at this time, and even the neuroimaging-based explanation above comes more from me hitting the books than from any secret information imparted by Meridian (auditory neuroscience being something of a personal interest). Stuart and algorithm king Peter Craven did recently give an AES paper entitled “A Hierarchical Approach to Archiving and Distribution” that seems to form the backbone of MQA (patents for the hardware decoder have also been filed), the abstract of which reads, “Our aim is an improved time/frequency balance in a high-performance chain whose errors, from the perspective of the human listener, are equivalent to no more than those introduced by sound traveling a short distance through air.”
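That ‘short distance through air’ phrasing invites a sense of scale. As a rough back-of-envelope conversion of my own (not a figure from the Stuart/Craven paper, and deliberately ignoring air’s frequency-dependent absorption), here is what various timing errors correspond to as distance travelled at the speed of sound:

```python
# Back-of-envelope scale for "a short distance through air": convert a timing
# error into the distance sound covers in that time. Purely illustrative; the
# paper's error measure is more subtle than simple delay.
SPEED_OF_SOUND_M_PER_S = 343.0    # roughly, in air at ~20 °C

def air_distance_mm(timing_error_us: float) -> float:
    """Distance (in millimetres) that sound travels in the given number of microseconds."""
    return SPEED_OF_SOUND_M_PER_S * timing_error_us * 1e-6 * 1000

for us in (1, 10, 100, 1000):
    print(f"{us:5d} µs of timing error ≈ {air_distance_mm(us):7.1f} mm of air")
```

On that reckoning, errors of a few microseconds amount to millimetres of air travel, less than the difference made by leaning slightly closer to a loudspeaker.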

 

The system, it seems, acts as an encapsulated signal along traditional PCM lines, and allows full backwards compatibility, in that an MQA signal can act as a conventional PCM recording if it meets a device that does not support MQA. It also allows MQA encoding of any recording, past, present, or future. The discussion at the event was one of ‘folding’ the sound to compress it rather than ‘masking’ it, but this is the kind of paradigm shift in audio encoding and decoding that doesn’t explain itself in a single PowerPoint slide. And therein lies the potential problem with MQA; it’s the kind of thing that is easy to understand when heard, but abstract in the extreme to describe. And if it’s difficult for the technologists to describe without resorting to “it’s all a big ball of wibbly wobbly, timey wimey stuff”, how is this going to be pithily described to the mass market? The reaction seems to vary between those who’ve heard it (who, like me, think it’s possibly the future of audio) and those who haven’t (who dismiss it as insubstantial).
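Meridian hasn’t disclosed the detail of that ‘folding’, so what follows is strictly a toy analogy of the backwards-compatibility idea rather than the real MQA algorithm: extra information is tucked into the least significant bits of an otherwise ordinary 24-bit PCM stream, where a legacy player simply plays it as PCM (the buried data looking like a slightly raised noise floor) and an aware decoder pulls it back out. Every name and parameter here is my own illustrative choice.

```python
# Toy analogy only, not the real MQA algorithm: hide an "extension" byte stream
# in the bottom 8 bits of 24-bit PCM samples. A legacy player plays the samples
# as ordinary PCM; a decoder that knows the scheme recovers the hidden data.
import numpy as np

def fold(pcm24: np.ndarray, extra_bytes: bytes) -> np.ndarray:
    """Replace the low 8 bits of each 24-bit sample with one byte of extra data."""
    extra = np.frombuffer(extra_bytes, dtype=np.uint8)
    assert len(extra) <= len(pcm24), "not enough samples to carry the extra data"
    folded = pcm24.copy()
    folded[:len(extra)] = (folded[:len(extra)] & ~np.int32(0xFF)) | extra
    return folded

def unfold(folded: np.ndarray, n_bytes: int) -> bytes:
    """Decoder side: read the hidden bytes back out of the low 8 bits."""
    return bytes((folded[:n_bytes] & 0xFF).astype(np.uint8))

# Example: four 24-bit samples (stored in int32) carrying a two-byte payload.
pcm = np.array([0x123456, 0x0FEDCB, 0x7FFF00, 0x001122], dtype=np.int32)
payload = b"hi"
stream = fold(pcm, payload)

print(unfold(stream, len(payload)))            # b'hi'  - the aware decoder recovers the payload
print(np.array_equal(stream >> 8, pcm >> 8))   # True  - the top 16 bits are untouched for legacy playback
```

The real scheme is clearly far more sophisticated, and presumably hides musically useful information rather than arbitrary bytes, but the toy version shows why a non-MQA device can still play the stream as PCM while an MQA decoder gets more out of it.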

In a way, this might not matter. The music companies, it seems, are getting behind MQA (they like the idea of it, because the times when those inconvenient, yet good-sounding, formats like open-reel and LP were at their acme coincided with the times when album sales were at their peak), and the hardware and software companies are getting behind the concept, too. And if their combined might can sell this as ‘better’, even if the reason why it sounds better is complicated enough to put a Nobel prize-winning physicist in a coma, maybe it will win through.

We’ll know soon enough. MQA-compatible sounds and devices will start to roll out in early 2015. The cynical might dismiss this as a way for the record companies to sell our music back to us yet again, while getting us to buy yet more hardware, but I’m not sure that’s the motivation this time. MQA can be used for a streaming service, will work with a software upgrade to a music app on a smartphone, and sounds very good through headphones. I think this is a way for companies to try to wrestle some of their business back from the fruit-shaped elephant in the room. And, although this comes from a company with the most audiophile of audiophile credentials, MQA is not necessarily geared for us. This is a format designed to make audio streams, squirted to a smartphone in as small a package as possible, sound more like the sound in the studio, and to do so on a level that can be easily heard through giveaway earbuds on a commuter train; that it has a knock-on effect for CD-quality and high-resolution audio is icing on the cake. But if it works, we all stand to benefit!
