PSB’s Paul Barton Treats Playback/Hi-Fi+ to Blind Listening Sessions at Canada’s NRC Acoustics Lab—Part 2

In the first part of this blog, we talked about PSB’s Paul Barton taking a hand-picked group of audio journalists on a guided tour of the anechoic test chamber and control room at the NRC—a facility where many, many great loudspeaker designs have been put through their paces. Click here to read Part 1 of this blog. For the latter part of our visit, Paul had something really special planned: an official NRC-style blind listening test.

The NRC maintains IEC-certified listening facilities whose dimensions and reverberant characteristics are a matter of public record, partly so that other listeners and designers can, if they wish, construct similar listening rooms of their own. The room is rectangular, with no dimension an exact multiple of another (which would cause resonance problems). It has a carpeted floor, storage cabinets/bookshelves down one side, simple acoustic treatment panels down the other, and a specific area at the front of the room where loudspeakers under test are placed. The wall behind the test speakers features damping materials, and the left and right corners of that wall are covered in absorptive curtains, which extend several feet out along the sidewalls of the room (all of these features are part of the IEC documentation for the room).
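The "no dimension a multiple of another" rule exists because coincident room modes reinforce one another. A quick back-of-envelope sketch (in Python, using hypothetical room dimensions, not the NRC room's actual documented figures) shows how axial standing-wave frequencies line up when one dimension is an exact multiple of another:

```python
# Illustrative only -- the dimensions below are invented, not the
# NRC listening room's documented values.
C = 343.0  # speed of sound in air, m/s

def axial_modes(length_m, count=3):
    """First few axial standing-wave frequencies (Hz): f = n * C / (2 * L)."""
    return [round(n * C / (2 * length_m), 1) for n in range(1, count + 1)]

good = {dim: axial_modes(dim) for dim in (6.7, 4.1, 2.8)}  # no dim a multiple of another
bad = {dim: axial_modes(dim) for dim in (6.0, 3.0, 3.0)}   # 6.0 m is 2 x 3.0 m

# In the "bad" room, the first mode of the 3.0 m dimension (~57.2 Hz)
# coincides with the second mode of the 6.0 m dimension, so that one
# frequency is doubly reinforced.
```

In the non-multiple room, the mode frequencies of the three dimensions interleave rather than stack, spreading the low-frequency energy more evenly.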

In front of the loudspeaker test area is an acoustically transparent but optically opaque screen, which contains a removable center section to give researchers access to the speaker test area—making it easier to swap test speakers in and out. Behind the center section of the screen is an illuminated display that shows the (randomly chosen) ID numbers for the speaker systems under test. Under NRC practice, no more than four speaker systems are typically under test in any one listening session, so as the tests progress the listener simply sees a glowing “1”, “2”, “3”, or “4” to indicate which speaker is playing at any given moment.

About Blind Listening Tests

The night before the listening sessions, Paul Barton announced that he had some “homework” for us—in the form of detailed instructions for listener/participants in the blind listening tests. Listeners were told they would be listening to several loudspeaker systems with various program materials, and would be rating the speakers in several performance areas, as noted below.

Clarity/Definition (the scale runs from “Very unclear, poorly defined” to “Very clear, well defined”).
Softness (the scale runs from “Hard, shrill, very sharp” to “Very soft, mild, subdued”).
Fullness (the scale runs from “Very thin” to “Very full”).
Brightness (the scale runs from “Dark, very dull” to “Very bright”).
Spaciousness, Openness (the scale runs from “Dry, very closed” to “Very open, spacious, airy”).
Nearness/Presence (the scale runs from “Very distant” to “Very near”).
Hiss, Noise, Distortions (the scale runs from “Very little” to “Very much”).
Loudness (the scale runs from “Very soft” to “Very loud”).
Pleasantness (the scale runs from “Very unpleasant” to “Very pleasant”).
Fidelity (the scale runs from “0”, roughly meaning “the worst speaker imaginable”, to “10”, roughly meaning “no further improvement can be imagined”; for obvious reasons, most listeners rate speakers somewhere from “1”, meaning “Bad”, on up to “9”, meaning “Excellent”).

Finally, listeners were given a blank frequency response chart on which they could attempt to sketch perceived response curves, if they wished.

Conceptually, I stumbled on three of the NRC rating parameters: “Hiss, Noise, Distortions;” “Loudness;” and “Pleasantness.” Here’s why. To me, “Hiss, Noise, and Distortions” are as much, if not more, a function of source components as they are of speakers (for example, a speaker simply can’t produce hiss unless it is fed hiss). “Loudness”, in turn, is more a function of available amplifier power and the manner in which that power is applied; obviously one loudspeaker can be more sensitive than another, but it can’t and doesn’t “turn itself up in order to play louder.” “Pleasantness,” finally, is something I think of more as a property of music than of the speaker; for example, if you feed an accurate, high fidelity speaker obnoxious and abrasive-sounding material, it’s going to sound, well, obnoxious and abrasive (as well it should under the circumstances). Ideally, a speaker wouldn’t have a pleasant or unpleasant style of its own; it would simply reproduce what’s on the recording, whether for good or ill. But despite these conceptual misgivings, the good news is that, as the instructions explained, the main rating—the one that trumps all others—is the “Fidelity” rating (a concept that both objective measurement folks and observational listeners should be able to support).

The Actual Listening Sessions

Just before the sessions began, Paul Barton explained that we would be hearing four speaker systems whose volume levels had been matched (at a specific midrange frequency), where the identities of the speakers would be unknown to us and to the NRC staff member running the experiment. We would listen to a mix of musical selections, with the experimenter switching between the speakers in random order as the music played (this is where those aforementioned ID lights come in handy), and our job was to rate all four speakers using the parameters above. Toward the end of the listening session, we would be allowed some time during which the listeners themselves could control the speaker switcher if, for example, they wished to compare one specific speaker against another (say, “2” vs. “4”). Listeners (and remember, these are journalists and some PSB-related PR/marketing guys we’re talking about) were told to try to remain expressionless and not to communicate either verbally or non-verbally while the listening sessions were in progress. We tried.
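Level-matching at a single midrange frequency amounts to measuring each speaker's SPL with the same test signal and then applying a per-speaker gain offset that brings them all to a common target. A minimal sketch, assuming made-up SPL figures (not the actual test values):

```python
# Hypothetical sketch of single-frequency level matching; the measured
# SPL values below are invented for illustration.
def matching_offsets(measured_spl_db, target_db=None):
    """Per-speaker gain offsets (dB) that bring every speaker to target_db.

    If no target is given, match everything down to the quietest speaker,
    so all offsets are attenuations (never clipping-prone boosts).
    """
    if target_db is None:
        target_db = min(measured_spl_db.values())
    return {sid: round(target_db - spl, 1) for sid, spl in measured_spl_db.items()}

# SPL measured for each speaker ID with the same midrange tone at the
# same drive level -- more sensitive speakers read louder:
measured = {"1": 86.0, "2": 89.5, "3": 84.2, "4": 87.0}
offsets = matching_offsets(measured)
# Speaker "3" (the quietest) gets 0 dB; the others are attenuated to meet it.
```

This is why the blind comparison isolates the speakers themselves: once levels are matched, a louder-sounding speaker can no longer win simply by being more sensitive.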

Barton further explained that, in the first of our blind listening tests, we would hear three speakers under comparison to one another, plus a fourth “anchor” speaker whose performance was decidedly off pace relative to the other three. Then, in the second test session, we would hear four speakers under evaluation—some carried forward from the first session, but some that were not.

What the Tests Were Like

In three words, I found the tests “revealing,” “demanding,” and “stressful”.

Revealing: The blind screen really forces you to ask, “What are the actual differences between these speakers” and, “How big are the differences, really?” In a way, it’s thoroughly refreshing to hear products without any biases regarding size, shape, price, configuration, or presumed design pedigree. The good news is that this is a true “what you hear is what you get” listening experience—one that, in a sense, invites listeners to free their minds of preconceptions and simply observe what is. I’m onboard with that idea.

Demanding: Unlike leisurely in-home listening tests, where you can use familiar musical materials in diagnostic ways to hear how one speaker handles a set of specific musical passages vs. another, here the material was not necessarily familiar, and the randomized switching did not necessarily lend itself to learning as much as possible about the speakers under test. There was, for me, the pressure of needing to come up with accurate ratings in a very short span of time, and without making nearly as many specific comparisons as I might in a typical Playback review context. So the demand was to get the judgment right, right away, and with incomplete sonic data.

I found that I could tell a lot about the speakers’ apparent frequency balance characteristics (and about the smoothness of their response curves), and could also learn a fair amount about their apparent resolution, transient speed, and textural characteristics. However, I could learn little about their imaging and full-fledged spatial characteristics, largely because—as I learned after the first session was completed—we were listening to single samples of each loudspeaker, playing in mono. (Am I the only guy who feels this is a strange way to evaluate a product that will almost always be heard in stereo? Just asking…). Why mono? According to the NRC, listening to a single speaker in mono gives more consistent test results and tends (or so it is theorized) to avoid various masking effects that a stereophonic presentation might entail. Still, I retain a healthy skepticism…

Stressful: I suppose audiophiles in general and audio journalists in particular take pride in their listening and evaluation skills, so a double-blind listening session introduces stressors galore. Questions such as, “What if I’ve got garbage for ears?” or “What if my powers of discernment aren’t as keen as I think they are?” go racing through your head, exacerbated by the constant time pressures at hand (not to mention an utter absence of traditional left/right imaging cues to work with).

Finally, I found that the available listening seats were far from equivalent (although in theory they are supposed to yield fairly consistent listening test results). The room is designed so that four listeners can take part in tests at once, with two chairs placed in the front and two more chairs directly behind them. In the first session, I was in the left/front seat, and for the second session I was in the right/rear seat. I found it was much harder to make critical assessments about speaker performance from the right/rear seat, where I sat directly behind another listener—a listening position I would rarely if ever encounter in my home. Still, I did the best I could, and waited to learn the results later.

The Envelope, Please

In the earlier session, the four speaker systems under test were (as I later learned):

• The PSB Imagine Mini stand mount monitor used with a PSB HD8 subwoofer,
• The PSB Imagine T2 floorstander,
• The B&W CM7 floorstander, and
• An old (very old) Tannoy stand mount monitor from a bygone era.

I ranked the speakers in the order I’ve shown above (from top to bottom), with the Tannoy lagging far behind its more contemporary competitors. Of the four, and not too surprisingly, I found that the Imagine Mini/HD8 combo sounded very similar, though not identical, to the Imagine T2 floorstander (both speakers are from the same product family and were designed not too long after one another).

In the later session, the four speakers under test were (as I later learned):

• The PSB Imagine Mini stand mount monitor used with a PSB HD8 subwoofer,
• The PSB Imagine T2 floorstander,
• The B&W CM7 floorstander, and
• A prototype PSB product about which all journalists present were sworn to secrecy.

I ranked the speakers in the order shown (from top to bottom), and found it gratifying that my rank-orderings and overall fidelity ratings tracked closely with one another from session 1 to session 2 (Phew! I guess my observational/analytical skills are OK after all).
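Agreement between the two sessions’ rank-orderings is the kind of thing that can be quantified with Spearman’s rank correlation. A small stdlib-only Python sketch, with hypothetical rank lists standing in for the speakers heard in both sessions:

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two equal-length rank lists (no ties).

    Uses the classic formula rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
    where d is the per-item rank difference between the two lists.
    """
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical rank-orders for the three speakers common to both sessions
# (Mini+HD8, T2, CM7); identical orderings yield rho = 1.0, a fully
# reversed ordering yields rho = -1.0.
session1 = [1, 2, 3]
session2 = [1, 2, 3]
rho = spearman_rho(session1, session2)  # 1.0 for identical orderings
```

A rho near 1.0 across sessions is exactly the “my ratings tracked closely” result described above, made numeric.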

Several general observations I would make are these:

• PSB makes some darned nice speakers, and their consistency of sound within a given product family is quite impressive.
• B&W makes a darned fine speaker, too, though one that caters to a different set of listener tastes (or perhaps a slightly different definition of “Fidelity”) than my own.
• Behind the scrim, and listening in mono, it is very hard to tell how big a speaker system is—the scrim eliminates any possibility of being biased by what our eyes tell us our ears should hear.
• You can learn a lot from blind tests, but not everything an audiophile would want to know.

After my visit, and at Paul Barton’s suggestion, I’ve been going back and reading (or in some cases re-reading) some of Dr. Floyd Toole’s NRC research papers—papers that provide the conceptual underpinning behind the procedures and experimental methodologies used for blind listening tests as conducted at the NRC. Honestly, those papers have raised more conceptual “red flags” for me than they have answered, although I find Toole’s work (and his obviously keen mind) simply fascinating. But, perhaps, those conceptual stumbling blocks will have to wait for a different day and a different blog.

For now, I’d just like to express my thanks for Paul Barton having given me and the other journalists on hand a wonderful window of insight into his world and work at the NRC Acoustics Labs.

Happy Listening.
