How We Test Gaming Headsets

What makes one headset perform better than another? And what is "legendary gaming sound," anyway? Every new headset is purportedly developed with professional gamers in mind, and supposedly arrives with never-before-seen improvements that enhance your experience, particularly in first-person shooters.

What criteria should you use for comparing products in this category, and which buzzwords should you ignore? Before we start publishing our own reviews, we wanted to provide the scientific background that informs the methodologies you'll see us use.

Today's story includes a deeper analysis of headsets and their performance. We'll cover the theoretical basics of sound, human hearing, and positional audio. You'll also come away with an understanding of what we measure and how. And in the end, digging into so-called gaming sounds yields some surprising results.

What Do We Have In Mind?

To evaluate whether a pair of headphones provides accurate acoustics and neutral sound reproduction with the widest possible sound stage, it is necessary to examine the effects you hear in games: weapons, environmental noises, vehicle and aircraft sounds, and the quality of speech reproduction across different types of voices.

The gaming sounds we analyze come from titles like Battlefield 4, Far Cry 4, Fallout 3, and Crysis. They represent the "gaming soundscape" necessary to evaluate the quality of audio reproduction. Interestingly, many of these sounds are cut off at around 16 kHz. And in some cases, games apply clipping or compression to make the audio seem even louder.
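How would you spot that 16 kHz cutoff yourself? A minimal sketch using NumPy: measure what fraction of a clip's spectral energy sits above a cutoff frequency. (This is a naive single-FFT illustration, not our actual measurement chain; a real analysis would window and average many frames.)

```python
import numpy as np

def energy_above_cutoff(samples, sample_rate, cutoff_hz=16000.0):
    """Return the fraction of spectral energy at or above cutoff_hz.

    Naive single-FFT sketch; real analysis would window and average.
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return spectrum[freqs >= cutoff_hz].sum() / spectrum.sum()

# A pure 1 kHz tone sampled at 48 kHz has essentially no energy above 16 kHz.
t = np.arange(48000) / 48000.0
tone = np.sin(2 * np.pi * 1000.0 * t)
```

A game effect that was low-pass filtered at 16 kHz during mastering would return a near-zero fraction for any cutoff at or above that point.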

Soon we'll see which frequency ranges are actually used in games and how certain sounds are reproduced. But first we need to cover a little bit of theory. After all, we don't want to nit-pick the headsets we review without telling you why we're being critical. It'll help for you to understand the method behind our madness.

A Bit About Sound And Waves

Sound waves are at the heart of our endeavor. We're looking at what happens between a sound source and the human ear as a receptor. Obviously, we want to create this sound using our test objects and hear it with our own ears. However, a lot can happen on the way between a source and its target (and also after the sound is physically recognized, as it's being transmitted, and when the brain interprets generated data).

You may have already learned some of this in physics class, but sound is a purely mechanical oscillation process that, depending on the respective medium, has a specific speed of propagation (in air, it's approximately 330 to 340 m/s). The speed of sound is also influenced by factors like humidity and temperature, but not by air pressure. For the sake of simplicity, we'll ignore those factors here.
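The temperature dependence mentioned above is well approximated by a simple linear formula near room temperature; a quick sketch:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) at temperature temp_c (°C).

    Linear approximation, valid near room temperature: 331.3 m/s at 0 °C,
    rising about 0.6 m/s per degree.
    """
    return 331.3 + 0.606 * temp_c

# speed_of_sound(0)  -> 331.3 m/s
# speed_of_sound(20) -> about 343.4 m/s
```

This is why the article's round figures of 330 to 340 m/s are a reasonable shorthand for typical indoor conditions.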

More pertinent to our discussion is the fact that sound travels through space in a wave-like manner. Depending on the pitch of a tone, the corresponding sound wave has a specific wavelength, which is easy to calculate: the speed of sound divided by the frequency. For example, a test signal of one kilohertz (that's 1000 oscillations/s) at a propagation speed of 330 m/s gives us 330 meters per second divided by 1000. Thus, the wavelength would be 33 cm (0.33 m). At 100 Hz, the wavelength would be 3.30 meters. At 30 Hz, it's a full 11 meters!
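The arithmetic above fits in one line of code:

```python
def wavelength_m(frequency_hz, speed_of_sound=330.0):
    """Wavelength in meters: speed of sound divided by frequency."""
    return speed_of_sound / frequency_hz

# wavelength_m(1000) -> 0.33 m (33 cm)
# wavelength_m(100)  -> 3.3 m
# wavelength_m(30)   -> 11.0 m
```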

How We Hear: The Eardrum

Sound reaches our ear and travels along the ear canal to the eardrum. Let's keep it simple and use our example test frequency of one kilohertz. This sinusoidal sound is nothing more than a specific kind of pressure wave hitting the eardrum. The pressure exerted on the eardrum changes periodically, following the respective frequency illustrated in the image above, since the sound only hits the eardrum from one side.

What we "hear" is the difference between the sound pressure and the constant ambient air pressure, which should be identical on both sides of the eardrum and therefore cancels out. These differences are no larger than just a few millionths of a bar, while the average air pressure in normal weather at sea level is 1000 mbar. This also shows just how sensitive our hearing actually is.
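Those tiny pressure differences are exactly what the decibel scale expresses. Sound pressure level (SPL) is the pressure relative to a 20 µPa reference, roughly the threshold of hearing; a short sketch:

```python
import math

def spl_db(pressure_pa, reference_pa=20e-6):
    """Sound pressure level in dB re 20 µPa (approx. threshold of hearing)."""
    return 20.0 * math.log10(pressure_pa / reference_pa)

# "A few millionths of a bar": 2 microbar = 0.2 Pa.
# spl_db(0.2) -> 80 dB SPL, already loud, yet a minuscule fraction
# of the roughly 100,000 Pa (1000 mbar) of ambient air pressure.
```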

Unfortunately, the ear is a little limited when it comes to perceptible frequency range. On average, we pick up somewhere between 16 Hz and 16 kHz or so. Depending on a person's age, this range may be a little narrower, especially when it comes to higher-pitched sounds.

Physiology Of The Ear

Imagine sitting in a concert hall with an orchestra playing. There, you hear the loudest parts at up to 100 phon (subjectively perceived volume). At the same time, a recording of the concert is made, and mixed to precisely match the sound as it was heard in the first row.

So impressed, you purchase this recording on CD. But when you play it back, you're disappointed: the sound within the four walls of your home, as well as the sound conveyed through headphones, seems completely different from what you experienced at the concert. The reason is our ear, which strongly influences the subjective perception of volume according to frequency and actual sound level. The human ear is not a linear, let alone an ideal, transducer!

This frequency-dependent perceived loudness was analyzed as early as 1933 by Harvey Fletcher and Wilden A. Munson, who were the first to translate the results into a comprehensive set of equal-loudness curves. The curves show very clearly that, especially at low frequencies, perceived volume drops drastically as the sound level decreases. Certain high-frequency sounds are affected by this natural characteristic of human hearing as well. As a result, various amplifiers and active speakers today provide some "hearing-corrected" setting or "physiological" volume correction (loudness function), even though such features should always be approached with a healthy level of skepticism.
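A practical descendant of this work is the standardized A-weighting curve, which roughly approximates the inverse of an equal-loudness contour at moderate levels. As an illustration (this is the published IEC 61672 formula, not a measurement of any headset), here is a sketch of the weighting in code:

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz), per the IEC 61672 formula.

    Roughly approximates inverted equal-loudness behavior at moderate
    levels; 0 dB at 1 kHz by construction.
    """
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00

# a_weighting_db(1000) -> about 0 dB
# a_weighting_db(100)  -> about -19 dB: the ear is far less
# sensitive to a quiet 100 Hz tone than to the same tone at 1 kHz.
```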

The much-praised "linearity" throughout the sound spectrum cannot be achieved without a very high sound level, since our own ear will always be the weakest link in the chain!

Another factor is the different degrees of attenuation for various frequencies in relation to the distance and angle to the sound source. The signals that reach our ears on the shortest and most direct route are called "direct sound." However, since sound propagates in a wave-like manner (similar to light), and sources don't emit sound in just one direction, so-called reflections occur. Sound hits the auricle and is reflected back (in a weakened state), similar to a ball on a billiard table.

For closed and semi-open systems, the space between the transducer and ear is in fact our audio sensor chamber, thus explaining how and why different materials and shapes can noticeably change and distort sound.

Sound Event And Auditory Event

Even if a specific sound source produces the same sound event (for example, a piece of music played in an endless loop), every human being will subjectively perceive the resulting sound in a completely different way.

In contrast to this so-called sound event, the subjective auditory event is individually defined by spatial and temporal characteristics. The relation between sound event (stimulus) and auditory event (sensation) is a complex matter, and it would be wrong to treat one exactly like the other.

Too complicated? Let's take a look at what these things mean in detail by comparing the real sound event (measurable values) to our subjectively perceived auditory event (subjective impression):

| Sound event (measurable) | Auditory event (perceived) |
| --- | --- |
| Sound pressure level | Loudness (phon) |
| Frequency | Pitch (mel) |
| Acoustic spectrum | Timbre |
| Position of sound source | Localization and direction of the auditory event |

We already learned about sound pressure level, frequency, and the acoustic spectrum of complex sound events as a mixture of various frequencies. This leaves the position of the sound source and the impact that these four distinct properties ultimately have when they act in combination.

Frequency Ranges And Subjective Auditory Impression

By analyzing exactly which frequencies a device is able to reproduce, and how well or poorly it reproduces them, it is possible to evaluate the device's overall performance across the entire frequency spectrum with a high level of certainty.

It might help to check out the interactive representation over at independentrecording.net, which provides a good overview of individual frequency ranges, examples of important sounds found in each range, and their potential sources.

This diagram shows that all frequency ranges from lowest bass to highest pitch are well occupied.

Meanwhile, the following table presents an overview of the most important frequency ranges we use to subjectively evaluate headsets and headphones.

| Category | Frequency Range | Description |
| --- | --- | --- |
| Lowest bass | 16 to 32 Hz | This area of the sub-bass octave is reached only by very few instruments, but may be essential for classical music. Only a handful of the speakers we test are able to reproduce this range to some extent, let alone in its entirety. |
| Low bass | 32 to 64 Hz | The bass octave (32.7 to 65.4 Hz) contains many interesting instruments, as well as the effect track of clean-mixed Dolby soundscapes used in movies (so-called track 0) and certain game effects. It can include extremely low-tuned bass guitars, earthquakes, detonations, or big bass drums (kick drums) for those dance enthusiasts among us. Without low bass, everything sounds a little flat. |
| Bass and upper bass | 64 to 150 Hz | Bass up to 150 Hz, which includes the great octave (65.4 to 130.8 Hz), contains the fundamental speech frequency of male voices and has a strong influence on the lifelike reproduction of male vocals. When we look at this range, that's exactly what we're looking for, along with the harmonization of different voices, including the localization of individual sources (choir). |
| Lower mid-range | 150 to 400 Hz | Together with upper bass, the so-called fundamental range plays an important role in the subjectively perceived warmth and fullness of sound provided by a large number of instruments. The fundamental speech frequency of female voices can also be found in this area. We thus evaluate both individual female vocals and overlapping voices (choir) to assess the quality of spatial reproduction. |
| Upper mid-range | 400 Hz to 2 kHz | The upper mid-range contains the 1 kHz point, which is still used as a reference for many measurements. Unfortunately, many cheaper devices are tuned with these frequencies in mind in order to optimize their specifications and impress potential customers. However, this range also plays a relevant role in good spatial resolution, especially when it comes to broadband noise. |
| Lower treble | 2 kHz to 3.5 kHz | The human ear is most sensitive between 2 and 3.5 kHz, especially since the lower levels of this range are responsible for a good overtone reproduction of the human voice. These frequencies are also critical for the recognition of a voice or an instrument. Thus, in this context, this range relates to the respective tone color, or timbre. |
| Medium treble | 3.5 kHz to 6 kHz | The success or failure of speech reproduction as a whole is determined by this range, since sibilant sounds (like the letter "s" and hiss sounds) fall into it. A good reproduction of these frequencies can make or break the brilliance of many string and wind instruments. When it's over-represented, this range may easily leave a metallic or scratchy impression. |
| Upper treble | 6 kHz to 10 kHz | This range is important for a proper broadband representation of harmonics across many instruments and air sounds, as well as various percussion instruments. A popular subject to observe in this range is the frequently quoted jazz brush. While a guitar may suffer less from a bad representation, a violin might, in extreme cases, easily be degraded to a flute. |
| Super high frequency | 10 kHz to 20 kHz | Only a few instruments are included in this range. However, for people with good hearing, it ultimately makes the difference between poor versus good reproduction. Anyone who claims they can hear higher frequencies than these is veering off into voodoo territory. They're the ones who'll spend big bucks on gilded speaker cables. After all, the actual ability to hear frequencies of ~8 kHz and up is heavily dependent on the listener's age. |
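For readers who want to map a given tone onto the table, here is a small hypothetical helper that mirrors the band boundaries above (the names and cutoffs come straight from the table; the function itself is just an illustration):

```python
# Band boundaries in Hz, mirroring the evaluation table above.
BANDS = [
    (16, 32, "Lowest bass"),
    (32, 64, "Low bass"),
    (64, 150, "Bass and upper bass"),
    (150, 400, "Lower mid-range"),
    (400, 2000, "Upper mid-range"),
    (2000, 3500, "Lower treble"),
    (3500, 6000, "Medium treble"),
    (6000, 10000, "Upper treble"),
    (10000, 20000, "Super high frequency"),
]

def band_for(frequency_hz):
    """Return the table's category name for a frequency, or None if outside."""
    for low, high, name in BANDS:
        if low <= frequency_hz < high:
            return name
    return None

# band_for(100)  -> "Bass and upper bass" (male vocal fundamentals)
# band_for(5000) -> "Medium treble" (sibilants live here)
```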
