Psychoacoustics

August 15, 2016 — January 28, 2020

Figure 1

1 Psychoacoustic units

A quick, incomplete reference to pascals, bels, erbs, Barks, sones, hertz, semitones, mels and whatever else I happen to need.

The actual auditory system is atrociously complex, and I’m not going to try to compete against, e.g., full perceptual models here, even if I did know a stirrup from a hammer or a cochlea from a cauliflower ear. Measuring what we can perceive with our sensory apparatus is a whole field of hacks to account for masking effects and variable resolution in time, space and frequency, not to mention variation between individuals.

Nonetheless, when studying audio there are some units which are more natural to human perception than the natural-to-a-physicist physical units such as hertz and pascals. SI units are inconvenient when studying musical metrics or machine listening because they do not closely match human perceptual differences — 50 Hz is a significant difference at a base frequency of 100 Hz, but insignificant at 2000 Hz. How big such a difference is, and what it means, is a rather complex and contingent question.

Since my needs are machine listening features and thus computational speed and simplicity over perfection, I will wilfully and with malice ignore any fine distinctions I cannot be bothered with, regardless of how many articles have been published discussing said details. For example, I will not cover “salience”, “sonorousness” or cultural difference issues.

1.1 Start point: physical units

SPL, Hertz, pascals.

1.2 First elaboration: Logarithmic units

This innovation is nearly universal in music studies, because of its extreme simplicity. However, it’s constantly surprising to machine listening researchers, who keep rediscovering it when they get frustrated with the linear-frequency FFT spectrogram. Bels/decibels, semitones/octaves, dBA, dBV…
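A minimal sketch of the usual conversions (function names are mine, not from any particular library):

```python
import math

def amplitude_to_db(a, a_ref=1.0):
    """Amplitude ratio to decibels: 20 log10, since power goes as amplitude squared."""
    return 20.0 * math.log10(a / a_ref)

def frequency_ratio_to_semitones(f, f_ref):
    """Frequency ratio to equal-tempered semitones (12 per octave)."""
    return 12.0 * math.log2(f / f_ref)

# A doubling of amplitude is about +6 dB; a doubling of frequency is an octave.
print(amplitude_to_db(2.0))                        # ≈ 6.02
print(frequency_ratio_to_semitones(880.0, 440.0))  # 12.0
```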

1.3 Next elaboration: “Cambridge” and “Munich” frequency units

Bark and ERB measures; these seem to be more common in the acoustics and psycho-acoustics community. An introduction to selected musically useful bits is given by Parncutt and Strasburger (Parncutt and Strasburger 1994).

According to Moore (2014), the key reference for Barks is Zwicker’s “critical band” research (Zwicker 1961), extended by Brian Moore et al., e.g. in Moore and Glasberg (1983).

Traunmüller (1990) gives a simple rational formula to approximate the in-any-case-approximate lookup tables, as does Moore and Glasberg (1983), and both relate these to erbs.

1.3.1 Barks

Descriptions of Barks seem to start with the statement that above about 500 Hz this scale is near logarithmic in the frequency axis. Below 500 Hz the Bark scale approaches linearity. It is defined by an empirically derived table, but there are analytic approximations which seem just as good.

The Traunmüller approximation for critical-band rate in Bark:

\[ z(f) = \frac{26.81}{1+1960/f} - 0.53 \]

Lach Lau amends the formula above 20.1 Bark:

\[ z'(f) = z(f) + \mathbb{I}\{z(f)>20.1\}(z(f)-20.1)* 0.22 \]
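The two formulae above are easy to sketch in code (function names are mine):

```python
import math

def hz_to_bark(f):
    """Traunmüller (1990) approximation to the critical-band rate in Bark."""
    return 26.81 / (1.0 + 1960.0 / f) - 0.53

def hz_to_bark_lach_lau(f):
    """Lach Lau's amendment: stretch the scale above 20.1 Bark."""
    z = hz_to_bark(f)
    if z > 20.1:
        z = z + (z - 20.1) * 0.22
    return z

# 1 kHz should land near 8.5 Bark on the classic scale.
print(hz_to_bark(1000.0))  # ≈ 8.53
```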

Hartmut Traunmüller’s online unit conversion page can convert these for you, and Dik Hermes summarises some history of how we got this way.

1.3.2 Erbs

Newer; works better at low frequencies (but possibly not at high frequencies?). Erbs seem to be popular for analysing psychoacoustic masking effects.

Erbs are given different formulae and capitalisation depending on where you look. Here’s one from Parncutt and Strasburger (1994) for the “ERB-rate”:

\[ H_p(f) = H_1\ln\left(\frac{f+f_1}{f+f_2}\right)+H_0, \]

where

\[ \begin{aligned} H_1 &= 11.17 \text{ erb}\\ H_0 &= 43.0 \text{ erb}\\ f_1 &= 312 \text{ Hz}\\ f_2 &= 14675 \text{ Hz} \end{aligned} \]

Erb bandwidths themselves (as distinct from the ERB-rate at a given frequency) are given by

\[ B_e = 6.23 \times 10^{-6} f^2 + 0.09339 f + 28.52. \]
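Both formulae, transcribed directly (constants as above; function names are mine):

```python
import math

# Constants from Parncutt & Strasburger (1994).
H1, H0 = 11.17, 43.0      # erb
F1, F2 = 312.0, 14675.0   # Hz

def hz_to_erb_rate(f):
    """ERB-rate H_p(f): roughly, how many ERBs lie below frequency f."""
    return H1 * math.log((f + F1) / (f + F2)) + H0

def erb_bandwidth(f):
    """ERB bandwidth B_e in Hz at centre frequency f (quadratic fit)."""
    return 6.23e-6 * f**2 + 0.09339 * f + 28.52

# At 1 kHz: ERB-rate ≈ 15.3 erb, bandwidth ≈ 128 Hz.
print(hz_to_erb_rate(1000.0), erb_bandwidth(1000.0))
```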

1.4 Elaboration into space: Mel frequencies

Mels are credited by Traunmüller (1990) to Beranek (1949) and by Parncutt (2005) to Stevens and Volkmann (1940).

The mel scale is not used as a metric for computing pitch distance in the present model, because it applies only to pure tones, whereas most of the tone sensations evoked by complex sonorities are of the complex variety (virtual rather than spectral pitches).

Certainly some of the ERB experiments also used pure tones, but maybe… ach, I don’t even care.

Mels are common in the machine listening community, mostly via MFCCs, the Mel-frequency cepstral coefficients, a representation that has been historically popular for measuring psychoacoustic similarity of sounds. (Davis and Mermelstein 1980; Mermelstein and Chen 1976)

Here’s one formula, the “HTK” formula.

\[ m(f) = 1127 \ln(1+f/700) \]

There are others, such as the “Slaney” formula, which is more complicated and piecewise defined. I can’t be bothered searching out the details for now.
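The HTK formula and its inverse are one-liners (function names are mine):

```python
import math

def hz_to_mel_htk(f):
    """HTK mel scale: m(f) = 1127 ln(1 + f/700)."""
    return 1127.0 * math.log(1.0 + f / 700.0)

def mel_to_hz_htk(m):
    """Inverse of the HTK formula."""
    return 700.0 * (math.exp(m / 1127.0) - 1.0)

# By construction, 1000 Hz sits at (almost exactly) 1000 mel.
print(hz_to_mel_htk(1000.0))  # ≈ 1000.0
```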

1.5 Perceptual Loudness

ISO 226:2003 Equal loudness contour image by Lindosland:

Figure 2: ISO 226:2003 equal-loudness contours

Sones (Stevens and Volkmann 1940) are a power-law intensity scale. Phons, ibid., are a logarithmic intensity scale: something like the dB level of the signal filtered to match the human ear, which is close to… dBA? Something like that. But you can get more sophisticated. Keyword: Fletcher-Munson curves.

For this level of precision, the coupling of frequency and amplitude into perceptual “loudness” becomes important: equal increments of sound pressure are no longer equally loud at different source frequencies. Instead they are related via equal-loudness contours, which you can get from an actively updated ISO standard at great expense, or try to reconstruct from journal articles. Suzuki et al. (2003) seems to be the accepted modern version, but their report gives only graphs and omits values for the few equations. Table-based loudness contours are available under the MIT license from the Surrey git repo, under iso226.m. Closed-form approximations for an equal-loudness contour at fixed SPL are given in Suzuki and Takeshima (2004), equation 6.

When the loudness of an \(f\)-Hz comparison tone is equal to the loudness of a reference tone at 1 kHz with a sound pressure of \(p_r\), then the sound pressure \(p_f\) at the frequency \(f\) Hz is given by the following function:

\[ p^2_f =\frac{1}{U^2(f)}\left[(p_r^{2\alpha(f)} - p_{rt}^{2\alpha(f)}) + (U(f)p_{ft})^{2\alpha(f)}\right]^{1/\alpha(f)} \]

AFAICT they don’t define \(p_{ft}\) or \(p_{rt}\) anywhere, and I don’t have enough free attention to find a simple expression for the frequency-dependent parameters, which I think are still spline-fit. (?)
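The structure of the equation is still easy to transcribe if we leave the undefined pieces as inputs — here \(\alpha(f)\) and \(U(f)\) are passed as callables and the threshold pressures \(p_{rt}, p_{ft}\) as plain numbers, since the paper gives no closed forms for them (all names here are mine):

```python
import math

def comparison_pressure(p_r, f, alpha, U, p_rt, p_ft):
    """Suzuki & Takeshima (2004) eq. 6: sound pressure p_f at frequency f
    that is equally loud to a 1 kHz reference tone at pressure p_r.
    alpha and U are frequency-dependent parameter functions; p_rt and p_ft
    are threshold pressures. None of these are specified here."""
    a = alpha(f)
    u = U(f)
    p_f_sq = (1.0 / u**2) * (
        (p_r**(2 * a) - p_rt**(2 * a)) + (u * p_ft)**(2 * a)
    )**(1.0 / a)
    return math.sqrt(p_f_sq)

# Sanity check: with alpha ≡ 1, U ≡ 1 and zero thresholds, the comparison
# tone needs exactly the reference pressure.
print(comparison_pressure(0.02, 250.0, lambda f: 1.0, lambda f: 1.0, 0.0, 0.0))
```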

There is an excellent explanation of the point of all this, with diagrams, by Joe Wolfe.

1.6 Onwards and upwards like a Shepard tone

At this point, where we are already combining frequency and loudness, things are getting weird; we are usually measuring people’s reported subjective loudness levels for various signals, some of which are unnatural signals (pure tones), and with real signals we rapidly start running into temporal masking effects and phasing and so on.

Thankfully, I am not in the business of exhaustive cochlear modeling, so we can all go home now. The unhealthily curious might read (Hartmann 1997; Moore 2007) and tell me the good bits, then move on to sensory neurology.

Figure 3

2 Psychoacoustic models in lossy audio compression

Pure link dump, sorry.

3 References

Ball. 1999. “Pump up the Bass.” Nature News.
———. 2014. “Rhythm Is Heard Best in the Bass.” Nature.
Bartlett, and Medhi. 1955. “On the Efficiency of Procedures for Smoothing Periodograms from Time Series with Continuous Spectra.” Biometrika.
Bauer, Benjamin B. 1970. “Octave-Band Spectral Distribution of Recorded Music.” Journal of the Audio Engineering Society.
Bauer, B., and Torick. 1966. “Researches in Loudness Measurement.” IEEE Transactions on Audio and Electroacoustics.
Benjamin. 1994. “Characteristics of Musical Signals.” In Audio Engineering Society Convention 97.
Beranek. 1949. “Acoustic Measurements.”
Bidelman, and Krishnan. 2009. “Neural Correlates of Consonance, Dissonance, and the Hierarchy of Musical Pitch in the Human Brainstem.” Journal of Neuroscience.
Bingham, Godfrey, and Tukey. 1967. “Modern Techniques of Power Spectrum Estimation.” Audio and Electroacoustics, IEEE Transactions on.
Bridle, and Brown. 1974. “An Experimental Automatic Word Recognition System.” JSRU Report.
Brown. 1991. “Calculation of a Constant Q Spectral Transform.” The Journal of the Acoustical Society of America.
Cancho, and Solé. 2003. “Least Effort and the Origins of Scaling in Human Language.” Proceedings of the National Academy of Sciences.
Cariani, and Delgutte. 1996a. “Neural correlates of the pitch of complex tones. I. Pitch and pitch salience.” Journal of Neurophysiology.
———. 1996b. “Neural correlates of the pitch of complex tones. II. Pitch shift, pitch ambiguity, phase invariance, pitch circularity, rate pitch, and the dominance region for pitch.” Journal of Neurophysiology.
Carter. 1987. “Coherence and Time Delay Estimation.” Proceedings of the IEEE.
Cartwright, González, and Piro. 1999. “Nonlinear Dynamics of the Perceived Pitch of Complex Sounds.” Physical Review Letters.
Cedolin, and Delgutte. 2005. “Pitch of Complex Tones: Rate-Place and Interspike Interval Representations in the Auditory Nerve.” Journal of Neurophysiology.
Cochran, Cooley, Favin, et al. 1967. “What Is the Fast Fourier Transform?” Proceedings of the IEEE.
Cooley, Lewis, and Welch. 1970. “The Application of the Fast Fourier Transform Algorithm to the Estimation of Spectra and Cross-Spectra.” Journal of Sound and Vibration.
Cooper, and Fazio. 1984. “A New Look at Dissonance.” Advances in Experimental Social Psychology.
Cousineau, McDermott, and Peretz. 2012. “The Basis of Musical Consonance as Revealed by Congenital Amusia.” Proceedings of the National Academy of Sciences.
Dattorro. n.d. “Madaline Model of Musical Pitch Perception.”
Davis, and Mermelstein. 1980. “Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences.” IEEE Transactions on Acoustics, Speech, and Signal Processing.
de Cheveigné, and Kawahara. 2002. “YIN, a Fundamental Frequency Estimator for Speech and Music.” The Journal of the Acoustical Society of America.
Duffin. 1948. “Function Classes Invariant Under the Fourier Transform.” Duke Mathematical Journal.
Du, Kibbe, and Lin. 2006. “Improved Peak Detection in Mass Spectrum by Incorporating Continuous Wavelet Transform-Based Pattern Matching.” Bioinformatics.
Elowsson, and Friberg. 2017. “Long-Term Average Spectrum in Popular Music and Its Relation to the Level of the Percussion.” In Audio Engineering Society Convention 142.
Fastl, and Zwicker. 2007. Psychoacoustics: Facts and Models. Springer Series in Information Sciences 22.
Ferguson, and Parncutt. 2004. “Composing In the Flesh: Perceptually-Informed Harmonic Syntax.” In Proceedings of Sound and Music Computing.
Fineberg. 2000. “Guide to the Basic Concepts and Techniques of Spectral Music.” Contemporary Music Review.
Gerzon. 1976. “Unitary (Energy-Preserving) Multichannel Networks with Feedback.” Electronics Letters.
Godsill, and Davy. 2005. “Bayesian Computational Models for Inharmonicity in Musical Instruments.” In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005.
Gómez, and Herrera. 2004. “Estimating The Tonality Of Polyphonic Audio Files: Cognitive Versus Machine Learning Modelling Strategies.” In ISMIR.
Gräf. 2010. “Term Rewriting Extension for the Faust Programming Language.” Signal.
Guinan Jr. 2012. “How Are Inner Hair Cells Stimulated? Evidence for Multiple Mechanical Drives.” Hearing Research.
Harris. 1978. “On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform.” Proceedings of the IEEE.
Hartmann. 1997. Signals, Sound, and Sensation. Modern Acoustics and Signal Processing.
Heikkila. 2004. “A New Class of Shift-Invariant Operators.” IEEE Signal Processing Letters.
Helmholtz. 1863. Die Lehre von Den Tonempfindungen Als Physiologische Grundlage Für Die Theorie Der Musik.
Hennig, Fleischmann, Fredebohm, et al. 2011. “The Nature and Perception of Fluctuations in Human Musical Rhythms.” PLoS ONE.
Herman. 2007. Physics of the Human Body. Biological and Medical Physics, Biomedical Engineering.
Hermes. 1988. “Measurement of Pitch by Subharmonic Summation.” The Journal of the Acoustical Society of America.
Hove, Marie, Bruce, et al. 2014. “Superior Time Perception for Lower Musical Pitch Explains Why Bass-Ranged Instruments Lay down Musical Rhythms.” Proceedings of the National Academy of Sciences.
Huron, and Parncutt. 1993. “An Improved Model of Tonality Perception Incorporating Pitch Salience and Echoic Memory.” Psychomusicology: A Journal of Research in Music Cognition.
Irizarry. 2001. “Local Harmonic Estimation in Musical Sound Signals.” Journal of the American Statistical Association.
Jacob. 1996. “Algorithmic Composition as a Model of Creativity.” Organised Sound.
Kameoka, and Kuriyagawa. 1969a. “Consonance Theory Part I: Consonance of Dyads.” The Journal of the Acoustical Society of America.
———. 1969b. “Consonance Theory Part II: Consonance of Complex Tones and Its Calculation Method.” The Journal of the Acoustical Society of America.
Krishnan, Xu, Gandour, et al. 2004. “Human frequency-following response: representation of pitch contours in Chinese tones.” Hearing Research.
Lahat, Niederjohn, and Krubsack. 1987. “A Spectral Autocorrelation Method for Measurement of the Fundamental Frequency of Noise-Corrupted Speech.” IEEE Transactions on Acoustics, Speech and Signal Processing.
Langner. 1992. “Periodicity Coding in the Auditory System.” Hearing Research.
Lerdahl. 1996. “Calculating Tonal Tension.” Music Perception: An Interdisciplinary Journal.
Li. 1992. “Random Texts Exhibit Zipf’s-Law-Like Word Frequency Distribution.” IEEE Transactions on Information Theory.
Licklider. 1951. “A Duplex Theory of Pitch Perception.” Experientia.
Lorrain. 1980. “A Panoply of Stochastic ‘Cannons’.” Computer Music Journal.
Ma, Green, Barker, et al. 2007. “Exploiting Correlogram Structure for Robust Speech Recognition with Multiple Speech Sources.” Speech Communication.
Manaris, Romero, Machado, et al. 2005. “Zipf’s Law, Music Classification, and Aesthetics.” Computer Music Journal.
Masaoka, Ono, and Komiyama. 2001. “A Measurement of Equal-Loudness Level Contours for Tone Burst.” Acoustical Science and Technology.
McDermott, Schemitsch, and Simoncelli. 2013. “Summary Statistics in Auditory Perception.” Nature Neuroscience.
Medan, Yair, and Chazan. 1991. “Super Resolution Pitch Determination of Speech Signals.” IEEE Transactions on Signal Processing.
Mermelstein, and Chen. 1976. “Distance Measures for Speech Recognition: Psychological and Instrumental.” In Pattern Recognition and Artificial Intelligence.
Michon, and Smith. 2011. “Faust-STK: A Set of Linear and Nonlinear Physical Models for the Faust Programming Language.” In Proceedings of the 11th International Conference on Digital Audio Effects (DAFx-11).
Millane. 1994. “Analytic Properties of the Hartley Transform and Their Implications.” Proceedings of the IEEE.
Moore. 2007. Cochlear hearing loss: physiological, psychological and technical issues. Wiley series in human communication science.
———. 2014. “Development and Current Status of the ‘Cambridge’ Loudness Models.” Trends in Hearing.
Moore, and Glasberg. 1983. “Suggested Formulae for Calculating Auditory‐filter Bandwidths and Excitation Patterns.” The Journal of the Acoustical Society of America.
Moorer. 1974. “The Optimum Comb Method of Pitch Period Analysis of Continuous Digitized Speech.” IEEE Transactions on Acoustics, Speech and Signal Processing.
Morales-Cordovilla, Peinado, Sanchez, et al. 2011. “Feature Extraction Based on Pitch-Synchronous Averaging for Robust Speech Recognition.” IEEE Transactions on Audio, Speech, and Language Processing.
Müller, Ellis, Klapuri, et al. 2011. “Signal Processing for Music Analysis.” IEEE Journal of Selected Topics in Signal Processing.
Narayan, Temchin, Recio, et al. 1998. “Frequency Tuning of Basilar Membrane and Auditory Nerve Fibers in the Same Cochleae.” Science.
Neely. 1993. “A model of cochlear mechanics with outer hair cell motility.” Journal of the Acoustical Society of America.
Noll. 1967. “Cepstrum Pitch Determination.” The Journal of the Acoustical Society of America.
Nordmark, and Fahlen. 1988. “Beat Theories of Musical Consonance.” Speech Transmission Laboratory, Quarterly Progress and Status Report.
Olson. 2001. “Intracochlear Pressure Measurements Related to Cochlear Tuning.” The Journal of the Acoustical Society of America.
Orlarey, Gräf, and Kersten. 2006. “DSP Programming with Faust, Q and SuperCollider.” In Proceedings of the 4th International Linux Audio Conference (LAC06).
Pakarinen, Välimäki, Fontana, et al. 2011. “Recent Advances in Real-Time Musical Effects, Synthesis, and Virtual Analog Models.” EURASIP Journal on Advances in Signal Processing.
Parncutt. 2005. “Psychoacoustics and Music Perception.” Musikpsychologie–Das Neue Handbuch.
Parncutt, and Strasburger. 1994. “Applying Psychoacoustics in Composition: ‘Harmonic’ Progressions of ‘Nonharmonic’ Sonorities.” Perspectives of New Music.
Pestana, Ma, and Reiss. 2013. “Spectral Characteristics of Popular Commercial Recordings 1950-2010.” In New York.
Plomp, and Levelt. 1965. “Tonal Consonance and Critical Bandwidth.” The Journal of the Acoustical Society of America.
Rabiner. 1977. “On the Use of Autocorrelation Analysis for Pitch Detection.” IEEE Transactions on Acoustics, Speech, and Signal Processing.
Rasch, and Plomp. 1999. “The Perception of Musical Tones.” The Psychology of Music.
Reitboeck, and Brody. 1969. “A Transformation with Invariance Under Cyclic Permutation for Applications in Pattern Recognition.” Information and Control.
Robinson, and Dadson. 1956. “A Re-Determination of the Equal-Loudness Relations for Pure Tones.” British Journal of Applied Physics.
Rouat, Liu, and Morissette. 1997. “A Pitch Determination and Voiced/Unvoiced Decision Algorithm for Noisy Speech.” Speech Communication.
Salamon, Gomez, Ellis, et al. 2014. “Melody Extraction from Polyphonic Music Signals: Approaches, Applications, and Challenges.” IEEE Signal Processing Magazine.
Salamon, Serrà, and Gómez. 2013. “Tonal Representations for Music Retrieval: From Version Identification to Query-by-Humming.” International Journal of Multimedia Information Retrieval.
Schöner. 2002. “Timing, Clocks, and Dynamical Systems.” Brain and Cognition.
Schroeder. 1961. “Improved Quasi-Stereophony and ‘Colorless’ Artificial Reverberation.” The Journal of the Acoustical Society of America.
———. 1962. “Natural sounding artificial reverberation.” Journal of the Audio Engineering Society.
Schroeder, and Logan. 1961. “‘Colorless’ Artificial Reverberation.” Audio, IRE Transactions on.
Serrà, Corral, Boguñá, et al. 2012. “Measuring the Evolution of Contemporary Western Popular Music.” Scientific Reports.
Sethares. 1997. “Specifying Spectra for Musical Scales.” The Journal of the Acoustical Society of America.
———. 1998. “Consonance-Based Spectral Mappings.” Computer Music Journal.
Sethares, Milne, Tiedje, et al. 2009. “Spectral Tools for Dynamic Tonality and Audio Morphing.” Computer Music Journal.
Skoe, and Kraus. 2010. “Auditory Brainstem Response to Complex Sounds: A Tutorial.” Ear and Hearing.
Slaney, Malcolm. 1998. “Auditory Toolbox.” Interval Research Corporation, Tech. Rep.
Slaney, M., and Lyon. 1990. “A Perceptual Pitch Detector.” In Proceedings of ICASSP.
Slepecky. 1996. “Structure of the Mammalian Cochlea.” In The Cochlea. Springer Handbook of Auditory Research 8.
Smith, Julius O. 2010. “Audio Signal Processing in Faust.” Online tutorial: https://ccrma.stanford.edu/jos/aspf.
Smith, Sonya T., and Chadwick. 2011. “Simulation of the Response of the Inner Hair Cell Stereocilia Bundle to an Acoustical Stimulus.” PLoS ONE.
Smith, Evan C., and Lewicki. 2006. “Efficient Auditory Coding.” Nature.
Smith, Julius O., and Michon. 2011. “Nonlinear Allpass Ladder Filters in Faust.” In Proceedings of the 14th International Conference on Digital Audio Effects (DAFx-11).
Sondhi. 1968. “New Methods of Pitch Extraction.” IEEE Transactions on Audio and Electroacoustics.
Steele, Boutet de Monvel, and Puria. 2009. “A Multiscale Model of the Organ of Corti.” Journal of Mechanics of Materials and Structures.
Stevens, and Volkmann. 1940. “The Relation of Pitch to Frequency: A Revised Scale.” The American Journal of Psychology.
Stevens, Volkmann, and Newman. 1937. “A Scale for the Measurement of the Psychological Magnitude Pitch.” The Journal of the Acoustical Society of America.
Stolzenburg. 2015. “Harmony Perception by Periodicity Detection.” Journal of Mathematics and Music.
Suzuki, Mellert, Richter, et al. 2003. “Precise and Full-Range Determination of Two-Dimensional Equal Loudness Contours.”
Suzuki, and Takeshima. 2004. “Equal-Loudness-Level Contours for Pure Tones.” The Journal of the Acoustical Society of America.
Tan, and Alwan. 2011. “Noise-Robust F0 Estimation Using SNR-Weighted Summary Correlograms from Multi-Band Comb Filters.” In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Tarnopolsky, Fletcher, Hollenberg, et al. 2005. “Acoustics: The Vocal Tract and the Sound of a Didgeridoo.” Nature.
Terhardt. 1974. “Pitch, Consonance, and Harmony.” The Journal of the Acoustical Society of America.
Thompson, and Parncutt. 1997. “Perceptual Judgments of Triads and Dyads: Assessment of a Psychoacoustic Model.” Music Perception.
Titchmarsh. 1926. “Reciprocal Formulae Involving Series and Integrals.” Mathematische Zeitschrift.
Traunmüller. 1990. “Analytical Expressions for the Tonotopic Sensory Scale.” The Journal of the Acoustical Society of America.
Tymoczko. 2006. “The Geometry of Musical Chords.” Science.
Umesh, Cohen, and Nelson. 1999. “Fitting the Mel Scale.” In 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No.99CH36258).
Valimaki, Parker, Savioja, et al. 2012. “Fifty Years of Artificial Reverberation.” IEEE Transactions on Audio, Speech, and Language Processing.
Wagh. 1976. “Cyclic Autocorrelation as a Translation Invariant Transform.” India, IEE-IERE Proceedings.
Welch. 1967. “The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging over Short, Modified Periodograms.” IEEE Transactions on Audio and Electroacoustics.
Williamson, and Murray-Smith. 2002. “Audio Feedback for Gesture Recognition.”
Xin, and Qi. 2006. “A Many to One Discrete Auditory Transform.” arXiv:math/0603174.
Young, Evermann, Gales, et al. 2002. “The HTK Book.”
Zanette. 2008. “Playing by Numbers.” Nature.
Zwicker. 1961. “Subdivision of the Audible Frequency Range into Critical Bands (Frequenzgruppen).” The Journal of the Acoustical Society of America.
Zwislocki. 1980. “Symposium on Cochlear Mechanics: Where Do We Stand After 50 Years of Research?” The Journal of the Acoustical Society of America.