Sound & Acoustics Handbook



The trihedral corners of the room (where three surfaces meet) are where low frequencies build up the most, so they get top priority; the dihedral corners get next priority, since they work on 2 dimensions. Which is why the first step in setting up your acoustic treatment is to mount a bass trap at each of the trihedral corners. Whenever two opposing walls are parallel to each other, sound can bounce back and forth between them… which makes those surfaces the next candidates for treatment. Acoustic panels aren't strictly necessary… however, if you do use them… even better. In which case, the standard locations to put them are on the parallel wall surfaces between the bass traps.

The strategies we just covered are what you would typically use for a live room, to get a nice sound from virtually anywhere in the room. For recording, more specific acoustic treatment strategies exist, which I reveal in this post. Start by positioning the mic as close to the instrument as you can without ruining the tone… and improvise absorption where you need it. Because really, any type of soft porous material such as pillows, blankets, couches, or even clothes can offer similar absorption.

Which is why the most popular DIY method of recording vocals is to prop an old mattress against your wall so it's directly behind the singer's back as he performs. If commercial acoustic foam is currently beyond your budget, tricks like this can get you surprisingly far. By using your reflection filter in combination with the previous 4 techniques we covered, your recordings will sound many times better than they otherwise would in a completely bare room.

To see which ones I recommend, check out this post. Starting with… 1. Acoustic absorption. When an instrument plays in the room, sound radiates outward in all directions and reflects off the walls; moments later, some of those reflections reach the microphone by chance. Normally, those reflections get recorded… but with acoustic absorption, all that remains is the direct sound from the instrument to the microphone… which is exactly what we want. But most likely, the sound you hear in your room will be somewhere in-between the two extremes. Now… the closer it is to 1, the more absorption you will need to make the room sound as dry as possible.

The closer it is to 2, the less acoustic treatment you will need in general, although virtually any room will still benefit from a little.

Bass Traps

The first and most important element of acoustic treatment to add to your room is bass traps.

In this chapter, we want to focus on the elements that shed light on best practices in recording, encoding, processing, compressing, and playing digital sound. Most important for our purposes is an examination of how humans subjectively perceive the frequencies, amplitude, and direction of sound.

A concept that appears repeatedly in this context is the non-linear nature of human sound perception. Understanding this concept leads to a mathematical representation of sound that is modeled after the way we humans experience it, a representation well-suited for digital analysis and processing of sound, as we'll see in what follows. First, we need to be clear about the language we use in describing sound.

In speaking of sound perception, it's important to distinguish between words which describe objective measurements and those that describe subjective experience. The terms intensity and pressure denote objective measurements that relate to our subjective experience of the loudness of sound.

Power is defined as energy per unit time, measured in watts (W). Power can also be defined as the rate at which work is performed or energy converted. Watts are used to measure the output of power amplifiers and the power handling levels of loudspeakers. In relation to sound, we speak specifically of air pressure amplitude and measure it in pascals (Pa). Air pressure amplitude caused by sound waves is measured as a displacement above or below equilibrium atmospheric pressure.

During audio recording, a microphone measures this constantly changing air pressure amplitude and converts it to electrical units of volts (V), sending the voltages to the sound card for analog-to-digital conversion. We'll see below how and why all these units are converted to decibels. The objective measures of intensity and air pressure amplitude relate to our subjective experience of the loudness of sound.

Generally, the greater the intensity or pressure created by the sound waves, the louder this sounds to us. However, loudness can be measured only by subjective experience — that is, by an individual saying how loud the sound seems to him or her. The relationship between air pressure amplitude and loudness is not linear.

That is, you can't assume that if the pressure is doubled, the sound seems twice as loud. In fact, it takes about ten times the pressure for a sound to seem twice as loud. Further, our sensitivity to amplitude differences varies with frequencies, as we'll discuss in more detail in Section 4. When we speak of the amplitude of a sound, we're speaking of the sound pressure displacement as compared to equilibrium atmospheric pressure. The range of the quietest to the loudest sounds in our comfortable hearing range is actually quite large.

The quietest audible sounds are on the order of 0.00002 Pa (20 micropascals), while the loudest are on the order of 20 Pa; these values vary by the frequencies that are heard. Thus, the loudest has about 1,000,000 times more air pressure amplitude than the quietest. Since intensity is proportional to the square of pressure, the loudest sound we listen to (at the verge of hearing damage) is 1,000,000,000,000 times more intense than the quietest. Some sources even claim a factor of 10,000,000,000,000 between the loudest and quietest intensities; it depends on what you consider the threshold of pain and hearing damage. This is a wide dynamic range for human hearing. Another subjective perception of sound is pitch.

As you learned in Chapter 3, the pitch of a note is how "high" or "low" the note seems to you. The related objective measure is frequency. In general, the higher the frequency, the higher the perceived pitch. But once again, the relationship between pitch and frequency is not linear, as you'll see below. Also, our sensitivity to frequency differences varies across the spectrum, and our perception of pitch depends partly on how loud the sound is.

A high pitch can seem to get higher when its loudness is increased, whereas a low pitch can seem to get lower. Context matters as well in that the pitch of a frequency may seem to shift when it is combined with other frequencies in a complex tone. In order to define decibels, which are used to measure sound loudness, we need to define some units that are used to measure electricity as well as acoustical power, intensity, and pressure.

Both analog and digital sound devices use electricity to represent and transmit sound. Electricity is the flow of electrons through wires and circuits. There are four interrelated components in electricity that are important to understand: voltage, current, resistance, and power. Electricity can be understood through an analogy with the flow of water (borrowed from Thompson). Picture two tanks connected by a pipe. One tank has water in it; the other is empty. Potential energy is created by the presence of water in the first tank. The water flows through the pipe from the first tank to the second with some intensity.

The pipe has a certain amount of resistance to the flow of water as a result of its physical properties, like its size. The potential energy provided by the full tank, reduced somewhat by the resistance of the pipe, results in the power of the water flowing through the pipe. By analogy, in an electrical circuit we have two voltages connected by a conductor.

Analogous to the full tank of water, we have a voltage — an excess of electrons — at one end of the circuit. The voltage at the first end of the circuit causes pressure, or potential energy, as the excess electrons want to move toward ground. This flow of electricity is called the current.

The physical connection between the two halves of the circuit provides resistance to the flow. The connection might be a copper wire, which offers little resistance and is thus called a good conductor. On the other hand, something could intentionally be inserted into the circuit to reduce the current — a resistor, for example. The power in the circuit is determined by a combination of the voltage and the resistance, as given in Equation 4.: $P = \frac{V^2}{R}$, together with Ohm's law, $V = IR$. Thus, if you know any two of these four values you can get the other two from the equations above.

Volts, amps, ohms, and watts are convenient units to measure potential energy, current, resistance, and power in that they have the following relationship: $1\ \text{volt} = 1\ \text{watt}/1\ \text{amp}$, and $1\ \text{volt} = 1\ \text{amp} \times 1\ \text{ohm}$. The above discussion speaks of power (W), intensity (I), and potential energy (V) in the context of electricity. These words can also be used to describe acoustical power and intensity as well as the air pressure amplitude changes detected by microphones and translated to voltages.

Power, intensity, and pressure are valid ways to measure sound as a physical phenomenon. However, decibels are more appropriate to represent the loudness of one sound relative to another, as we'll see in the next section. First consider Table 4. From column 3, you can see that the sound of a nearby jet engine has on the order of 10,000,000 times greater air pressure amplitude than the threshold of hearing.

Imagine a graph of sound loudness that has perceived loudness on the horizontal axis and air pressure amplitude on the vertical axis. We would need numbers ranging from 0 to 10,000,000 on the vertical axis (Figure 4.). This axis would have to be compressed to fit on a sheet of paper or a computer screen, and we wouldn't see much space between, say, the values for normal conversation and a vacuum cleaner. Thus, our ability to show small changes at low amplitude would not be great. Although we perceive a vacuum cleaner to be approximately twice as loud as normal conversation, we would hardly be able to see any difference between their respective air pressure amplitudes if we have to include such a wide range of numbers, spacing them evenly on what is called a linear scale.

A linear scale turns out to be a very poor representation of human hearing. We humans can more easily distinguish the difference between two low amplitude sounds that are close in amplitude than we can distinguish between two high amplitude sounds that are close in amplitude. A decibel is based on a ratio — that is, one value relative to another, as in $\frac{X}{X_0}$.

Hypothetically, $X$ and $X_0$ could measure anything, as long as they measure the same type of thing in the same units (watts, pascals, volts, and so on). Because decibels are based on a ratio, they imply a comparison. Decibels can be a measure of acoustical power, intensity, air pressure amplitude, or voltage, among other things. What if we were to measure relative loudness using the threshold of hearing as our point of comparison — the $X_0$ in the ratio $\frac{X}{X_0}$, as in column 3 of Table 4.?

That seems to make sense. The discussion above is given to explain why it makes sense to use the logarithm of the ratio $\frac{P}{P_0}$ to express the loudness of sounds, as shown in Equation 4.:

$$\text{dBSPL} = 20\log_{10}\!\left(\frac{P}{P_0}\right)$$

The values in column 4 of Table 4. are computed this way. Notice that in Equation 4., voltages can stand in for air pressure amplitudes. This is because microphones measure sound as air pressure amplitudes, turn the measurements into voltage levels, and convey the voltage values to an audio interface for digitization.

Thus, voltages are just another way of capturing air pressure amplitude. Notice also that because the dimensions are the same in the numerator and denominator of the ratio $\frac{P}{P_0}$, the dimensions cancel. This is always true for decibels. Because they are derived from a ratio, decibels are dimensionless units. Power, intensity, and air pressure amplitude are three physical phenomena related to sound that can be measured with decibels.

The important thing in any usage of the term decibels is that you know the reference point — the level that is in the denominator of the ratio. Different usages of the term decibel sometimes add different letters to the dB abbreviation to clarify the context, as in dBPWL (decibels-power-level), dBSIL (decibels-sound-intensity-level), and dBFS (decibels-full-scale), all of which are explained below. Comparing the columns in Table 4. shows the advantage of decibels: if we had to graph loudness using Pa as our units, the scale would be so large that the first ten sound levels, from silence all the way up to subways, would not be distinguishable from 0 on the graph.

With decibels, loudness levels that are easily distinguishable by the ear can be seen as such on the decibel scale. Decibels are also more intuitively understandable than air pressure amplitudes as a way of talking about loudness changes. In an acoustically-insulated lab environment with virtually no background noise, a 1 dB change yields the smallest perceptible difference in loudness.

A 10 dB change results in about a doubling of perceived loudness, at any starting level: going from 40 to 50 dBSPL sounds approximately like a doubling of loudness, and going from 60 to 70 dBSPL still sounds approximately like a doubling, even though the two changes correspond to very different increases in absolute air pressure. Talking about loudness changes in terms of decibels therefore communicates more than quoting pressure values. Note also that the difference between any two decibel levels that have the same reference point is always measured in dimensionless dB; saying "the level went up 10 dB" is correct usage. The bel, named for Alexander Graham Bell, was originally defined as a unit for measuring power.

The decibel turns out to be a more useful unit than the bel because it provides better resolution: a decibel is one tenth of a bel, and the power relationship is $\text{dBPWL} = 10\log_{10}\left(\frac{W}{W_0}\right)$. When this definition is applied to give a sense of the acoustic power of a sound, $W_0$ is the power of sound at the threshold of hearing, which is $10^{-12}$ W (1 picowatt). Sound can also be measured in terms of intensity.

Since intensity is defined as power per unit area, the units in the numerator and denominator of the decibel ratio are $\text{W}/\text{m}^2$, and the threshold of hearing intensity is $I_0 = 10^{-12}\ \text{W}/\text{m}^2$. Neither power nor intensity is a convenient way of measuring the loudness of sound. We give the definitions above primarily because they help to show how the definition of dBSPL was derived historically.

The easiest way to measure sound loudness is by means of air pressure amplitude. When sound is transmitted, air pressure changes are detected by a microphone and converted to voltages. By Equation 4., a sound with an air pressure amplitude of 0.02 Pa (normal conversation) has a level of $20\log_{10}\left(\frac{0.02}{0.00002}\right) = 20\log_{10}(1000) = 60$ dBSPL. Working in the opposite direction, you can convert the decibel level of normal conversation, 60 dBSPL, to air pressure amplitude: if $60 = 20\log_{10}\left(\frac{P}{0.00002}\right)$, then $P = 0.00002 \times 10^{60/20} = 0.02$ Pa. Rarely would you be called upon to do these conversions yourself. But now you know the mathematics on which the dBSPL definition is based.
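To make these conversions concrete, here is a minimal Python sketch (the function names are mine, not from the text) implementing the dBSPL definition in both directions:

```python
import math

P0 = 0.00002  # threshold of hearing: 20 micropascals

def pascals_to_dbspl(p):
    """Convert air pressure amplitude in pascals to dBSPL."""
    return 20 * math.log10(p / P0)

def dbspl_to_pascals(db):
    """Convert a dBSPL level back to air pressure amplitude in pascals."""
    return P0 * 10 ** (db / 20)

print(pascals_to_dbspl(0.02))   # normal conversation: 60.0 dBSPL
print(dbspl_to_pascals(60))     # 0.02 Pa
print(pascals_to_dbspl(20))     # 120.0 dBSPL, near the threshold of pain
```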



So when would you use these different applications of decibels? Most commonly you use dBSPL to indicate how loud things seem relative to the threshold of hearing. In fact, you use this type of decibel so commonly that the SPL is often dropped off and simply dB is used where the context is clear. You learn that human speech is about 60 dB, rock music is about 110 dB, and the loudest thing you can listen to without hearing damage is about 120 dB — all of these measurements implicitly being dBSPL.

The definition of dBFS uses the largest-magnitude sample size for a given bit depth as its reference point.


For a bit depth of $n$, this largest magnitude would be $2^{n-1}$, giving

$$\text{dBFS} = 20\log_{10}\!\left(\frac{|x|}{2^{n-1}}\right)$$

as illustrated in Figure 4. Notice that since $|x|$ is never more than $2^{n-1}$, dBFS is never a positive number. When you first use dBFS it may seem strange because all sound levels are at most 0. The discussion above has considered decibels primarily as they measure sound loudness. Decibels can also be used to measure relative electrical power or voltage. For example, dBV measures voltage using 1 V as a reference level, and dBu measures voltage using 0.775 V.
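As a minimal sketch (assuming 16-bit signed samples; the helper name is hypothetical):

```python
import math

def dbfs(sample, bit_depth=16):
    """Decibels relative to full scale for a signed integer sample value."""
    full_scale = 2 ** (bit_depth - 1)   # 32768 for 16-bit audio
    return 20 * math.log10(abs(sample) / full_scale)

print(dbfs(32768))   # 0.0 dBFS: the largest possible magnitude
print(dbfs(16384))   # about -6.0 dBFS: half of full scale
print(dbfs(328))     # about -40.0 dBFS
```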

The reference levels for different types of decibels are listed in Table 4. Notice that decibels are used in reference to the power of loudspeakers or the input voltage to audio devices. Of course, there are many other common usages of decibels outside of the realm of sound.

Microphones and sound level meters measure the amplitude of sound waves over time. There are situations in which you may want to know the largest amplitude over a time period. The sound pressure level of greatest magnitude over a given time period is called the peak amplitude. For a single-frequency sound representable by a sine wave, this would be the level at the peak of the sine wave, as in the waveform represented in Figure 4. However, how would the loudness of a sine-wave-shaped sound compare to the loudness of a square-wave-shaped sound with the same peak amplitude (Figure 4.)?

The square wave would actually sound louder. This is because the square wave is at its peak level more of the time as compared to the sine wave. To account for this difference in perceived loudness, RMS amplitude (root-mean-square amplitude) can be used as an alternative to peak amplitude, providing a better match for the way we perceive the loudness of the sound.

Rather than being an instantaneous peak level, RMS amplitude is similar to a standard deviation, a kind of average of the deviation from 0 over time. RMS amplitude is defined as follows:

$$\text{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}$$

where $x_i$ is the $i$th of $n$ consecutive samples. Notice that squaring each sample makes all the values in the summation positive. If this were not the case, the summation would be 0 (assuming an equal number of positive and negative crests) since the sine wave is perfectly symmetrical.

The definition in Equation 4. assumes the samples are measured as air pressure amplitudes or voltages. The samples could also be quantized as values in the range determined by the bit depth, or the samples could be measured in dimensionless decibels, as shown for Adobe Audition in Figure 4. Of course most of the sounds we hear are not simple waveforms like those shown; natural and musical sounds contain many frequency components that vary over time. In any case, the RMS amplitude is a better model for our perception of the loudness of complex sounds than is peak amplitude.
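Here is a small sketch (my own illustration, not from the text) comparing peak and RMS amplitude for a sine wave and a square wave with the same peak level:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sequence of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

N = 48000  # one second of samples at a 48 kHz sampling rate
sine = [math.sin(2 * math.pi * 440 * i / N) for i in range(N)]   # 440 Hz tone
square = [1.0 if s >= 0 else -1.0 for s in sine]                 # same frequency and peak

print(max(abs(s) for s in sine))   # peak amplitude: ~1.0 for both waveforms
print(rms(sine))                   # ~0.707, i.e., peak / sqrt(2)
print(rms(square))                 # 1.0: higher RMS, so it sounds louder
```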

Sound processing programs often give amplitude statistics as either peak or RMS amplitude or both; RMS has to be computed over a window of time, because the sound wave changes over time. In the figure, the window width is a fixed number of milliseconds. You need to be careful with some usages of the term "peak amplitude." If you allow the level to go too high, the signal will be clipped.

In Chapter 3, we discussed the non-linear nature of pitch perception when we looked at octaves as defined in traditional Western music.

The A above middle C (call it A4) on a piano keyboard sounds very much like the note that is 12 semitones above it, A5, except that A5 has a higher pitch. A5 is one octave higher than A4. A6 sounds like A5 and A4, but it's an octave higher than A5. The progression between octaves is not linear with respect to frequency. A2's frequency is twice the frequency of A1.

A3's frequency is twice the frequency of A2, and so forth. A simple way to think of this is that as the frequencies increase by multiplication, the perception of the pitch change increases by addition. In any case, the relationship is non-linear, as you can clearly see if you plot frequencies against octaves, as shown in Figure 4.
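In equal temperament, each semitone multiplies the frequency by $2^{1/12}$, so an octave (12 semitones) doubles it. A quick sketch using the standard A4 = 440 Hz reference:

```python
def note_frequency(semitones_from_a4):
    """Frequency of the note a given number of semitones above or below A4 = 440 Hz."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

print(note_frequency(0))     # A4: 440.0 Hz
print(note_frequency(12))    # A5: 880.0 Hz: one octave up doubles the frequency
print(note_frequency(-24))   # A2: 110.0 Hz: two octaves down quarters it
```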

The fact that this is a non-linear relationship implies that the higher up you go in frequencies, the bigger the difference in frequency between neighboring octaves. Because of the non-linearity of our perception, frequency response graphs often show the frequency axis on a logarithmic scale, or you're given a choice between a linear and a logarithmic scale, as shown in Figure 4. Notice that you can select or deselect "linear" in the upper left hand corner. In the figure on the right, the distance between 10 and 100 Hz on the horizontal axis is the same as the distance between 100 and 1,000 Hz, which is the same as the distance between 1,000 and 10,000 Hz. This is more in keeping with how our perception of the pitch changes as the frequencies get higher.

You should always pay attention to the scale of the frequency axis in graphs such as this. The range of frequencies within human hearing is, at best, 20 Hz to 20,000 Hz. The range varies with individuals and diminishes with age, especially for high frequencies. Our hearing is less sensitive to low frequencies than to high; that is, low frequencies have to be more intense for us to hear them than high frequencies.

Frequency resolution (also called frequency discrimination) is our ability to distinguish between two close frequencies. Frequency resolution varies by frequency, loudness, the duration of the sound, the suddenness of the frequency change, and the acuity and training of the listener's ears. The smallest frequency change that can be noticed as a pitch change is referred to as a just-noticeable-difference (jnd).

At low frequencies, tones that are separated by just a few Hertz can be distinguished as separate pitches, while at high frequencies, two tones must be separated by hundreds of Hertz before a difference is noticed. You can test your own frequency range and discrimination with a sound processing program like Audacity or Audition, generating and listening to pure tones, as shown in Figure 4. One part of the ear's anatomy that is helpful to consider more closely is the area in the inner ear called the basilar membrane.

It is here that sound vibrations are detected, separated by frequencies, and transformed from mechanical energy to electrical impulses sent to the brain. The basilar membrane is lined with rows of hair cells and thousands of tiny hairs emanating from them. The hairs move when stimulated by vibrations, sending signals to their base cells and the attached nerve fibers, which pass electrical impulses to the brain.

In his pioneering work on frequency perception, Harvey Fletcher discovered that different parts of the basilar membrane resonate more strongly to different frequencies. Thus, the membrane can be divided into frequency bands, commonly called critical bands. Each critical band of hair cells is sensitive to vibrations within a certain band of frequencies.

Continued research on critical bands has shown that they play an important role in many aspects of human hearing, affecting our perception of loudness, frequency, timbre, and consonance vs. dissonance. Experiments with critical bands have also led to an understanding of frequency masking, a phenomenon that can be put to good use in audio compression. Critical bands can be measured by the band of frequencies that they cover; Fletcher discovered their existence in his pioneering work on the cochlear response.

Critical bands are the source of our ability to distinguish one frequency from another. When a complex sound arrives at the basilar membrane, each critical band acts as a kind of bandpass filter, responding only to vibrations within its frequency spectrum. In this way, the sound is divided into frequency components.

If two frequencies are received within the same band, the louder frequency can overpower the quieter one. This is the phenomenon of masking , first observed in Fletcher's original experiments. Bandpass filters are studied in Chapter 7. Critical bands within the ear are not fixed areas but instead are created during the experience of sound. Any audible sound can create a critical band centered on it. However, experimental analyses of critical bands have arrived at approximations that are useful guidelines in designing audio processing tools. Table 4. Here, the basilar membrane is divided into 25 overlapping bands, each with a center frequency and with variable bandwidths across the audible spectrum.

The width of each band is given in Hertz, semitones, and octaves. The widths in semitones and octaves were derived from the widths in Hertz, as explained in Section 4. The center frequencies are graphed against the critical bands in Hertz in Figure 4. You can see from the table and figure that, measured in Hertz, the critical bands are wider for higher frequencies than for lower. This implies that there is better frequency resolution at lower frequencies because a narrower band results in less masking of frequencies in a local area.

The table shows that critical bands are generally in the range of two to four semitones wide, mostly less than four. This observation is significant as it relates to our experience of consonance vs. Recall from Chapter 3 that a major third consists of four semitones. Thus, the notes that are played simultaneously in a third generally occupy separate critical bands. This helps to explain why thirds are generally considered consonant — each of the notes having its own critical band.

Seconds, which exist in the same critical band, are considered dissonant. At very low and very high frequencies, thirds begin to lose their consonance to most listeners. This is consistent with the fact that the critical bands at the lowest and highest frequencies span more than a third, so that at these frequencies, a third lies within a single critical band. In the early 1930s at Bell Laboratories, groundbreaking experiments by Fletcher and Munson clarified the extent to which our perception of loudness varies with frequency (Fletcher and Munson). Their results, refined by later researchers (Robinson and Dadson) and adopted as International Standard ISO 226, are illustrated in a graph of equal-loudness contours shown in Figure 4.

Each curve on the graph represents an n-phon contour. An n-phon contour is created by playing a 1000 Hz pure tone at n dBSPL and having listeners match the loudness of pure tones at other frequencies to it. The 10-phon contour, for example, was created by playing a 1000 Hz pure tone at a loudness level of 10 dBSPL, and then asking groups of listeners to say when they thought pure tones at other frequencies matched the loudness of the 1000 Hz tone. Notice that low-frequency tones had to be increased by 60 or 75 dB to sound equally loud.

Some of the higher-frequency tones — in the vicinity of 3000 Hz — actually had to be turned down in volume to sound equally loud to the 10 dBSPL 1000 Hz tone. Also notice that the louder the 1000 Hz tone is, the less the lower-frequency tones have to be turned up to sound equal in loudness.

For example, the 90-phon contour goes up only about 30 dB to make the lowest frequencies sound equal in loudness to 1000 Hz at 90 dBSPL, whereas the 10-phon contour has to be turned up about 75 dB. With the information captured in the equal loudness contours, devices that measure the loudness of sounds — for example, SPL meters (sound pressure level meters) — can be designed so that they compensate for the fact that low frequency sounds seem less loud than high frequency sounds at the same amplitude.

The A, B, and C-weighting functions are approximately inversions of the 40-phon, 70-phon, and 100-phon loudness contours, respectively. This implies that applying A-weighting in an SPL meter causes the meter to measure loudness in a way that matches our differences in loudness perception at 40 phons. To understand how this works, think of the graphs of the weightings as frequency filters — also called frequency response graphs. When a weighting function is applied by an SPL meter, the meter uses a filter to reduce the influence of frequencies to which our ears are less sensitive, and conversely to increase the weight of frequencies that our ears are sensitive to.

The fact that the A-weighting graph is lower on the left side than on the right means that an A-weighted SPL meter reduces the influence of low-frequency sounds as it takes its overall loudness measurement. On the other hand, it slightly boosts the amplitude of frequencies in the region around 2500 Hz, as seen by the bump above 0 dB there. The use of weighted SPL meters is discussed further in Section 4.
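For reference, the A-weighting curve is defined by a standard formula (IEC 61672); here is a small Python sketch of it:

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f in Hz (IEC 61672 formula)."""
    f2 = f * f
    ra = (12194**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00   # offset normalizes the gain to 0 dB at 1 kHz

print(a_weighting_db(100))    # about -19 dB: low frequencies count less
print(a_weighting_db(1000))   # about 0 dB
print(a_weighting_db(2500))   # about +1.3 dB: the bump near 2500 Hz
```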

Sometimes it's convenient to simplify our understanding of sound by considering how it behaves when there is nothing in the environment to impede it. An environment with no physical influences to absorb, reflect, diffract, refract, reverberate, resonate, or diffuse sound is called a free field.

A free field is an idealization of real world conditions that facilitates our analysis of how sound behaves. Sound in a free field can be pictured as radiating out from a point source, diminishing in intensity as it gets farther from the source. A free field is partially illustrated in Figure 4. In this figure, sound is radiating out from a loudspeaker, with the colors indicating highest to lowest intensity sound in the order red, orange, yellow, green, and blue.

The area in front of the loudspeaker might be considered a free field. However, because the loudspeaker partially blocks the sound from going behind itself, the sound is lower in amplitude there. You can see that there is some sound behind the loudspeaker, resulting from reflection and diffraction. In the real world, there are any number of things that can get in the way of sound, changing its direction, amplitude, and frequency components.

In enclosed spaces, absorption plays an important role. The diminishing of sound intensity is called attenuation. A general mathematical formulation for the way sound attenuates as it moves through the air is captured in the inverse square law, which shows that sound decreases in intensity in proportion to the square of the distance from the source. See Section 4. The attenuation of sound in the air is due to the air molecules themselves absorbing and converting some of the energy to heat.

The amount of attenuation depends in part on the air temperature and relative humidity. Thick, porous materials can absorb and attenuate the sound even further, and they're often used in architectural treatments to modify and control the acoustics of a room. Even hard, solid surfaces absorb some of the sound energy, although most of it is reflected back. The material of walls and ceilings, the number and material of seats, the number of persons in an audience, and all solid objects have to be taken into consideration acoustically in sound setups for live performance spaces.

Sound that is not absorbed by objects is instead reflected from, diffracted around, or refracted into the object. Hard surfaces reflect sound more than soft ones, which are more absorbent. The law of reflection states that the angle of incidence of a wave is equal to the angle of reflection. This means that if a wave were to propagate in a straight line from its source, it reflects in the way pictured in Figure 4.

In reality, however, sound radiates out spherically from its source.


Thus, a wavefront of sound approaches objects and surfaces from various angles. Imagine a cross-section of the moving wavefront approaching a straight wall, as seen from above. Its reflection would be as pictured in Figure 4. In a special case, if the wavefront were to approach a concave curved solid surface, it would be reflected back to converge at one point in the room, the location of that point depending on the angle of the curve. This is the effect exploited in so-called whispering rooms, where two people standing at the right spots can hear each other whisper across a large space.

A person positioned elsewhere in the room cannot hear their whispers at all. A common shape found with whispering rooms is an ellipse, as seen in Figure 4. The shape and curve of these walls cause any and all sound emanating from one focal point to reflect directly to the other. Diffraction is the bending of a sound wave as it moves past an obstacle or through a narrow opening. The phenomenon of diffraction allows us to hear sounds from sources that are not in direct line-of-sight, such as a person standing around a corner or on the other side of a partially obstructing object. Diffraction also has a lot to do with microphone and loudspeaker directivity; consider how microphones often have different polar patterns at different frequencies.

Low frequency sounds (i.e., those with long wavelengths) diffract around obstacles much more readily than high frequency sounds; in other words, low frequency sounds are better able to travel around obstacles. Obstacles that are small relative to a sound's wavelength have little effect on it. For example, your stereo speaker drivers are probably protected behind a plastic or metal grill, yet the sound passes through it intact and without noticeable coloration. The obstacle presented by the wire mesh of the grill (perhaps a millimeter or two in diameter) is even smaller than the smallest wavelength we can hear (about 2 centimeters for 20 kHz, 10 to 20 times larger than the wire), so the sound diffracts easily around it.

Refraction is the bending of a sound wave as it moves through different media. Typically we think of refraction with light waves, as when we look at something through glass or at something underwater. In acoustics, the refraction of sound waves tends to be more gradual, as the properties of the air change subtly over longer distances.

This causes a bending in sound waves over a long distance, primarily due to temperature, humidity, and in some cases wind gradients over distance and altitude. This bending can result in noticeable differences in sound levels, either as a boost or an attenuation, also referred to as a shadow zone. Reverberation is the result of sound waves reflecting off of many objects or surfaces in the environment. Imagine an indoor room in which you make a sudden burst of sound. Some of that sound is transmitted through or absorbed by the walls or objects, and the rest is reflected back, bouncing off the walls, ceilings, and other surfaces in the room.

The sound wave that travels straight from the sound source to your ears is called the direct signal. The first few instances of reflected sound are called primary or early reflections. Early reflections arrive at your ears about 60 ms or sooner after the direct sound, and play a large part in imparting a sense of space and room size to the human ear.

Early reflections may be followed by a handful of secondary and higher-order reflections. At this point, the sound waves have had plenty of opportunity to bounce off of multiple surfaces, multiple times. As a result, the reflections that are arriving now are more numerous, closer together in time, and quieter.

Much of the initial energy of the reflections has been absorbed by surfaces or expended in the distance traveled through the air. This dense collection of reflections is reverberation, illustrated in Figure 4. Assuming that the sound source is only momentary, the generated sound eventually decays as the waves lose energy, the reverberation becoming less and less loud until the sound is no longer discernible.


Typically, reverberation time is defined as the time it takes for the sound to decay in level by 60 dB from its direct signal. Single, strong reflections that reach the ear a significant amount of time — about 100 ms — after the direct signal can be perceived as an echo — essentially a separate recurrence of the original sound.

Even reflections as little as 50 ms apart can cause an audible echo, depending on the type of sound and room acoustics. While echo is often employed artistically in music recordings, echoes tend to be detrimental and distracting in a live setting and are usually avoided or require remediation in performance and listening spaces. Diffusion is another property that interacts with reflections and reverberation.

Diffusion relates to the ability to distribute sound energy more evenly in a listening space. While a flat, even surface reflects sounds strongly in a predictable direction, uneven surfaces or convex curved surfaces diffuse sound more randomly and evenly. Like absorption, diffusion is often used to treat a space acoustically to help break up harsh reflections that interfere with the natural sound. Unlike absorption, however, which attempts to eliminate the unwanted sound waves by reducing the sound energy, diffusion attempts to redirect the sound waves in a more natural manner.

Usually a combination of absorption and diffusion is employed to achieve the optimal result. There are many unique types of diffusing surfaces and panels that are manufactured based on mathematical algorithms to provide the most random, diffuse reflections possible. Putting these concepts together, we can say that the amount of time it takes for a particular sound to decay depends on the size and shape of the room, its diffusive properties, and the absorptive properties of the walls, ceilings, and objects in the room. In short, all the aforementioned properties determine how sound reverberates in a space, giving the listener a "sense of place."

Reverberation in an auditorium can enhance the listener's experience, particularly in the case of a music hall where it gives the individual sounds a richer quality and helps them blend together. Excessive reverberation, however, can reduce intelligibility and make it difficult to understand speech.

In Chapter 7, you'll see how artificial reverberation is applied in audio processing. A final important acoustical property to be considered is resonance. Like a musical instrument, a room has a set of resonant frequencies, called its room modes. Room modes result in locations in a room where certain frequencies are boosted or attenuated, making it difficult to give all listeners the same audio experience.

We'll talk more about how to deal with room modes in Section 4. We now turn to practical considerations related to the concepts introduced in Section 1. We first return to the concept of decibels. An important part of working with decibel values is learning to recognize and estimate decibel differences. When someone asks you to make something "a little louder," how much louder is that in decibels? Until you can answer that question in a dB value, you will have a hard time figuring out what to do.

It's also important to understand the kind of dB differences that are audible. The average listener cannot distinguish a difference in sound pressure level of less than 3 dB. With training, you can learn to recognize differences in sound pressure level of 1 dB, but differences of less than 1 dB are indistinguishable even to well-trained listeners.

Understanding the limitations to human hearing is very important when working with sound. For example, when investigating changes you can make to your sound equipment to get higher sound pressure levels, you should be aware that unless the change amounts to 3 dB or more, most of your listeners will probably not notice.

This concept also applies when processing audio signals. Having a reference to use when creating audio material or sound systems is also helpful. For example, there are usually loudness requirements imposed by the television network for television content. If these requirements are not met, there will be level inconsistencies between the various programs on the television station that can be very annoying to the audience.

These requirements could be as simple as limiting peak levels to a given dBFS ceiling or as strict as meeting a specified dBFS average across the duration of the show. You might also be putting together equipment that delivers sound to a live audience in an acoustic space. In that situation you need to know how loud in dBSPL the system needs to perform at the distance of the audience.

Once you know these requirements, you can begin to evaluate the performance of the equipment to verify that it can meet them. Turn a sound up by 10 dB and it sounds about twice as loud. Table 4. summarizes this and similar rules of thumb.

These rules give you a quick sense of how boosts in power and voltage affect sound levels. A mathematical justification of these rules is given in Section 3. Decibels are also commonly used to compare the power levels of loudspeakers and amplifiers. For power, Equation 4. takes the form $\Delta dB = 10\log_{10}\left(\frac{P_1}{P_0}\right)$. Based on this equation, an amplifier with double the wattage of another is more powerful by $10\log_{10}(2) \approx 3$ dB. For voltages, Equation 4. takes the form $\Delta dB = 20\log_{10}\left(\frac{V_1}{V_0}\right)$, so doubling a voltage level is an increase of about 6 dB. Multiplying power times 2 corresponds to multiplying voltage times $\sqrt{2}$, because power is proportional to voltage squared: $P \propto V^2$.

You have to multiply the power of the amplifier by ten in order to get sounds that are approximately twice as loud. The fact that doubling the power gives about a 3 dB increase in sound pressure level has implications with regard to how many speakers you ought to use for a given situation. If you double the speakers (assuming identical speakers), you double the power, but you get only a 3 dB increase in sound level.

If you quadruple the speakers, you get a 6 dB increase in sound because each time you double, you go up by 3 dB. If you double the speakers again eight speakers now , you hypothetically get a 9 dB increase, not taking into account other acoustical factors that may affect the sound level. You can figure out how to do this with the power ratio formula, derived in Equation 4.
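These rules of thumb are easy to verify directly; a quick sketch (the function names are mine):

```python
import math

def power_gain_db(power_ratio):
    """Change in dB that results from multiplying power by power_ratio."""
    return 10 * math.log10(power_ratio)

def power_ratio_for_db(db):
    """Power multiplier needed to achieve a desired dB increase."""
    return 10 ** (db / 10)

print(power_gain_db(2))        # ~3.0 dB: doubling the power (or the speakers)
print(power_gain_db(8))        # ~9.0 dB: eight identical speakers instead of one
print(power_ratio_for_db(15))  # ~31.6: a 15 dB boost takes about 32x the power
```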

Applying this formula, what if you want an amplifier that is 15 dB louder? You would need about 32 times the power, since $10^{15/10} \approx 31.6$. Instead of trying to get more watts, a better strategy would be to choose different loudspeakers that have a higher sensitivity. The sensitivity of a loudspeaker is defined as the sound pressure level that is produced by the loudspeaker with 1 watt of power when measured 1 meter away. Also, because the voltage gain in a power amplifier is fixed, before you go buy a bunch of new loudspeakers, you may also want to make sure that you're feeding the highest possible voltage signal into the power amplifier.



It's quite possible that the 15 dB increase you're looking for is hiding somewhere in the signal chain of your sound system due to inefficient gain structure between devices. Chapter 8 includes a Max demo on gain structure that may help you with this concept. A similar problem arises when you have two pieces of sound equipment whose nominal output levels are measured in decibels of different types. For example, you may want to connect two devices where the nominal voltage output of one is given in dBV and the nominal voltage output of the other is given in dBu.

You first want to know if the two voltage levels are the same. If they are not, you want to know how much you have to boost the one of lower voltage to match the higher one. The way to do this is to convert both dBV and dBu back to voltage. You can then compare the two voltage levels in dB, and from this you know how much the lower-voltage hardware needs to be boosted. Suppose, for example, that one device has a nominal output of -10 dBV; that corresponds to $1\,\text{V} \times 10^{-10/20} \approx 0.316$ V. By a similar computation, we get the voltage corresponding to +4 dBu, this time using 0.775 V as the reference level: $0.775\,\text{V} \times 10^{4/20} \approx 1.23$ V.
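A small sketch of this comparison (using the standard reference levels of 1 V for dBV and 0.775 V for dBu):

```python
import math

def dbv_to_volts(dbv):
    """Voltage corresponding to a level in dBV (reference: 1 V)."""
    return 1.0 * 10 ** (dbv / 20)

def dbu_to_volts(dbu):
    """Voltage corresponding to a level in dBu (reference: 0.775 V)."""
    return 0.775 * 10 ** (dbu / 20)

v_consumer = dbv_to_volts(-10)   # ~0.316 V
v_pro = dbu_to_volts(4)          # ~1.228 V

# The difference between the two levels, in plain dB:
print(20 * math.log10(v_pro / v_consumer))   # ~11.8, roughly 12 dB
```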

From this you see that the lower-voltage device needs to be boosted by 12 dB in order to match the other device. These decibel computations are relevant to our work because power levels and voltages produce sounds. Ultimately, what we want to know is how loud things sound. Think about what happens when you add one sound to another in the air or on a wire and want to know how loud the combined sound is in decibels.

Instead, we derive the sum of decibel levels $d_1$ and $d_2$ by converting each back to air pressure, adding the pressures, and converting the total back to decibels. Convert to air pressure: $P_1 = P_0 \cdot 10^{d_1/20}$ and $P_2 = P_0 \cdot 10^{d_2/20}$; then the combined level is $20\log_{10}\left(\frac{P_1 + P_2}{P_0}\right)$. For example, combining 60 dBSPL and 40 dBSPL gives $20\log_{10}(10^3 + 10^2) \approx 60.8$ dBSPL. The combined sounds in this case are not perceptibly louder than the louder of the two original sounds being combined! The last row of Table 4. reflects the same effect. Perhaps of more practical use is the related rule of thumb that for every doubling of distance from a sound source, you get a decrease in sound level of 6 dB.
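A sketch of the combination computation (assuming, as in the derivation above, pressure amplitudes that simply add):

```python
import math

def db_sum(d1, d2):
    """Combine two dBSPL levels by adding their air pressure amplitudes."""
    total = 10 ** (d1 / 20) + 10 ** (d2 / 20)   # pressures relative to the threshold of hearing
    return 20 * math.log10(total)

print(db_sum(60, 40))   # ~60.8 dBSPL: barely louder than the 60 dBSPL sound alone
print(db_sum(60, 60))   # ~66.0 dBSPL: two equal in-phase sounds add 6 dB
```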



We can informally prove the inverse square law by the following argument. For simplification, imagine a sound as coming from a point source. This sound radiates spherically equally in all directions from the source. Sound intensity is defined as sound power passing through a unit area. The fact that intensity is measured per unit area is what is significant here.

You can picture the sound spreading out as it moves away from the source: at distance $r$, the source's power $P$ passes through a sphere of surface area $4\pi r^2$, so the intensity is $I = \frac{P}{4\pi r^2}$. Doubling the distance spreads the same power over four times the area, cutting the intensity to one fourth, a drop of $10\log_{10}(4) \approx 6$ dB. This is illustrated in Figure 4. This phenomenon of sound attenuation as sound moves from a source is captured in the inverse square law, illustrated in Figure 4. What this means in practical terms is the following.


Say you have a sound source, a singer, who is a distance 7' 11" (95 inches) from the microphone, as shown in Figure 4. The microphone detects her voice at a level of 50 dBSPL. The listener is a distance 49' 5" (593 inches) from the singer. Then the sound reaching the listener directly from the singer has a level of about $50 + 20\log_{10}\left(\frac{95}{593}\right) \approx 50 - 15.9 \approx 34$ dBSPL. Notice that the logarithm gives a negative number, which makes sense because the sound is less intense as you move away from the source. The inverse square law is a handy rule of thumb. Each time we double the distance from our source, we decrease the sound level by 6 dB.
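The same computation as a sketch (distances converted to inches; the function name is mine):

```python
import math

def spl_at_distance(spl_ref, d_ref, d):
    """Free-field SPL at distance d, given a level spl_ref measured at distance d_ref."""
    return spl_ref + 20 * math.log10(d_ref / d)

mic_distance = 7 * 12 + 11       # 7' 11" = 95 inches
listener_distance = 49 * 12 + 5  # 49' 5" = 593 inches

print(spl_at_distance(50, mic_distance, listener_distance))  # ~34.1 dBSPL
print(spl_at_distance(50, mic_distance, 2 * mic_distance))   # ~44.0 dBSPL: 6 dB less per doubling
```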

The first doubling of distance is a perceptible but not dramatic decrease in sound level. Another doubling of distance which would be four times the original distance from the source yields a 12 dB decrease, which makes the source sound less than half as loud as it did from the initial distance. These numbers are only approximations for ideal free-field conditions. Many other factors intervene in real-world acoustics. But the inverse square law gives a general idea of sound attenuation that is useful in many situations.


The acoustic gain of an amplification system is the difference between the loudness as perceived by the listener when the sound system is turned on as compared to when the sound system is turned off. One goal of the sound engineer is to achieve a high potential acoustic gain , or PAG — the gain in decibels that can be added to the original sound without causing feedback. This potential acoustic gain is the entire reason the sound system is installed and the sound engineer is hired. Feedback can occur when the loudspeaker sends an audio signal back through the air to the microphone at the same level or louder than the source.

In this situation, the two similar sounds arrive at the microphone at the same level but with a phase offset. The first frequency from the loudspeaker to combine with the source in a fully in-phase relationship is reinforced by 6 dB. The 6 dB reinforcement at that frequency happens over and over in an infinite loop. This sounds like a single sine wave that gets louder and louder. Without intervention on the part of the sound engineer, this sound continues to get louder until the loudspeaker is overloaded. To stop a feedback loop, you need to interrupt the electro-acoustical path that the sound is traveling by either muting the microphone on the mixing console or turning off the amplifier that is driving the loudspeaker.

If feedback happens too many times, you'll likely not be hired again. PAG is the limit: the amount of gain added to the signal by the sound engineer in the sound booth must be less than this, or there will be feedback. In typical practice, you should stay 6 dB below this limit in order to avoid the initial sounds of the onset of feedback. This 6 dB safety factor should be applied to the result of the PAG equation. The amount of acoustic gain needed for any situation varies, but as a rule of thumb, if your PAG is less than 12 dB, you need to make some adjustments to the physical locations of the various elements of the sound system in order to increase the acoustic gain. A common form of the PAG equation is given below.
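One common formulation of the PAG equation (the standard single-microphone, single-loudspeaker form; the equation referenced in the text may differ in details) is:

$$\text{PAG} = 20\log_{10}\!\left(\frac{D_0 \times D_1}{D_s \times D_2}\right)$$

where $D_s$ is the source-to-microphone distance, $D_0$ the source-to-listener distance, $D_1$ the microphone-to-loudspeaker distance, and $D_2$ the loudspeaker-to-listener distance.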

Generally you want the highest possible PAG, but in your efforts to increase the PAG you will eventually get to a point where the compromises required to increase the gain are unacceptable. These compromises could include financial cost and visual aesthetics. Once the sound system has been purchased and installed, you'll be able to test the system to see how close your PAG predictions are to reality.

If you find that the system causes feedback before you're able to turn the volume up to the desired level, you don't have enough PAG in your system, and you need to make adjustments to your sound system in order to increase your gain before feedback. Not all aspects of the sound need to be amplified to the full PAG, but the potential acoustic gain lets you know how much louder than the natural sound you will be able to achieve. These issues are illustrated in the interactive Flash tutorial associated with this section, which helps you visualize how acoustic gain works and what its consequences are.

One fundamental part of analyzing an acoustic space is checking sound levels at various locations in the listening area. In the ideal situation, you want everything to sound similar at various listening locations. A realistic goal is to have each listening location be within 6 dB of the other locations. If you find locations that are outside that 6 dB range, you may need to reposition some loudspeakers, add loudspeakers, or apply acoustic treatment to the room.

With the knowledge of decibels and acoustics that you gained in Section 1, you should have a better understanding now of how this works. There are two types of sound pressure level (SPL) meters for measuring sound levels in the air. The most common is a dedicated handheld SPL meter like the one shown in Figure 4.

These meters have a built-in microphone and operate on battery power. They have been specifically calibrated to convert the voltage level coming from the microphone into a value in dBSPL. There are some options to configure that can make your measurements more meaningful. One option is the response time of the meter. A fast response allows you to see level changes that are short, such as peaks in the sound wave. A slow response shows you more of an average SPL.
