Audio Fundamentals


Page in Progress

Overview: Audio is an integral part of most media, software, communications, environments, and performances. Using music and sound design, you can communicate, set a tone, and transport people to another place. Through Evergreen's resources, you can use audio to improve most academic projects while building your own skills, whether you want to record a song or an interview, make a podcast, gather sounds for a movie or video game, set up a sound system for a performance or presentation, or record wildlife sounds for scientific analysis. To some people these projects may sound impossible, but they are achievable: Media Loan equipment, help from staff, online resources, and media spaces are all available for you to learn with and use in any academic project.

What is audio? To make good audio, it helps to understand what sound itself is, and how a mathematical phenomenon called the harmonic series is used to create music that sounds good. Audio is the medium we use to capture and control sound, and this section gives that context for the rest of the page.

Sound

Sound is the physical movement of air particles in response to a vibration. The vibrating source can be a vocal cord, a drum head, a speaker cone, or almost anything else. Our brains interpret the vibrations reaching our ears and produce an experience of rich, meaningful sound. That sound tells us qualities of the source, like the size of a person or a drum head, or the texture of a material. It also tells us the direction and distance of the source, as well as qualities of the room we are in, by the way sound bounces off the walls or passes through them. These cues may seem redundant next to visual information, but they usually reinforce it and tell the listener where to look in a space; media with bad audio can pull you out of the experience because it fails to reinforce the visual cues you see on screen. When the order and qualities of sounds affect us emotionally without the use of words, we usually call those sounds music. The rest of this section elaborates on how these elements of sound work and how you can use them to produce useful audio.

Acoustics

All sounds we hear are acoustic: a speaker produces acoustic waves, and in a very similar but opposite fashion, a microphone captures them as voltage in an electrical signal. If you hit a surface and it makes a sound, that is because the surface is moving out and back in repeatedly. When it moves out, it produces high pressure that squeezes air particles together; when it moves back, it creates low pressure that lets them spread out. This repeats until the energy put into the surface has dissipated and it stops moving. The alternating high and low pressure is a wave that radiates out into space as the surface vibrates.
Anything that makes sound vibrates somewhat like this; you could also think of plucking a guitar string. Any source, whether a surface, vocal cords, or a string, creates a complex wave made up of many different vibrations that each produce a simple wave. Strings are useful to focus on because they have one prominent vibration, produced when the whole length of the string swings back and forth. There are a couple of elements of waves to consider, and we can keep guitar strings in mind while going over them.

Elements of Simple Sound Waves

  • Frequency: The rate at which the string vibrates back and forth is called the frequency; tone, note, and pitch are closely related words. Frequency is measured in cycles per second (going one direction, then the other), with units of hertz (Hz). If a guitar string vibrates 440 cycles per second, we call that a frequency of 440 Hz. The notes on a piano are essentially all the available pitches to choose from; in Western music, each note matches a specific frequency, so 440 Hz on a guitar is called A, and a 440 Hz note from a piano, violin, or vocal cord is that same A. We hear frequency as higher or lower: a high-pitched bird call might be up around 10,000 Hz, while a low bass note could be 60 Hz.
  • Amplitude: The intensity of the wave is its amplitude, which we hear as loudness and often describe as volume. If you pluck the guitar string harder so that it swings back and forth twice as far, the wave has twice the amplitude and we hear it as louder. We measure amplitude in decibels (dB), a logarithmic scale of the wave's intensity over time; doubling the amplitude adds about 6 dB.
  • Phase: The third main element of a sound wave is phase: where in its cycle a wave starts compared to other waves. There is more on phase in the "How a Sound Fills a Space" section below, and the sketch after this list shows all three elements in code.
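
To make these three elements concrete, here is a minimal sketch in Python (assuming NumPy is installed; names like sample_rate and amp are just illustrative, not from any particular audio tool) that builds one second of a simple sine wave from a frequency, an amplitude, and a phase:

 import numpy as np
 sample_rate = 44100        # samples per second (the CD-audio rate)
 duration_s = 1.0           # one second of sound
 freq_hz = 440.0            # frequency: cycles per second (the note A)
 amp = 0.5                  # amplitude: how far the wave swings (heard as loudness)
 phase = 0.0                # phase: where in its cycle the wave starts, in radians
 t = np.linspace(0, duration_s, int(sample_rate * duration_s), endpoint=False)
 wave = amp * np.sin(2 * np.pi * freq_hz * t + phase)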

Complex Waves

  • Harmonic partials: Think of a guitar string moving back and forth at 440 Hz. That string actually produces many more waves on top of the simple 440 Hz wave, which is why the result is a complex wave. In addition to the whole length of the string vibrating at 440 Hz, each half of the string is also vibrating at twice the speed, producing a quieter 880 Hz wave. Remarkably, this continues all the way down: each third of the string vibrates three times as fast at 1,320 Hz, and so on, with ever-smaller divisions of the string vibrating at once to create a complex wave, each smaller interval quieter than the last. We still call this note A because it has a fundamental frequency (the big one) of 440 Hz and harmonic partials (the smaller ones) at 880 Hz, 1,320 Hz, and so on.
  • Harmonic series: The way harmonic partials relate to the fundamental frequency follows a mathematical musical concept called the harmonic series. All pitched sound sources follow it, with partials that double, triple, and quadruple the fundamental in the same predictable way; the sketch after this list prints them out. This fact is why Western music chose the pitches for its notes the way it did. An A of 440 Hz is exactly half the frequency of the next A at 880 Hz, and the A after that doubles again to 1,760 Hz. The interval from one A to the next above or below it is called an octave, and each octave is exactly double the one below it and half the one above it. The same is true for the other notes, B through G, and for the black keys on a piano, called accidentals. The value of each white and black key depends on how its partials interact with those of other notes as part of the harmonic series: when two notes share many partials they are called harmonious or consonant, and when they share few they are called dissonant. This is what makes some chords sound nice and others ugly.
  • Timbre: Timbre (pronounced "TAM-ber") is what a sound sounds like; it can also be called texture, color, or tone, just to make things confusing. Timbre is what makes the same A 440 Hz sound different when sung, played on a guitar, or rung from a bell, and it is shaped by the amplitude of each of the partials. Bells have loud high partials, clarinets have louder odd partials, and pianos waver, drifting slightly above and below their partial frequencies over time. Another contribution to timbre is the envelope of the sound: how long the sound takes to reach its loudest point, and then how long it takes to return to silence after the cause of the sound has stopped, like when a violinist lifts the bow off the string or a pianist releases the keys.
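
To see the harmonic series in plain numbers, here is a small sketch in plain Python (no libraries needed; the variable names are made up for illustration) that prints the first few partials of A 440 Hz:

 # The harmonic series in numbers: each partial is a whole-number
 # multiple of the fundamental, and on most sources each is quieter.
 fundamental_hz = 440.0
 for n in range(1, 7):
     print(f"partial {n}: {fundamental_hz * n:.0f} Hz")
 # prints 440, 880, 1320, 1760, 2200, 2640 Hz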

Music

  • Tonality: The layout of the keyboard is an expression of Western music scales, and those scales are built on the harmonic series, as are the music systems of many other cultures. The circle of fifths is an important concept behind the layout of Western scales, and it too comes from the harmonic series: a note seven semitones above another (the interval called a perfect fifth) shares more partials with it than any other note except the octave itself, which makes the fifth a very important relationship (interval). Understanding the circle of fifths helps you make harmonious, complex music. Other harmonic structures to use in music making are scales, modes, and keys, for building melodies and chord progressions, along with more advanced techniques like counterpoint and polyphony. Sounds whose partials do not line up with the harmonic series are called inharmonic rather than harmonic. The sketch after this list shows the arithmetic linking octaves and fifths to frequency.
  • Rhythm: The times at which notes are played are also limited by convention, just as Western music decided to use only certain frequencies (the ones on the keyboard). Most music is in 4/4, meaning the beats come in groups of four, with the first played louder than the others. Notes are usually some division of this measure, adding up to the equivalent of four quarter notes per measure. For more complex rhythms to experiment with, try polyrhythm, polymeter, syncopation, triplets, and hemiola.
  • Structure: Most music follows a certain form or structure. For pop music it is often intro, verse 1, chorus, verse 2, chorus, verse 3, bridge, chorus, chorus, outro. Each repeating section has the same tonal and rhythmic content, perhaps with variations and different words. Each section, or phrase, may be built on the idea of increasing or decreasing tension, usually with some kind of resolution at the end.
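
As a rough illustration of the arithmetic behind the keyboard, here is a sketch assuming standard twelve-tone equal tuning (the helper name note_freq is hypothetical); it shows octaves doubling exactly, and the equal-tempered fifth landing very close to the pure 3:2 fifth of the harmonic series:

 # Twelve-tone equal temperament: each semitone multiplies frequency
 # by 2 ** (1/12), so twelve semitones exactly double it (an octave).
 def note_freq(semitones_from_a440):
     return 440.0 * 2 ** (semitones_from_a440 / 12)
 print(note_freq(12))    # 880.0   the next A up, one octave higher
 print(note_freq(-12))   # 220.0   the A an octave below
 print(note_freq(7))     # ~659.26 E, a perfect fifth above A
 print(440 * 3 / 2)      # 660.0   the "pure" fifth from the harmonic series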

How a Sound Fills a Space

We can hear the size and other qualities of a room by the amount of reverb, delay, and other characteristics that become part of a sound as it interacts with the space.

  • Delay: Sound bounces off walls and floors. The initial sound reaches your ears first, but as it radiates around the room you hear it again each time it bounces off a wall and arrives after the initial instance. These later arrivals are called delay because they happen after the original; the sound can bounce back and forth so you hear it multiple times as it gets quieter, and each delay sounds like a distinct instance of the sound. Delay in audio software can make a sound repeat indefinitely or change over time as it repeats; a simple version is sketched after this list.
  • Reverb: Reverb is different from delay because it sounds like a washed-out version of the source: you hear the sound itself, then a tail of a jumbled-up version of it. This happens when a sound bounces off many angles of a room and its many reflections reach your ear spread over a duration of time. Each room has its own reverb. A room's qualities can be captured with a technique called convolution reverb and applied to any sound on a computer, or you can simply apply a prebuilt reverb type, like one that emulates a cathedral or a hall.
  • Frequencies in space: To add complexity, each frequency of a sound interacts with a space differently. Lower frequencies pass through walls and objects or get caught up in corners, while higher frequencies bounce off a surface or get absorbed by it. These qualities, plus delay and reverb, tell you a lot about a space without your having to think about it consciously. To improve the acoustics of a room, you can often use bass traps and acoustic paneling (egg cartons, if you like) to break up the simplicity of flat walls and corners.
  • Direction: How do you know what direction a sound is coming from: left, right, behind you? Your ears are spaced apart, so when a sound hits your right ear first and your left slightly after, you know it is roughly some number of degrees to the right. The shape of your outer ear helps you tell whether a sound is in front of or behind you by filtering out frequencies: your brain has learned how frequencies are removed when sound passes around the back of your ear, and it tells you the sound is likely behind you. Left and right can be controlled in audio systems with a control called panning (also sketched below). The ear's filtering is harder to emulate and less common in sound control, but binaural effects can recreate the experience; listening to binaural audio tricks your brain into thinking you are inside the space created by the audio.
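
As a rough sketch of the two effects mentioned above, here is one way a simple feedback delay and an equal-power pan might look in Python with NumPy (function names and parameter values are illustrative, not from any specific plugin):

 import numpy as np
 def feedback_delay(dry, sample_rate, delay_s=0.3, feedback=0.5, repeats=5):
     # Mix in progressively quieter, later copies of the signal,
     # like a sound bouncing back and forth between walls.
     delay_samples = int(delay_s * sample_rate)
     out = np.concatenate([dry, np.zeros(delay_samples * repeats)])
     for n in range(1, repeats + 1):
         start = delay_samples * n
         out[start:start + len(dry)] += dry * feedback ** n
     return out
 def pan(mono, position):
     # position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
     # Equal-power panning keeps perceived loudness steady across the field.
     angle = (position + 1) * np.pi / 4
     return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=1)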

Other effects built on these ideas include echo, chorus, flanger, reverb, phase cancellation, and stereo effects.

Audio Systems

Okay, now that we have an understanding of sound itself, we can move on to how to capture it, turn it into electricity, amplify it, record it, put it on a computer, and run it through effects to get exactly the sound we want. There are two main goals to think about for audio systems: recording sounds, and amplifying them for an event; sometimes you may want to do both. Media Loan has tools for many projects that need these audio systems, whether you want to record in a studio, record in the field, or set up a simple public address (PA) system. No matter the size or complexity of the system, it is important to think of it in terms of signal flow: how the signal moves through the system, which cables it travels through, and what happens to the audio inside each piece of gear. Most setups are pretty simple, like microphone to field recorder to headphones, but you still have to think about which knobs on the field recorder affect the sound source. This line of thinking will help with troubleshooting issues; for more info, check out the [Media Equipment Troubleshooting] guide. Talking about audio gear can get confusing because the function of one piece of gear is sometimes a combination of others, but if you are aware of the rough categories of audio gear, you will know what it means for a mixer to act like an interface.

System Components

  • Microphone: Uses a diaphragm to convert variations in air pressure into variations in voltage, sent down a cable. Mics come with different circuitry: dynamic mics are simpler to use, while condenser mics need power to work, either from a battery or from phantom power, usually 48 volts sent to the mic by whatever it is connected to, such as a mixer. Microphones can have different connectors, with XLR the most common, or 1/8" (mini or aux). You can do a lot with microphones, and there is a great variety of them for different purposes. For more info, check out the [Microphone WIKI] and see what is available in Media Loan's [Microphone Mic Catalog.]
  • Cable: There is a great deal of variety in audio cables, but with a little understanding you can be comfortable choosing what is right for your system. There are XLR, 1/4", 1/8" (mini, 3.5mm, aux), RCA, and some less common others. Any of these can be balanced or unbalanced, an important distinction if you are building a complicated system; most of the time it is okay not to know, but if your signal is noisy, the cable type may be the cause. If you don't know what connector you need, look up your equipment's make and model to find a manual with connector information. Another important concept is mic level versus line level. Microphones output a very weak signal, so whatever they connect to needs a preamplifier to bring that signal up to a level that can survive going through circuits and being recorded. A preamp brings a mic-level signal up to line level, which is roughly 1,000 times stronger. In this process you face the biggest decision of gain staging; for more on that, read the gain staging material in the Concepts section below. You need to set whether your signal is mic or line level as it goes into the preamp, which may live in a mixer or an interface. To make things more confusing, computer cables can carry digital audio: all the examples above are analog, but some equipment can send digital audio to computers or other gear over USB and other cables. If you need an adapter to get from one connector to another, check out the cable and adapter page.
  • Interface: Interface is a broad term for anything connecting two systems, but in audio it typically means an audio interface, which connects an analog system to a digital one. Interfaces are commonly used to take audio from a mixer or microphone, convert it to digital audio, and send it to a computer over USB or another computer cable. Mixers and field recorders can also function as audio interfaces, since some can route their analog inputs to digital outputs that go into the computer to be recorded. Media Loan has several kinds of interfaces, including the Blackjack and two kinds of analog mixers that can convert their signals to digital.
  • Mixer: A mixer takes in multiple audio signals, "mixes" them, and sends them on to another part of the audio system, like speakers or a recording device.
  • Recorder: A recorder captures an audio signal and stores it, whether to a memory card in a field recorder or to files on a computer, so it can be played back or edited later.
  • Speaker: A speaker does the reverse of a microphone: it converts an electrical signal back into the movement of a diaphragm, producing acoustic waves you can hear.
  • Amp: An amplifier boosts a line-level signal to a level powerful enough to drive speakers; powered speakers have the amp built in, while passive speakers need a separate one.
  • Support: Support gear holds everything in place: mic stands, boom stands and boom poles, speaker stands, and similar hardware.

Concepts

Concepts to understand as your systems grow: gain staging; mixer effects such as phase, EQ, and panning; routing tools like buses, aux sends, inserts, and groups; and the stereo field. Gain staging means setting the level at each stage of the signal path so no stage is too quiet (noisy) or too loud (distorted); the sketch below shows the decibel arithmetic behind it.
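
As a sketch of the decibel arithmetic behind gain staging (plain Python; the helper names are made up for illustration), note how gains in dB correspond to plain multiplication of amplitude:

 import math
 def db_to_ratio(db):
     # A gain in decibels as a linear amplitude multiplier.
     return 10 ** (db / 20)
 def ratio_to_db(ratio):
     # A linear amplitude multiplier expressed in decibels.
     return 20 * math.log10(ratio)
 print(db_to_ratio(60))   # 1000.0 -> ~60 dB of preamp gain is the ~1000x from mic to line level
 print(ratio_to_db(2))    # ~6.02  -> doubling amplitude adds about 6 dB
 print(ratio_to_db(0.5))  # ~-6.02 -> halving it takes about 6 dB away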

Recording System

In the studio, you typically run the direct outputs of an analog board into an interface, or use a digital board that handles the conversion itself. A few concepts to know:
  • Digital audio: a recording is a representation of a waveform as numbers. An ADC (analog-to-digital converter) measures the incoming signal, and a DAC (digital-to-analog converter) turns the numbers back into voltage for playback; a digital audio file stores those measurements as binary data. Familiar examples include MP3 files and CDs, compared with analog media like records and tape. A sketch of this idea follows below.
  • Using a DAW: a DAW (digital audio workstation) is the software where you record and edit. They are all pretty similar: projects flow left to right, audio lives in files on your computer, and plugins provide effects like compression, reverb, phase tools, and EQ. DAWs also handle MIDI sequencing, layering, and sampling; watch out for digital artifacts. See the DAW guides on this wiki for the basics of each program.
  • Studio setup: think about your interface, MIDI keyboards, mic setup, headphones versus monitors, EQ and compression, eliminating noise, avoiding feedback while overdubbing, and talkback.
  • Field recording: the main concerns are eliminating noise and choosing the right microphone attachments.
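
Here is a minimal sketch of the sampling idea using only Python's standard library wave module, with the common CD settings of 44,100 samples per second and 16 bits per sample; it writes one second of an A 440 tone to a hypothetical file called tone.wav:

 import math, struct, wave
 sample_rate = 44100   # measurements ("samples") per second
 freq_hz = 440.0
 # Sample and quantize like an ADC: measure the wave 44,100 times a
 # second and store each measurement as a 16-bit integer.
 samples = [int(32767 * 0.5 * math.sin(2 * math.pi * freq_hz * n / sample_rate))
            for n in range(sample_rate)]          # one second of tone
 with wave.open("tone.wav", "wb") as f:
     f.setnchannels(1)            # mono
     f.setsampwidth(2)            # 16-bit (2-byte) samples
     f.setframerate(sample_rate)
     f.writeframes(struct.pack(f"<{len(samples)}h", *samples))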

Public Address System

  • Where to set up: listen from where the audience will be, and have a person operate the mixer during the performance if needed. Media Loan gear can only accommodate a performance with a limited number of mic inputs.
  • Setting levels: know which inputs are mic level and which are line level. Test the output by playing music from a phone or iPod, then do a mic check on each input. Run the output in mono for most events.
  • Feedback: use EQ to cut the frequencies that ring when the system feeds back, and also to improve overall audio quality; the sketch below shows one way to find a ringing frequency. Use headphones and the solo button to check an individual channel without sending it to the room.
  • Safety: don't press buttons randomly or plug in gear mid-show, and be conscious of trims and gains that are turned up. Gaff down your cables so attendees don't trip on them.
  • Different setups: a little Mackie mixer for a small event, bigger stereo speakers for a larger one, and added monitors for performers who need to hear themselves.
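
As an illustration of hunting down feedback, here is one possible way (a sketch assuming Python with NumPy and a short mono recording of the ringing) to find the loudest frequency so you know where to cut with the EQ:

 import numpy as np
 def loudest_frequency(snippet, sample_rate):
     # Return the most energetic frequency in a mono recording,
     # e.g. a captured moment of PA feedback you want to notch out.
     spectrum = np.abs(np.fft.rfft(snippet))
     freqs = np.fft.rfftfreq(len(snippet), d=1 / sample_rate)
     return freqs[np.argmax(spectrum)]
 # Example: a fake 2 kHz "ring" is correctly identified.
 sr = 48000
 t = np.arange(sr) / sr
 print(loudest_frequency(np.sin(2 * np.pi * 2000 * t), sr))   # ~2000.0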