Audio Fundamentals

A guide for basic audio information.

Overview

Audio at Evergreen: Through Evergreen's resources, students can use audio to improve most academic projects, whether you want to set up a sound system for a presentation, record a song, interview, or podcast, or gather sounds for a video, scientific analysis, or computer program. There are many resources to help you learn these skills. See Media Loan's Policies for more information on how to check out gear and other resources.

SOUND

Wave Elements

All sounds we hear are just air pressure variations hitting our ears at different rates. A speaker or piano radiates these variations of air pressure, while a microphone captures them. If you hit a drum or pluck a guitar string, it makes sound because the surface is moving back and forth repeatedly. When it moves out, it produces high pressure that squeezes air particles together; when it moves back, it creates low pressure that lets the air particles spread out. This process repeats until the energy put into the drum or guitar has dissipated and it stops moving. This repeating pattern of high and low pressure is a wave.

    • show wave gif somehow
  • Frequency: The rate of the wave is called the frequency. Frequency is measured as how many cycles of high then low pressure occur per second, expressed in hertz (Hz). If a guitar string vibrates 440 cycles per second, we call that a frequency of 440Hz, and that 440Hz note is called an A because of its frequency. We hear frequency as higher or lower pitches. A high-pitched bird song might be a much higher frequency like 10,000Hz, while a low-pitched bass note could be 60Hz. Frequency is roughly synonymous with tone, note, and pitch.
  • Amplitude: The intensity of the wave is called the amplitude; we hear this as loudness and often describe it as volume. If you pluck the guitar string twice as hard and get it to swing back and forth twice as far, the wave has twice the amplitude, so we hear it as louder. Another common measurement of intensity is the decibel (dB), a logarithmic scale that matches how we perceive changes in loudness.
  • Phase: The third main element of a sound wave is phase: when the wave starts compared to other waves. It matters because sounds can be "out of phase", where their high and low pressure areas cancel each other out. If two microphones pick up the same sound from different distances, the waves arrive offset in time and can cause these kinds of cancellations (see the sketch after this list).
    • show image of all 3 elements
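
Here is a minimal sketch, not from the original page, of how these three wave elements show up in code. It assumes Python with numpy installed; the variable names and values are illustrative only.

 import numpy as np
 
 sample_rate = 44100              # samples per second (see Digital Audio below)
 t = np.arange(sample_rate) / sample_rate   # one second of time points
 
 frequency = 440.0                # Hz: the A note used as an example above
 amplitude = 0.5                  # intensity of the wave, heard as loudness
 phase = 0.0                      # where in its cycle the wave starts, in radians
 
 # One cycle of high then low pressure every 1/frequency seconds.
 wave = amplitude * np.sin(2 * np.pi * frequency * t + phase)
 
 # A second wave half a cycle "out of phase" cancels the first one out.
 out_of_phase = amplitude * np.sin(2 * np.pi * frequency * t + phase + np.pi)
 print(np.max(np.abs(wave + out_of_phase)))   # ~0: the two waves cancel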

Timbre

Timbre is pronounced "tamber". Simply put, it is what a sound sounds like: it is what makes a 440Hz note (an A) sound different when it is sung, played on a guitar, or played on a violin. Timbre is influenced by two things, the partials and the envelope. Timbre can also be called texture, color, or tone.

  • Partials: These are additional frequencies that occur when you play a single note. Think of a guitar string playing that 440Hz note. The string actually produces many more frequencies than that simple 440Hz tone. In addition to the whole length of the string vibrating at 440Hz, half of the string vibrates at twice the speed, producing 880Hz; a third of the string vibrates three times as fast at 1320Hz; a quarter vibrates four times as fast at 1760Hz; and so on. This series continues upward, and all of the vibrations occur simultaneously, creating a complex, rich-sounding wave. Each additional frequency is quieter than the last, in other words, lower in amplitude. We still call the note we played on the guitar an A because it has a fundamental frequency (the lowest frequency) of 440Hz, with partials (all frequencies above the lowest) of 880Hz, 1320Hz, 1760Hz, and so on. The way harmonic partials relate to the fundamental frequency follows a mathematical pattern called the harmonic series, which is the foundation for musical harmony and helps explain why certain notes sound good together. You can tell instruments apart because the amplitudes of their partials vary (see the sketch after this list).
    • image of partials
  • Envelope: The envelope is how the amplitude of a sound changes over time: how long the sound takes to reach its loudest point, and how long it takes to return to silence after the cause of the sound has stopped, like when a violinist lifts the bow off the string or a pianist releases the keys. Each instrument has a different envelope that affects its timbre.
    • image of envelope
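
As a rough illustration (not from the original page), here is a sketch in Python with numpy that builds a tone from a fundamental and its partials, then shapes it with a simple envelope. The 1/n amplitude falloff and the attack and decay times are assumptions chosen for illustration; real instruments differ.

 import numpy as np
 
 sample_rate = 44100
 t = np.arange(sample_rate) / sample_rate      # one second
 fundamental = 440.0                           # Hz: the note we call A
 
 # Harmonic partials: whole-number multiples of the fundamental, each quieter.
 tone = np.zeros_like(t)
 for n in range(1, 9):                         # fundamental plus 7 partials
     tone += (1.0 / n) * np.sin(2 * np.pi * fundamental * n * t)
 
 # A very rough envelope: fast attack, then an exponential decay over time.
 attack = int(0.01 * sample_rate)
 envelope = np.ones_like(t)
 envelope[:attack] = np.linspace(0.0, 1.0, attack)
 envelope[attack:] = np.exp(-3.0 * t[:len(t) - attack])
 tone *= envelope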

Sounds in Spaces

We can hear the size and other qualities of a room based on the reverb, delay, and other characteristics that become part of a sound as it interacts with the space.

  • Delay: Sound bounces off walls and floors. The initial sound hits your ears first, but as that sound radiates around the room, you hear it again each time it bounces off a wall and reaches your ears after the initial instance. The sound can bounce back and forth so that you hear it multiple times, getting quieter with each repeat. Each delay sounds like a distinct copy of the original. How long it takes to hear the delay and how many times it repeats tell you a lot about the room you are in (see the sketch after this list).
  • Reverb: Reverb is different from delay because it sounds like a washed-out version of the sound source. You hear the sound itself followed by a tail of jumbled-up versions of it. This happens when a sound bounces off many surfaces and angles of a room and reaches your ear over a duration of time. Each room has its own reverb.
  • Frequencies in Space: Different frequencies of a sound interact with a space differently. Lower frequencies pass through surfaces and objects, or get caught up in the corners. Higher frequencies, on the other hand, bounce off surfaces or get absorbed by them. These qualities, plus delay and reverb, tell you a lot about a space without you having to consciously think about it. To improve the acoustics of a room, you can often use bass traps and acoustic paneling to break up the simplicity of flat walls and corners.
  • Direction: How do you know what direction a sound is coming from? Is it to the left, to the right, behind you? Your ears are spaced apart, so you can hear when a sound hits your right ear first and your left ear slightly after, which tells you the sound is roughly some number of degrees to the right. The shape of your outer ear helps you know whether a sound is in front of or behind you by filtering out frequencies: your brain has learned how frequencies are removed when sound passes over the back of your ear, and that tells you the sound is likely behind you. Whether a sound sits more to the left or right can be controlled in audio systems with a control called panning.
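
To make the delay idea concrete, here is a minimal sketch (not from the original page, assuming Python with numpy) that adds decaying copies of a signal after itself. Real rooms, and reverb in particular, produce thousands of much denser reflections; this only models a handful of discrete echoes.

 import numpy as np
 
 def simple_delay(signal, sample_rate, delay_seconds=0.3, feedback=0.5, repeats=4):
     """Add progressively quieter copies of a signal after itself."""
     delay_samples = int(delay_seconds * sample_rate)
     out = np.zeros(len(signal) + delay_samples * repeats)
     out[:len(signal)] += signal               # the initial sound
     gain = 1.0
     for i in range(1, repeats + 1):
         gain *= feedback                      # each bounce off the walls is quieter
         start = delay_samples * i
         out[start:start + len(signal)] += gain * signal
     return out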

AUDIO

Audio Concepts

  • Signal Flow: Sound waves in the air can be turned into an electrical signal, called an analog signal, using a microphone, mixed with other signals using a mixer, then sent back out as new physical sound waves using a speaker. No matter the size or complexity of the system, it is important to think of it in terms of signal flow: what order is the signal flowing through the different cables and gear, and what is happening to the audio at each step? This line of thinking will help with quick adjustments and troubleshooting.
  • Gain Staging: An important concept to know is mic level vs. line level signal. Microphones output a very weak signal down their cables, so whatever they connect to needs a "preamplifier" to amplify that signal to a level that can survive going through circuits and being recorded. A preamp brings a mic level signal up to line level, which is roughly 1000 times stronger. Choosing whether your signal is mic or line level as it goes into the preamp, which may be built into a mixer or interface, is the biggest decision of gain staging (see the sketch after this list).
  • Stereo field:xx
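
As a quick illustration of the mic level vs. line level gap (a sketch, not from the original page), the roughly 1000x voltage difference mentioned above can be expressed in decibels:

 import math
 
 def ratio_to_db(voltage_ratio):
     """Convert a voltage (amplitude) ratio to decibels."""
     return 20 * math.log10(voltage_ratio)
 
 # Line level is roughly 1000x the voltage of mic level,
 # so a preamp has to add on the order of 60 dB of gain.
 print(ratio_to_db(1000))   # 60.0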

Audio System Components

There are categories used to identify audio gear, but many items combine elements from multiple categories. For example, a mixer may also act as an interface. It is important to be aware of these categories and that they may be combined depending on the gear.

  • Microphone: All microphones turn acoustic sound into an analog signal. The easiest mic to use is the "dynamic" mic: you can just plug it in and it's ready to go. Another common type is the "condenser", which needs power to work. Depending on the mic, that power may come from a battery or from "phantom power", which is usually 48 volts of electricity sent to the mic from whatever you are connecting it to, like a mixer. There is often a button labeled "48v" to enable it. You can do a lot with microphones and there is a great variety of them used for different purposes. If you want to know more about mics, check out the [Microphone WIKI].
    • image of sm57 and
  • Cable: For most uses, cables are straightforward: you find the cable whose plug fits the socket you want to connect to. The most common connectors are XLR, 1/4", 1/8" (3.5mm mini/aux), and RCA. If you don't know what connector you need, you can look up your equipment's make and model to find a manual with connector information. USB and other computer cables can carry digital audio signal if your gear is designed to use it. For advanced systems, you may want to learn the difference between balanced and unbalanced signal.
    • use images from caelin
  • Interface: In audio, an interface is a device that connects an analog system to a digital system. Interfaces take one or more analog signals (voltages) from a mixer or microphone and convert them to digital signals (1's and 0's). They can also direct digital audio from your computer back out into your analog system. You can use USB to connect the interface to a computer to record the digital audio. Other gear can function as an audio interface too, because some of it can route analog inputs to digital outputs that go into the computer to be recorded. Media Loan has an interface called the "Black Jack". There is also a microphone that acts as an interface called the "Blue Yeti", and two kinds of analog mixers and two field recorders that can all act as an interface.
  • Mixer: A mixer is a piece of equipment that takes in multiple audio signals, "mixes" them, and sends them to another part of the audio system, like speakers or a device to be recorded. Mixers also provide per-channel controls such as phase and panning, and routing options like buses, aux sends, inserts, and groups.
  • Recorder: Anything that stores audio. A field recorder, like the "LS-100", is the most popular kind of recorder in Media Loan. But if you were to use an interface and record on a computer, that computer would be a recorder. Media Loan also has tape recorders.
  • Amp: An amp is an amplifier, which takes an analog signal and increases its gain, which you can think of as volume. Guitar players call the box they connect their guitar to an amp, but it is actually both a speaker and an amp in one box. Most amps are just an amp with no speaker, and some devices are just speakers, so the two need to be connected. If your speaker has a fan in it, that is probably because it is cooling an amp. Most gear that accepts microphones has preamplifiers, a kind of amp that brings mic level up to line level, as discussed in the Gain Staging section above. Most amps that aren't specifically preamplifiers convert line level to speaker level so the signal has enough gain to be heard through a speaker.
    • ml doesn't have any amps without speakers, but we do have guitar amps (with speakers)
  • Speaker: There are many different uses for speakers, and each has its own type of speaker. There are headphones and guitar amps, of course. For events, the big ones facing the audience are often called mains or house speakers. Any speakers facing the performers so they can hear themselves are called stage monitors. We also call the speakers used in recording studios monitors, because you are monitoring the mix.
  • Support: This isn't an audio industry term, but rather how Media Loan categorizes items that hold audio gear. These kinds of gear can be overlooked, but they are very helpful for getting clean audio. Media Loan has mic stands, speaker stands, pistol grips, boom poles, gorilla pods, goosenecks, and stereo mount adapters.

RECORDING

Space

The key decision in finding the right space is how best to eliminate noise. You want to avoid fans, road noise, other people walking around, wind, and any other unintended sound. It is hard to remove these kinds of sounds from a signal; it might be possible with noise reduction tools, but don't count on it. At this point in time, most recording is digital: your signal gets sent to a computer and turned into a file. Unless you put a tape in your device, it is most likely digital.

  • Studio: Any room used for recording could be called a studio. Professional studios have a control room to separate the recording engineer from the performers. Evergreen has 4 general access recording studios and 5 recording studios with control rooms, but you need to be in an audio class to use them. You can record in any quiet room and get good results; you just have to consider not only noise but also how reverby the room is. When you make a sound, can you hear it bounce around the room? That is usually caused by flat, parallel surfaces. You can put up curtains or use other acoustic treatment techniques. There is no perfect audio system; you need to design it based on your needs, so look at the Audio System Components section above for ideas on where to get started.
  • Field: Field recording is anything that isn't done in a studio, usually outside. People often use devices called field recorders, which are all-in-one audio recording devices: they function as the mic, interface, computer, and speakers. Most phones have these abilities too, but a field recorder can capture much higher quality sound and can connect to professional microphones. The microphones built into most field recorders, like the LS-100, work great; they are in a stereo XY configuration, so you can get a nice wide stereo recording of a whole field. If you want to record a specific sound like an interview or a bird, you may want something more directional like a shotgun mic, such as the one in the Sennheiser combo kit. Shotguns are often paired with windscreens and boom poles for capturing sound for video. You could use a lavalier, a tiny mic that clips onto a person's shirt, for high quality close micing. For recording only one sound and rejecting all others, Media Loan's best option is the parabolic mic kit, often used for nature recording and sports. You should always have headphones to monitor your sound. You never know when it's going to rain, so it's a good idea to carry your gear in a dry bag, available at Media Loan.
    • boom pole, field recorder, parabolic,

Digital Audio

Analog signals are turned into digital signals by being "sampled" by an interface.

  • Sample: A sample is the smallest unit used to represent sound; it represents a single position of a speaker or microphone, or the comparative air pressure associated with it. As a side note, modern music refers to small sections of sound as a sample, which is a different use of the word: an actual "sample" is just a single slice of audio that in itself doesn't sound like anything. All of the samples put together in order make a waveform.
  • Sample Rate: The sample rate is how many samples there are per second; 44.1kHz is a common sample rate, which means 44,100 samples per second. You lose some sound information by reducing a continuous analog signal to a discrete digital one, but most people can't hear the difference.
  • Bit-Depth: A sample's resolution is defined by its bit-depth, which is how many 1's and 0's are used to describe each sample. A higher bit-depth means each sample has more detail. The digital signal can then be streamed from one device to another, recorded, and manipulated using digital audio tools (see the sketch after this list).
  • File Type: Once recorded, these 1's and 0's are stored as a file. There are many file types; MP3 may be the most recognizable, but it is better to use higher quality ones like .wav or .flac if possible. CDs and DVDs work by representing 1's and 0's as tiny reflective (1) or non-reflective (0) spots on the plastic. Tapes and records, on the other hand, are analog and hold continuous detail between what would otherwise be reduced to samples.
  • Artifacts: All of these media and tools have "audio artifacts", errors that reveal the medium being used. You can hear the blockiness of low quality YouTube videos, the pitch warble of a run-down tape deck, or a vinyl record getting stuck and looping a couple of times. All of these problems are good to be aware of, and they can even be used to trick a listener into thinking they are hearing one medium rather than another.
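
Here is a minimal sketch (not from the original page) using only Python's standard library that ties these terms together: it generates one second of a 440Hz tone at a 44.1kHz sample rate, quantizes each sample to 16 bits, and stores the result as a .wav file. The filename is illustrative.

 import math, struct, wave
 
 sample_rate = 44100                 # samples per second (44.1 kHz)
 bit_depth = 16                      # bits per sample: 65,536 possible levels
 frequency = 440.0                   # Hz
 max_value = 2 ** (bit_depth - 1) - 1
 
 frames = bytearray()
 for i in range(sample_rate):        # one second of samples
     value = math.sin(2 * math.pi * frequency * i / sample_rate)
     frames += struct.pack("<h", int(value * max_value))   # quantize to 16 bits
 
 with wave.open("tone.wav", "wb") as f:
     f.setnchannels(1)               # mono
     f.setsampwidth(bit_depth // 8)  # bytes per sample
     f.setframerate(sample_rate)
     f.writeframes(bytes(frames))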

DAW

A Digital Audio Workstation, or DAW for short, is a versatile tool for recording and editing your audio. There are many different kinds, but they all rely on the same principles of sound design, so if you learn one, you can learn the others. They have a mixer view of some kind, usually at the bottom, to represent the same kinds of controls you would have on a physical mixer. There is also a track view, where you see the recorded waveforms running left to right in each track, with tracks stacked vertically. Most DAWs can also use MIDI, which might live in a MIDI track mixed in with all the audio tracks.

  • Recording: You can either bring in a recorded file from a field recording or record directly from an interface. The interface will have different channels that you connect to the channels of the virtual mixer (usually at the bottom). This is done with the I/O (input/output) settings: find those words on a channel and select the input you want to record. Sometimes interfaces aren't recognized instantly, so you may need to go into the DAW or computer preferences to enable the interface as your audio device.
  • MIDI: MIDI is a very powerful tool in DAWs. It is a representation of musical note information like pitch, velocity (similar to volume), and timing. It doesn't make sound by itself, but it is very useful because you can use it to tell virtual instruments to make sound. It can also connect to many kinds of audio gear using a MIDI cable, or nowadays a USB cable, to carry MIDI signals (see the sketch after this list).
  • Mixing: On a computer, audio mixing is similar to, if not interchangeable with, the concept of audio editing. You adjust the gain (volume) of each channel, add effects (listed below), and add "automations" to adjust those qualities over time. You can also sequence MIDI, or chop up, move around, and layer audio.
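
As a small illustration of how MIDI represents pitch as numbers rather than sound (a sketch, not from the original page), MIDI note numbers map to frequencies with a standard formula:

 def midi_to_frequency(note_number):
     """MIDI note 69 is A above middle C (440 Hz); each step is a semitone."""
     return 440.0 * 2 ** ((note_number - 69) / 12)
 
 print(midi_to_frequency(69))   # 440.0, the A used throughout this page
 print(midi_to_frequency(60))   # ~261.6, middle C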

Recording Workflow

  1. Test out your equipment and space before your performer gets there. Make sure you are getting a clean signal with no noise.
  2. With your performer, run a mic check and set your trim (see the Gain Staging section above).
  3. Make sure that the performer can hear what they need to: themselves, what is being recorded, any pre-recorded sounds, whatever they need.
  4. Record the take, check it, continue.
  5. Save your files in at least 2 locations for safety.

Effects

An audio effect is anything that adjusts or creates various qualities of a sound. Effects can be physical pieces of gear like guitar pedals or rack modules, but here we will be talking about the virtual effects that come with all DAWs. Most DAWs let you choose effects and order them in the mixer view at the top of the channels. Effects have "parameters" that can be controlled with knobs, and those parameters can be linked to movement over time using a method called "automation". Most effects have a "bypass" button that turns the effect off so you can quickly compare what it sounds like off and on (see the sketch after this list).

  • Panning, Stereo:f
  • Phase Ø:f
  • Reverb, Delay, Phaser, Flanger, Echo:f
  • EQ:f
  • Compression:f
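
As a toy illustration of what "parameters" and "bypass" mean (a sketch, not from the original page, assuming Python with numpy), here is a bare-bones gain effect:

 import numpy as np
 
 class GainEffect:
     """A toy effect with one parameter (gain in dB) and a bypass switch."""
     def __init__(self, gain_db=0.0):
         self.gain_db = gain_db      # the parameter a knob or automation would move
         self.bypassed = False       # the bypass button
     def process(self, signal):
         if self.bypassed:
             return signal           # bypassed: audio passes through untouched
         return signal * 10 ** (self.gain_db / 20)
 
 effect = GainEffect(gain_db=-6.0)   # -6 dB is roughly half the amplitude
 quieter = effect.process(np.ones(4))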

PUBLIC ADDRESS (PA)

Safety

  • Don't press buttons or plug in gear at random; be conscious of trims and gains that are turned up
  • Gaff down cables so attendees don't trip on them

Feedback

  • EQ for feedback
  • EQ for audio quality

Setting Up

  • Mic vs line
  • Test output with music from an iPod
  • Mic check
  • mono
  • Headphones, solo button
  • Where to set up
    • Listen from where the audience will be and have a person operate the mixer during the performance if needed.
    • ML gear can only accommodate an x-mic-input performance.
  • Different set ups
    • Little Mackie
    • Bigger stereo speakers
    • With monitors for