Audio Fundamentals

From Help Wiki

Revision as of 21:56, 18 May 2020

Page in Progress

Audio at Evergreen: Through Evergreen's resources, students can use audio to improve most academic projects. Whether you want to set up a sound system for a presentation, record a song, interview, or podcast, or gather sounds for a video, scientific analysis, or computer program, see Media Loan's Policies for more information on how to check out gear and other resources.

Sound

Wave Elements

All sounds we hear are just air pressure variations hitting our ears at different rates. A speaker or piano radiates these variations, while a microphone captures them. If you hit a drum or pluck a guitar, it makes sound because it is moving back and forth repeatedly. When the surface moves out, it produces high pressure that squeezes air particles together; when it moves back, it creates low pressure that lets the air particles spread out. This process repeats until the energy put into the drum or guitar has dissipated and it stops moving. This repeating pattern of high and low pressure is a wave.

    • show wave gif somehow
  • Frequency: The rate of the wave is called the frequency. Frequency measures how many cycles of high then low pressure occur per second, in hertz (Hz). If a guitar string vibrates 440 cycles per second, we call that a frequency of 440Hz. That 440Hz note is called an A note because of its frequency. We hear frequency as higher or lower pitch. A high-pitched bird song might be a much higher frequency like 10,000Hz, while a low-pitched bass note could be 60Hz. Frequency is roughly synonymous with tone, note, and pitch.
  • Amplitude: The intensity of the wave is called the amplitude; we hear this as loudness and often describe it as volume. If you pluck the guitar string twice as hard and get the string to oscillate back and forth twice as far, the wave has twice the amplitude and we hear it as louder. Another common measurement of intensity is decibels, or dB, which is like an averaging of amplitudes over time.
  • Phase: There is a third main element of a sound wave called phase. It is when the wave starts compared to other waves. It can be important to consider because sounds can be "out of phase", where their high and low pressure areas cancel each other out. If two microphones pick up the same sound from different distances, the waves may be offset and cause these kinds of cancellations.
    • show image of all 3 elements
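The three elements above map directly onto the sine function. Here is a minimal Python sketch (the function name and the 44,100 samples-per-second rate are just choices for this illustration), which also demonstrates the phase cancellation described under Phase:

```python
import math

def sample_wave(freq_hz, amplitude, phase_rad, t):
    """Instantaneous pressure of a sine wave at time t (in seconds)."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t + phase_rad)

# One second of a 440Hz "A" note, measured 44,100 times per second.
a440 = [sample_wave(440, 1.0, 0.0, n / 44100) for n in range(44100)]

# The same wave shifted 180 degrees "out of phase" cancels it completely.
inverted = [sample_wave(440, 1.0, math.pi, n / 44100) for n in range(44100)]
summed = [a + b for a, b in zip(a440, inverted)]  # every value near zero
```

Doubling `amplitude` makes the wave louder; changing `freq_hz` changes the pitch.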

Timbre

Timbre is pronounced "tamber". Simply put, it is what a sound sounds like. This is what makes a 440Hz note (an A note) sound different when sung, played on a guitar, or played on a violin. Timbre is influenced by two things: the partials and the envelope. Timbre can also be called texture, color, or tone.

  • Partials: These are additional frequencies that occur when you play a note. Think of a guitar string playing that 440Hz note. The string actually produces many more frequencies in addition to that simple 440Hz frequency. While the whole length of the string vibrates at 440Hz, each half of the string also vibrates at twice the speed, making 880Hz. In addition, each 1/3 of the string vibrates 3 times as fast, making 1320Hz; each 1/4 vibrates 4 times as fast, making 1760Hz; and so on. This process repeats indefinitely, and all the vibrations occur simultaneously, creating a complex wave that sounds rich. Each additional frequency is increasingly quieter, or in other words, has a lower amplitude. We still call the note we played on the guitar an A because it has a fundamental frequency (the lowest frequency) of 440Hz, while it has partials (all frequencies above the lowest) of 880Hz, 1320Hz, 1760Hz, and so on. The way harmonic partials occur in relationship to the fundamental frequency follows a mathematical concept called the Harmonic Series, which is the foundation for all musical harmony and explains why notes sound good together. You can tell the difference between instruments because the amplitudes of their partials vary.
    • image of partials
  • Envelope: Envelope is how the amplitude of a sound changes over time. It is how long the sound takes to get to its loudest point and then how long it takes to return to silence after the cause of the sound has stopped, like when a violinist takes the bow off the string or when a pianist releases the keys. Each instrument has a different envelope that affects its timbre.
    • image of envelope
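The harmonic partials described above are easy to compute. In the sketch below, the 1/n amplitude roll-off is a simplifying assumption (real instruments have more varied partial amplitudes, which is exactly what makes them sound different from each other):

```python
def harmonic_partials(fundamental_hz, count):
    """Frequencies of the first `count` harmonics, fundamental included."""
    return [fundamental_hz * n for n in range(1, count + 1)]

def partial_amplitudes(count):
    """A simple 1/n roll-off: each higher partial is quieter.
    This is an illustrative assumption, not a law of any instrument."""
    return [1 / n for n in range(1, count + 1)]

# The A note example from the text: 440Hz with partials 880, 1320, 1760...
print(harmonic_partials(440, 4))  # [440, 880, 1320, 1760]
```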

Sounds in Spaces

We can hear the size and other qualities of a room based on the reverb, delay, and other characteristics that become part of a sound as it interacts with the space.

  • Delay: Sound bounces off walls and floors. The initial sound hits your ears first, but as that sound radiates around the room, you hear it again as it bounces off a wall and reaches your ears after the initial instance. The sound can bounce back and forth so that you hear it multiple times as it gets quieter. Each delay sounds like a distinct instance of the sound. How long it takes for you to hear the delay, and how many times it happens, tell you a lot about the room you are in.
  • Reverb: This is different from delay because it sounds like a washed-out version of the sound source. You hear the sound itself, then a tail of a jumbled-up version of it. This happens when a sound bounces off many angles of the room and its many reflections reach your ear over a duration of time. Each room has its own reverb.
  • Frequencies in Space: The different frequencies of a sound interact with a space differently. Lower frequencies pass through surfaces and objects, or get caught up in the corners. Higher frequencies, on the other hand, bounce off surfaces or get absorbed by them. These qualities, plus delay and reverb, tell you a lot about a space without you having to consciously think about it. To improve the acoustics of a room, you can often use bass traps and acoustic paneling to break up the simplicity of flat walls and corners.
  • Direction: How do you know what direction a sound is coming from? Is it left, right, behind you? Your ears are spaced apart, so you can hear when a sound hits your right ear first and the left slightly after. This lets you know a sound is approximately some number of degrees to the right. The shape of your outer ear helps you know if a sound is in front of or behind you by filtering out frequencies. Your brain has come to know how frequencies are removed when sound passes over the back of your ear, and it tells you the sound is likely behind you. Whether a sound sits more to the left or right can be controlled in audio systems with a control called panning.
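A single reflection of the kind described under Delay can be sketched by mixing a delayed, quieter copy of a signal back into itself (a minimal illustration; real rooms produce many overlapping reflections):

```python
def add_delay(samples, delay_samples, decay):
    """Mix a delayed, quieter copy of the signal back into itself,
    like one reflection bouncing off a wall."""
    out = list(samples)
    for i in range(delay_samples, len(samples)):
        out[i] += samples[i - delay_samples] * decay
    return out

# An impulse, like a single clap, followed by silence:
clap = [1.0] + [0.0] * 9
echoed = add_delay(clap, delay_samples=4, decay=0.5)
# The echo arrives 4 samples later at half the level.
```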

Audio Systems

Audio Concepts

  • Signal Flow: Acoustic sound waves can be turned into an electrical signal, called an analog signal, using a microphone; mixed with other signals using a mixer; then sent back out as a new acoustic signal using a speaker. The amplification of live sound for an audience is often called Public Address, or PA for short. If you are making a recording of the audio, almost all modern devices use digital recording technology and store the sounds as 1's and 0's on some kind of computer. No matter the size or complexity of the system, it is important to think of it in terms of signal flow: how the signal moves throughout the system, which cables it passes through, and what each piece of gear does to the audio. Most setups are pretty simple, like microphone to field recorder to headphones, but you have to think about which knobs on the field recorder affect the sound source. This line of thinking will help with troubleshooting issues; for more info, check out the [Media Equipment Troubleshooting] guide. It can get confusing to talk about audio gear because sometimes the function of a piece of gear is a combination of other gear, but if you are aware of the general categories of audio gear, you will know what it means for a mixer to act like an interface.
  • Gain staging:xx
  • Stereo field:xx

Audio System Components

  • Microphone: There are different kinds of microphones, but all of them use some kind of material, called the element or transducer, that moves in response to sound pressure variations and creates a tiny electrical signal. The level, or strength, of that signal is said to be "mic level", but more on that in the gain staging section. There are three main kinds of mics: dynamic, condenser, and ribbon. To produce a signal, dynamic mics have a diaphragm move a coil over a magnet to fluctuate voltage. Ribbons are basically the same idea, but they have a fragile ribbon of metal move back and forth instead. The condenser is a bit more complicated: it has a sensitive capacitor that fluctuates in voltage when pressure pushes a small metal plate closer to another small metal plate. An important difference sets condensers apart: they need voltage sent to them to work. This power is called phantom power, and it is often 48 volts, written 48v. It may come from a battery or from whatever the mic is connected to, like a mixer. A lot of gear will have a 48v button on it to make a condenser work, but don't leave the button on when you are using a dynamic or ribbon. Microphones can have different connectors, with XLR as the most common, or 1/8" (mini or aux). You can do a lot with microphones, and there is a great variety of them used for different purposes. For more info, check out the [Microphone WIKI] and see what is available in Media Loan's [Microphone Mic Catalog.]
  • Cable: There is actually a great deal of variety in audio cables, but with just a little understanding you can be comfortable choosing what is right for your system. There is XLR, 1/4", 1/8" (mini, 3.5mm, aux), RCA, and some others that are less common. Any of these can be either balanced or unbalanced. That is an important concept if you are building a complicated system; most of the time it is okay not to know, but if your signal is noisy, that difference in cable type may be the cause. If you don't know what connector you need, you can look up your equipment's make and model to find a manual with connector information. Another important concept is mic vs. line level signal. Microphones output a very weak signal, so whatever they connect to needs a "preamplifier" to amplify that signal to a level that can withstand going through circuits and being recorded. A preamp brings a mic level signal up to line level, which is about 1000x stronger. In this process you face the biggest decision of gain staging; for more info, read the gain staging section below. You need to choose whether your signal is mic or line level as it goes into the preamp, which may be in a mixer or interface. To make things more confusing, computer cables can carry digital audio signals. All the examples listed above have been analog, but some equipment can send digital audio to computers or other gear using USB or other cables. If you need an adapter to get from one connector to another, check out the cable and adapter page.
  • use images from caelin
  • Interface: Interface is a broad term that applies to anything connecting two systems, but in audio it typically refers to an audio interface, which connects an analog system to a digital system. They are commonly used to take audio from a mixer or microphone, convert it to digital audio, and send it to a computer over USB or another computer cable. Mixers and field recorders can function as audio interfaces because some have the ability to connect analog inputs to digital outputs that then go into the computer to be recorded. Media Loan has many kinds of interfaces; there is the Black Jack, and 2 kinds of analog mixers that convert
  • Mixer: A mixer is a type of equipment used to take in multiple audio signals, "mix" them, and send them to another part of the audio system, like speakers or a device to be recorded. Mixers also offer controls and routing such as phase, panning, buses, aux sends, inserts, and groups.
  • Recorder:
  • Amp:
  • Speaker:
  • Support: Boom Stands
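As a worked example of levels (see the Cable entry above): the roughly 1000x boost a preamp applies to bring mic level up to line level is usually quoted in decibels, where voltage ratios use 20 times the base-10 logarithm:

```python
import math

def ratio_to_db(voltage_ratio):
    """Convert a voltage gain ratio to decibels (20 * log10 for voltage)."""
    return 20 * math.log10(voltage_ratio)

# A preamp boosting a mic signal about 1000x applies about 60 dB of gain.
print(round(ratio_to_db(1000)))  # 60
```

This is why preamp gain knobs are marked in dB: a huge multiplication range fits on one small scale.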

Recording

Studio / Field

  • Modern and digital gear
  • computer
  • Interface

Digital audio: representation of a waveform, ADC/DAC, binary

  • Concept of a digital audio file; files on the computer
  • To record digitally, the analog signal from a microphone is reduced to 1's and 0's in a computer to represent "samples". Samples can be thought of as the smallest digital unit of a digital recording; each one represents the amplitude of the waveform at a single instant in time.
  • Talk about examples of file types we are familiar with, like MP3, CD, records, tape
  • Sampling
  • Artifacts
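The sampling idea in the notes above can be sketched as follows. This is a simplified illustration of what an ADC does, not real hardware; the 44.1kHz rate and 16-bit depth are the familiar CD-quality numbers, used here only as example values:

```python
import math

def record(signal, sample_rate_hz, duration_s, bit_depth):
    """Sketch of an ADC: measure the signal at regular intervals (sampling)
    and round each measurement to the nearest step the bit depth allows
    (quantization). `signal` is a function of time in seconds."""
    levels = 2 ** (bit_depth - 1)  # steps available for values in [-1, 1]
    samples = []
    for n in range(int(sample_rate_hz * duration_s)):
        value = signal(n / sample_rate_hz)
        samples.append(round(value * levels) / levels)
    return samples

def tone(t):
    return math.sin(2 * math.pi * 440 * t)

# 10 milliseconds of a 440Hz tone at CD quality: 441 samples.
digital = record(tone, sample_rate_hz=44100, duration_s=0.01, bit_depth=16)
```

A lower `sample_rate_hz` or `bit_depth` is where digital artifacts like aliasing and quantization noise come from.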

Using a DAW, link to DAW folders. They are all pretty similar

  • Midi
  • Sequencing layering

Plugins / effects

  • Have talked about: reverb, phase, panning
  • Haven't: compression, EQ
  • Left to right
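The left-to-right panning noted above is often implemented with an equal-power pan law; here is a sketch (the function name and the -1 to +1 position convention are choices for this example, not a standard API):

```python
import math

def pan(sample, position):
    """Split a mono sample into left/right with an equal-power pan law.
    position runs from -1.0 (hard left) to +1.0 (hard right)."""
    angle = (position + 1) * math.pi / 4  # maps -1..1 onto 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# Centered: both channels sit near 0.707 (-3 dB) so total power is constant.
center_l, center_r = pan(1.0, 0.0)
# Hard left sends everything to the left channel.
hard_l, hard_r = pan(1.0, -1.0)
```

Using cosine/sine instead of a simple linear fade avoids the volume dip a sound would otherwise have when panned to the center.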

eliminate noise, attachments

Public Address

Where to set up

Listen from where the audience will be and have a person operate the mixer during the performance if needed.

ML gear can only accommodate a x mic input performance.

Mic vs line. Test output with music from an iPod

Input

Mic check

Output

mono

Feedback

Eq for feedback

Eq for audio quality

Headphones, solo button

Safety

Don't press buttons randomly or plug in gear without thinking; be conscious of raised trims and gains

Gaff down cables so attendees don't trip on them

Different setups

Little Mackie

Bigger stereo speakers

With monitors for