soundscapes
- Elise Guay
- Oct 31, 2024
- 18 min read
Updated: Dec 23, 2024
Coming into this course I truly didn't know what to expect. I felt slightly unprepared to be a student again so late in life (although everyone keeps saying, "it's never too late!"). I felt especially unprepared coming into a tech-heavy course, being thrown straight into the deep end with the sharks in the water – coding and enchanted objects. Coding is something I had experience with 12 years ago when I was a stubborn 23-year-old, hell-bent on standing up to the man (aka digital artwork). I gained nothing from that class, and in fact my tutor, after I subverted the assignment by drawing something by hand and then scanning it in, said, "I have been teaching this course for over 10 years and I have never seen that before." This was not meant as a compliment. I think some of that disdain carried over, despite my now using digital platforms, especially for tattoo designs. And then we started soundscapes.
Even looking back on my notes I'm getting chills. We watched an amazing documentary called "Making Waves: The Art of Cinematic Sound," and ironically all the noise in my brain stopped. Admittedly, for the last couple of weeks I was questioning what I was even doing here and what I would gain from this course, and I was close to giving up, but this single documentary changed everything. I guess that's the point of media, isn't it? According to the documentary, sound is the first sense we develop, hence why mothers put headphones over their bellies to try and make their kid the next Beethoven. For me, music has saved my life more times than I can even give it credit for. From being an angsty teenager listening to emo music and heavy metal, to being an angsty adult listening to those same emo bands, just at a lower volume. It brings you back to a time – it doesn't really matter what time, but a time – whether that's the song that was playing when you got on stage with your favorite band of 20 years, or the song you can't listen to because of a friend who passed away. It affects you. In that same vein, music and sound within film elicit the same response.
There were several examples shown for this, but the one that stood out to me the most was the opening scene of "Saving Private Ryan." Captain Miller (Tom Hanks) sits behind one of those Czech hedgehogs watching the world explode around him. The camera pans to a young soldier covering his head and screaming, but all you hear is subdued explosions, and all you see is Miller's narrow field of view. Those two elements together give the viewer insight into what it may have felt like in that moment, on Omaha Beach, directly in the middle of all that death and destruction. A lack of sound, or subdued sound, can have as intense an effect on an audience as explosions. And that's the magic of sound: it brings you to a place or time or emotion simply by being timed correctly.
Thomas Edison invented the first phonograph in 1877. That wasn't even 150 years ago – think about that. In the last 150 years we have evolved from a clunky box that made noises to tiny pieces of equipment like iPhones, or even watches, that hold hundreds of thousands of hours of music, connected wirelessly to headphones that play music at such high quality Thomas Edison would have passed out listening to it. It took some time to attach music to film, which happened in 1926, with voice following the next year in 1927. This birthed the sound editor. What they came to realize was that the hung microphones weren't strong enough to pick up sounds that weren't directly in range of the mic. The first real instance of what is now recognized as a sound designer came in 1933 with "King Kong." One of the coolest things about some of the techniques used in that film is that they are still being used today, almost 100 years later. Another interesting fact is that sound effects found their origin in radio, with Orson Welles and his broadcast of "War of the Worlds," for example. I personally equate Welles with a much older time, and the fact that he died only 4 years before I was born is actually mind-blowing.
In 1963, Alfred Hitchcock, one of the most well-known horror film directors, released "The Birds," which by today's standards is incredibly hokey and low in production value, but which was groundbreaking back then. One of the things that made it so intense was his use of sound. He didn't use music to instill fear in his audience; he used sound, or the lack thereof, to convey the fear. In his other famous release, "Psycho," however, Hitchcock "was so pleased with the score written by Bernard Herrmann that he doubled the composer's salary to $34,501. Hitchcock later said, '33% of the effect of Psycho was due to the music.'" An interesting fact about that: "For the famous shower scene, Hitchcock had originally intended it to not have any music at all. Herrmann, however, secretly scored and recorded the music for it anyways. Upon viewing, Hitchcock loved the music and decided to keep it in the film" (Mobile Symphony Orchestra).
Coming into the 1960s and 1970s, we saw an explosion of famous names onto the scene, from Walter Murch to George Lucas to Francis Ford Coppola. Despite the film industry suffering due to the rise of television, these guys worked their tails off and are now some of the most well-known names in film history. They came together to form American Zoetrope, which first saw failure and bankruptcy with the flop of "THX 1138." They then decided to take on a gangster film that had been turned down by other directors, and from that came one of the most well-known films ever made, "The Godfather." This reinvigorated American Zoetrope to continue on to a success that I'm sure they never imagined.
The next innovation in sound was moving on from the ancient mono sound (one speaker directly behind the screen) to stereo sound (two speakers on either side of the screen). One of the innovators of this era was Barbra Streisand. With her 1976 release of "A Star is Born," she recognized the importance of an audience being made to feel like they're there. "Before that film, moviegoers heard the sound from a single speaker behind the screen. Streisand knew about a new surround sound system designed by Dolby that was ripe for use in theaters. It was urgent, she thought, to unleash that technology on the cinema world. So she offered $1 million of her own money to prove what could happen if enough time and care went into the sound design before a revolutionary presentation of the film in surround sound. Warner Bros. was so thrilled with the soundwork effort that they ended up paying for it instead" (The Beat).
Ben Burtt is considered the father of sound design. One of his most famous pieces of work is "Star Wars." He did a lot of field recording, literally going out into the world and making noises. He recorded hundreds of different sounds, from hitting a metal wire to the sound of a 70s tube TV. This was revolutionary; it had never been done before. There were no synthesizers – everything was made with real sounds that Burtt had recorded himself. Another aspect that revolutionized sound design is that the sound effects were determined before the film was shot, which at the time simply wasn't done. Later in his career, he worked on "WALL-E." Here are some interesting facts about that production:
WALL-E rattles along alone on Earth looking for garbage to pick up. Did you hear the whirring sound he makes?
Ben Burtt was sitting at home one day wondering what kind of sound WALL-E would create. Ben happened to be watching an old cowboy movie. A character in the movie was cranking an old power generator and a big smile came over Ben's face. The cranking of the generator, he thought, was the perfect sound for WALL-E.
What is that cool sound that EVE makes as she flies about? Ben had a lot of fun recording that. He found a friend who owned a three-meter-long radio controlled airplane. As the plane flew over his head, Ben was running underneath with a tape recorder recording the whirring sound so he could capture the perfect sound for EVE.
How did they make the sound for WALL-E's little cockroach friend?
Don't take for granted the clicking and clacking sound he makes. Ben had a pair of metal handcuffs and recorded the sound they make opening and closing. It's the perfect sound for a cockroach.
Do you remember how scary the sand storm was in WALL-E?
Ben had to get extra creative to make the sound for that. He ran down a hallway carrying a big heavy canvas bag. The noise it made was used for the sand storm. Not very scary when you think about it, but when you hear it in the movie it's perfect.
Does the sound of crashing shopping carts sound authentic?
Well, that's because Ben took his 10-year-old daughter to a supermarket one day and put a tape recorder in the cart. They then smashed the carts into walls! The sound they recorded was used in that very scene in the movie.
Here are some more fun sound facts you may not know:
• There were 2,400 sound files created for WALL-E!
• WALL-E's compacting noise is the recording of a car being crushed at a dump.
• M-O's cleaning noise is a recording of Ben's electric shaver.
• WALL-E's voice is created by running Ben Burtt's voice through a computer. With the aid of the computer, Ben is able to make WALL-E's voice do things a human voice never could.
• The wind on Earth is comprised of recordings of Niagara Falls. (Middlesex University)
Once the floodgates opened, new innovations kept being made in the world of sound design. One of the most influential films was "Apocalypse Now," which totally changed the face of sound in film. It introduced the use of surround sound, giving the audience an experience like they had never had before. "Coppola’s recreation of modern warfare is technologically and psychologically stunning. Mists and explosions blindingly illuminate the eerie, deranging oppressiveness of the jungle as the sound system surrounds and disorients the audience with a sensory blitz of roaring helicopters, weapons fire, screams, and the monumental crescendos of Wagner" (Freedom Socialist Party). There was a sound script for this film, which had never been done before, and sound editors were put in charge of specific parts of the sound editing instead of the entire catalogue. The way the sound was produced in this film became the gold standard for how future films would approach sound, among them "Jurassic Park," "Toy Story," and the short "Luxo Junior."
The development of Automated Dialogue Replacement (ADR) was pretty revolutionary. This is secondary audio recorded to be placed over the film. One of the examples they used was Tom Hanks's scene in "A League of Their Own" where he delivers his famous "There's no crying in baseball!" line. The actress he was yelling at couldn't be heard very well, so they re-recorded both her lines and his to make them more audible. The other use for ADR is recording things like group noise or background noise. The best example of this is the mob scene in the film "Argo." It was recorded separately and then tracked over the pre-filmed scene.
There are multiple layers to sound, including ambience, music, rerecording, ADR, and sound effects. Of all of this, though, the most interesting part to me is the Foley artist. Of all the jobs that could be had within the sound industry, this is the one that stood out to me the most and could actually lend itself to a future career path. "Foley is named after sound-effects artist Jack Foley. Foley sounds are used to enhance the auditory experience of a movie. They can be anything from the swishing of clothing and footsteps to squeaky doors and breaking glass. Foley can also be used to cover up unwanted sounds captured on the set of a movie during filming, such as overflying airplanes or passing traffic" (Wikipedia). It seems, though, that finding a job as a Foley artist would prove difficult: "Most foley artists either work for established foley studios or as freelancers. Aspiring foley artists should be aware that it's difficult to make a long-term career in this field, mainly owing to the relatively small demand—even in Hollywood there are only a number of working foley artists" (Berklee). So maybe I need to broaden my scope and lean more towards the general realm of sound design?
workshop 1
After such a revelation earlier in the day, I was chomping at the bit to just dive right in and start making sound magic. However, much like anything else, there are basics that need to be learned first. As the saying goes, you have to walk before you can run, and crawl before you can walk. I have to remember that this is wildly uncharted territory for me. I was never a "drama" kid in school, ironically enough I was an athlete, a "jock" of sorts. I swam 7 days a week, sometimes 3 times a day, so I never got the chance to explore interests such as music or the drama department. I tried out for a play once and got the role, but I had to give it up because I had swim practice. So maybe, much like the whole experience of going to school again, this is a second chance to do something I would have wanted to in the first place had I not been influenced by people like my mother who said there's no money in the arts.
In this workshop we watched another film called "Digital Audio Explained," which has a pretty self-explanatory title. I took several pages of notes throughout the video, and looking back on them now, I think I need to dive deeper into the parts I found interesting or useful enough to write down. The first is the "back to basics" of sound, breaking it down to its core: the sound wave. First was amplitude. "Amplitude is a measurement of the amount of energy transferred by a wave. Amplitude on a transverse wave is typically measured as the distance between the peak or trough of the wave and the equilibrium position, or the position of the medium at rest" (Study.com). The very basic graph I have drawn in my notes shows a wavelength, indicating that amplitude is the measurement of the height of the wave, which correlates to how loud it is. The shorter the wavelength, the higher the pitch; the longer the wavelength, the lower the pitch.
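To make those two properties of a wave concrete, here's a little Python sketch of my own (not from the video, just an illustration): amplitude scales how tall the wave is, which we hear as loudness, and frequency sets how quickly it cycles, which we hear as pitch.

```python
import math

def sine_wave(freq_hz, amplitude, sample_rate=44100, duration_s=0.01):
    """Generate a short burst of a sine wave as a list of sample values."""
    n_samples = int(sample_rate * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n_samples)]

# A low, quiet tone vs. a high, loud one:
low = sine_wave(freq_hz=100, amplitude=0.2)    # long wavelength, small peaks
high = sine_wave(freq_hz=1000, amplitude=0.9)  # short wavelength, tall peaks

# The peak value reflects loudness; how fast it repeats reflects pitch.
print(max(low), max(high))
```

Plotting those two lists would reproduce the graph in my notes: same kind of wave, just stretched differently in height and width.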
Second was sample rate. This is the number of samples of sound taken per second, measured in hertz (Hz). "Sample rate defines how many times per second we sample, or take a measurement of, an analog audio signal as it is converted into a digital signal. The sample rate also defines the high-frequency response of an audio recording. The Nyquist Theorem states that the highest audio frequency we can record is half of the frequency of the sampling rate. This means that with a sample rate of 44.1 kHz, we can record audio signals up to 22.05 kHz. Likewise, a 96 kHz sample rate allows for 48 kHz of audio bandwidth. […] We know that human hearing reaches from about 20Hz to 20 kHz, so why would we need sampling rates above 44.1 kHz? One answer is that many people, including scientists, claim that humans can perceive sounds as high as 50 kHz through bone conduction. That claim may theoretically be correct, but through air humans only hear up to about 20 kHz, so in a perfect world 20 kHz would be all the frequency range needed by humans" (SonarWorks Blog).
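The Nyquist Theorem mentioned in that quote is just a halving rule, which is easy to sanity-check in a couple of lines of Python (my own sketch, reproducing the numbers from the quote):

```python
def nyquist_limit(sample_rate_hz):
    """Highest audio frequency a given sample rate can capture (Nyquist Theorem)."""
    return sample_rate_hz / 2

# The three rates from the quote and my notes:
for rate in (44_100, 48_000, 96_000):
    print(f"{rate} Hz sample rate -> up to {nyquist_limit(rate)} Hz of audio")
```

Running this gives 22,050 Hz for CD-quality 44.1 kHz audio, which is why that rate was chosen: it comfortably covers the roughly 20 kHz ceiling of human hearing.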
Thirdly was the bitrate. "Bit rate is the number of bits used to represent one second of audio. It’s calculated by multiplying the bit depth by the sample rate. For example, a 16-bit audio recording at a 44.1kHz sample rate would have a bit rate of 705,600 bits per second (16 x 44,100). The bit rate operates in a similar manner to the sample rate, but it measures bits instead of samples. Bit rate measures the bandwidth of data transmission equipment and is expressed in kilobits per second (Kbps), which is equivalent to thousands of bits per second. When it comes to audio, the bit rate is a term used more commonly for streaming or playback rather than audio recording. Essentially, a higher bit rate means better audio quality because each ‘bit’ captures a piece of data that can reproduce the original sound. So the more bits in a unit of time, the closer you are to recreating the original sound wave that your mic has produced" (Waveroom).
Going back to my class notes, I found it interesting that frequency is measured in hertz, while amplitude is measured in decibels. The lower the frequency, the longer the wavelength: a 3Hz wave has much more spread-out peaks and troughs, whereas a 12Hz wave has very little space between peaks and troughs. Kilohertz represents thousands of hertz; following the Nyquist Theorem above, a 40kHz sample rate would capture up to 20,000Hz of sound. The most common sample rate is 44.1kHz (44,100 samples per second). Bit rate is sample rate and bit depth multiplied together.
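That multiplication is simple enough to check directly. Here's a quick sketch of my own that reproduces the 705,600 bits-per-second figure from the Waveroom quote (the `channels` parameter is my addition, since stereo doubles the figure):

```python
def bit_rate(bit_depth, sample_rate_hz, channels=1):
    """Uncompressed PCM bit rate in bits per second: depth x rate x channels."""
    return bit_depth * sample_rate_hz * channels

cd_mono = bit_rate(16, 44_100)       # 705,600 bps, matching the quote above
cd_stereo = bit_rate(16, 44_100, 2)  # 1,411,200 bps for two channels
print(cd_mono, cd_stereo)
```

Divide the stereo figure by 1,000 and you get the familiar 1,411 kbps quoted for CD-quality audio streams.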
I have been trying to keep notes of questions I have. The first of these questions is "what is audio clipping?" I'm not sure if we went over it and I just missed it, but it was certainly something I didn't understand, which I guess is the point of these Learning Journals: to explore and do our own research. "It often shows itself as an admonishing red light in our DAW, on our audio interface, or on our speakers. […] In the simplest sense, audio clipping is a form of waveform distortion. When an amplifier is pushed beyond its maximum limit, it goes into overdrive. The overdriven signal causes the amplifier to attempt to produce an output voltage beyond its capability, which is when clipping occurs. […] If a loudspeaker is clipping, for example, the phenomenon can be aurally understood as distortion or break-up. Physically, if a loudspeaker remains in a clipping state for too long, there is potential for damage to occur due to overheating. However, many speakers have built-in precautions to avoid clipping, such as circuits that act like limiters" (Produce Like A Pro). So basically, what I'm understanding from this is that audio clipping is distortion of the sound when the system is overworked, and it can lead to damage to the equipment if prolonged.
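To picture what clipping does to the waveform itself, here's a tiny sketch of my own (the ±1.0 limit stands in for a system's maximum level): any sample that tries to go past the limit gets its peak flattened, and that flattening is the distortion you hear.

```python
def count_clipped(samples, limit=1.0):
    """Count samples sitting at or beyond the maximum level, i.e. clipped peaks."""
    return sum(1 for s in samples if abs(s) >= limit)

# A signal driven past the limit: peaks beyond +/-1.0 are squashed flat.
raw = (0.5, 1.3, -1.8, 0.9, 1.0)
overdriven = [min(max(s, -1.0), 1.0) for s in raw]

print(overdriven)                 # the 1.3 and -1.8 peaks are flattened to +/-1.0
print(count_clipped(overdriven))  # 3 samples sit at the limit
```

That "admonishing red light" in a DAW is essentially doing this same check on the incoming signal in real time.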
Back to my class notes, we explored the different types of microphones. There are two main types: dynamic and condenser. Dynamic mics are used widely in concert or live-event settings, due to their wide dynamic range of loudness. Condenser mics are used more in controlled environments, such as studios, for voice-overs or recording vocals. They tend to be much more sensitive than dynamic mics. Microphones break down further into ribbon, lavalier, contact, and piezo (not to be confused with the piezo buzzers we used in the enchanted objects module). Ribbons are used for voice recording in studios; lavalier or "lav" mics, as they are known in the industry, are clip-on mics used on stage or in reality shows; contact mics allow you to capture the vibrations of a solid object, usually metallic; and lastly, the piezo is just a small and cheap mic.
Next up were the polar patterns. A polar pattern is basically how much of a sound will be picked up by the microphone, and in which direction. An omnidirectional mic picks up sound from an equal range of 360º, which means its placement within a space doesn't need to be a huge consideration. Cardioid mics pick up sound from a more limited range and are best used in settings like concerts, because they pick up sound in a heart shape (hence "cardioid") in front of the mic and reject sound from behind it. Figure-of-8 or bidirectional mics pick up sound from in front and behind and reject sound from the sides; these are best used in studio settings. The last was the shotgun mic, so called because of how it is constructed; it has a very narrow recording range and is used in situations like news broadcasts. You'll often see one with a big fuzzy windscreen over it, which helps dissipate sounds like wind.
Lastly were the recorders. Solid state recorders use media like SD cards to record sound; the Zoom recorders we have in the studio are an iteration of that. Interface recorders are a medium between computers and mics; these are still portable, if not as much so as the Zooms. Recorders with XLR inputs are another portable option.
The other question I had written down was "who figured out how to set up acoustic studios and acoustics in general?" In my research, this is what I found:
The roots of the recording studio go back to 19th-century inventors such as Thomas Edison and Alexander Graham Bell, who laid the groundwork for the phonograph industry. […] However, the idea that recording studios could play a key part – in terms of equipment and atmosphere – in the creation of great music took hold in the 40s, with the proliferation of tape as a recording medium (when thermoplastic allowed for considerable improvement in the sound quality of recording). Companies such as RCA – who maintained studios in New York, Chicago, and Hollywood – Decca, Universal Recording Corporation, and Columbia Records began to focus on developing studio techniques. […] Pioneer Bill Putnam, an early architect of the modern recording studio, used techniques at his studio at Chicago’s Civic Opera that would come to define the modern record engineer, such as the use of tape and multi-tracking, creatively-deployed reverbs, and overdubbing. (discovermusic)
designing a soundscapes worksheet
We were given an assignment to go somewhere and listen to the various sounds of the place. The example was a train station, which included sounds such as a train horn, a baby crying, and pigeons. I chose a Costa coffee shop because I figured it would offer a wide array of sounds and provide good data. This proved more difficult than expected. There wasn't as much variety of sound as I had hoped; it was pretty limited to people talking, the coffee machine, and music. I took notes of my observations, which I will include below in the format in which I wrote them, and then explore further.
• One of the challenges of placing things on the graph is my position within the cafe; I was unable to find a seat directly center of the cafe due to how busy the place was at 10:20am on a Wednesday
• I almost wish there were less people so I could hear the other sounds better
• Despite how busy it is here, there isn't a wide range of sounds besides conversation; maybe somewhere like a train station, busy park, shopping mall, or casino would be better for diversity of sound
• An observation is that all of the sounds seem to be coming from the top left of the graph, despite there being people in front and behind me; I'm located next to a window, the door is to my back right, the barista bar is directly to my left but probably about 15' away behind a plexiglass half wall (probably left over from covid)
• The ceiling is very high, probably around 20', and made of corrugated sheet metal with metal I-beams in between; the flooring looks to be snap-floor material (hard plastic made to look like wood); there is very little, if any, fabric on the chairs; I'm guessing that all of these factors add to how high the amplitude is, which is bordering on uncomfortably loud
• Another challenge was placing sounds on a vertical; the music is obviously going to be the highest because it's coming from the speakers in the corners of the cafe, which should give it a surround-sound quality, but due to how loud everything else is, it sounds like it's only coming from the back right speaker behind me
• At first listen, everything except the music sounded like it was on the same level, but upon closer observation, sounds that are further away sound higher up, almost as if they were on an upward arcing line
• I'm curious if the distinct lack of acoustic absorption is purposeful? Kind of like how fast food restaurants are painted bright colors to trigger a fight or flight response, maybe Costa deliberately designed their buildings to be a louder environment in order to move customers through quicker? Is there something to this? Would I find the same observations at a smaller chain or mom and pop/one off place? (My guess is no); the word "hostile" comes to mind, like hostile architecture
• Looking over the sheet again, I decided to move the 5 that represents music because even though it's the highest vertically, it's still prominent above the other noises, such as the baristas and coffee machine/steamer, but muted by the conversations, especially the ones closest to me (separated by only a few feet between tables)
• I'm curious to see if different days, months, seasons, etc., play into the noise levels, or even a different Costa location – are they all like this, or is it this one in particular due to its proximity to a grocery store, gym, bank, and main road?
• There seems to be a lot of people doing work and having meetings
• I think the time of day and day of the week play a bigger factor than initially anticipated – it's Wednesday which could mean people cramming for weekend deadlines; interviews, at least in my experience, happen in the first half of the week; there are many universities in Glasgow, including the Glasgow veterinary school right down the street
Below, I have included a recording of just the sounds made by placing my phone camera down on the table, as well as a short clip of my proximity in the space
One of the things I'd like to explore further is this idea of "hostile" architecture, and whether there is something to the idea that it's purposeful, designed to make people leave. According to one article, "That’s because loud restaurants are more profitable. According to Pearlman, the haute-casual dining trend also helps restaurateurs run bigger and more successful businesses. Constructing interiors out of hard surfaces makes them easier (and thus cheaper) to clean. Eschewing ornate decor, linens, table settings, and dishware makes for fewer items to wash or replace. Reducing table service means fewer employees and thus lower overhead. And as many writers have noted, loud restaurants also encourage profitable dining behavior. Noise encourages increased alcohol consumption and produces faster diner turnover. More people drinking more booze produces more revenue. Knowing this, some restaurateurs even make their establishments louder than necessary in an attempt to maximize profits" (The Atlantic). Obviously this is geared towards establishments that sell alcohol, but there is something to be said in regards to cleanliness and reduced upkeep. Instead of viewing the architecture as "hostile," maybe the idea that it is "minimalist" is more accurate. However, minimalism doesn't lend itself to good acoustics.
According to another article, loud isn't always bad, "If you agree with the general consensus that most restaurants are too loud, then it may seem counterintuitive that some restaurant owners and chefs actually seek out the noise. Why is this? They believe that it signals that the establishment is popular, and that it produces a sense of conviviality and hospitality. Although this is up for debate, there’s evidence that a loud environment is actually profitable. Hard Rock Cafe, for example, has the practice down to a science. Just like bright lights, loud, fast music cause patrons to talk less, consume more, and leave sooner" (Fohlio). Basically, it seems like restaurants are using loud environments to pump people through the doors. And there is actual psychology behind that, "Loud, fast music activates the sympathetic nervous system (the ‘fight-or-flight' response), which opposes the parasympathetic system and thereby diminishes appetite" (Psychology Today). It is truly incredible that the use of sound can trigger such visceral responses, much like the example I mentioned earlier of fast food chains painting bright colors like red and yellow to have the same effect as loud music.