Experiments
Below you'll find a miscellaneous selection of projects I've worked on that contain interactive performance elements, procedural structures, nonlinear approaches to composition, and other topics related to game audio or new media generally.
Nostalgia Machine (2017) for live sampling environment, built in Max/MSP
Nostalgia Machine is a live sampling environment created in the Max/MSP visual programming suite by Cycling '74. It takes an arbitrary collection of uncompressed audio as input and generates cue points within that audio that a solo performer can then manipulate with an external controller. The interface of the patch also includes visual feedback that can be displayed for an audience during performance.
The patch has five main parts:
A file parsing module that searches for valid directories of audio for use in the patch
A module that analyzes the length of each loaded audio file and generates a set of random playback cues; 8 distinct cues are generated whenever a file is loaded or selected, and cues from 2 separate files are available for playback at any given time
A monophonic file playback system with rate manipulation
The interface itself, manipulable via MIDI devices
The GUI, which displays:
The name of the audio directory currently selected
File names for the currently loaded audio cues
Visual feedback from the performer’s MIDI device
A webcam feed that displays a color-manipulated and pixelated image of either the performer or the audience, to taste
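As a rough illustration, the cue-generation module's logic could be sketched outside Max like this (Python stand-in; the 8-cue and 2-file constants come from the description above, while the function names and uniform-random cue placement are my assumptions about how the patch behaves):

```python
import random

CUES_PER_FILE = 8   # distinct cues generated per loaded file
ACTIVE_FILES = 2    # files available for playback at any given time

def generate_cues(length_ms, n=CUES_PER_FILE):
    """Pick n random start points within one audio file, sorted in time."""
    return sorted(random.uniform(0, length_ms) for _ in range(n))

def load_bank(file_lengths):
    """Choose ACTIVE_FILES files from the directory and generate
    a fresh set of cues for each (cues change on every reload)."""
    chosen = random.sample(list(file_lengths), ACTIVE_FILES)
    return {name: generate_cues(file_lengths[name]) for name in chosen}
```

Because the cues are regenerated on every load, two performances over the same directory of audio never expose the same set of moments.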
Because the cues within each audio file are selected randomly each time a new file is loaded, the performer does not know which parts of the audio file will play until they trigger the cues. This forces the performer to be responsive and resourceful in order to make something musical out of what might otherwise be a jumbled mess. The resulting sound world is similar to a radio rapidly changing stations, or flipping through the channels on a TV set.
The environment is not programmed with any specific collection of audio in mind, nor does it come prepackaged with audio by default. Any audio may be used, and the curation of audio collections is a large part of the performance of the piece. A well-curated collection of material will inevitably represent one of the following things:
A specific time
A specific place
A genre of music or classification of sound
A specific person, i.e., the person doing the curating
Similarly, a performance of Nostalgia Machine inevitably reveals how a performer engages with the collection of audio and what it means to them personally. This can be inferred from how the performer navigates the material: which moments they choose to linger on or repeat (if any), which they skip through, and which they slow down and speed up.
Performers are highly encouraged to give audience members a way to view the entire collection of source material being used. This provides context and allows the audience to better appreciate the thought and work behind the curation of material, even if they do not recognize the material itself.
Proof of concept: A partial nonlinear Wwise music implementation for Toby Fox's Undertale (2015)
As a final project of sorts for a video game sound design course at the Peabody Institute, I created this partial implementation for Toby Fox's game Undertale (2015) using Wwise. All of the audio is original material, meant to demonstrate the different kinds of cues that would be required were the game to use a more dynamic music system. The project also provided a nice opportunity to imagine an aesthetic alternative to the original Undertale soundtrack.
Undertale's original music system was not built with Wwise, and I did not have access to the game's source code, so these cues represent a proof of concept only, with no hooks into the game code.
Undertale is copyrighted and owned by Toby Fox. Aside from game stills, none of the material presented here is from the original game.
Static Cues
The following cues were written as simple loops that play for the duration the player is in the area or encounter. They are capable of being interrupted, ducked, or otherwise transformed in response to game events, but are otherwise fixed in structure.
Dynamic Cues
The following cues are segmented so that they can be put together in real time, either in response to game code or using an internal logic system. The videos below show each cue in motion and are accompanied by descriptions of their internal logic.
Title Screen
The title cue contains 3 main parts:
An initial bass drum hit, played once on start
A subsequent continuously looping "heartbeat" sound
A set of two intermittent high sine/triangle synths, set up as Wwise triggers that can be hooked up to respond arbitrarily to game code. In this demonstration, they are triggered manually.
The original game doesn't linger long enough on the title screen to account for these elements as I've composed them, but in a full implementation, it would be easy to change the title screen's behavior to respond to player input rather than cycle between a title card and an introductory cutscene.
Snowdin Town
This cue contains a single static loop of music accompanied by a set of 15 pizzicato string ensemble inserts. In this demonstration, the inserts fire at specific places in the main cue 20% of the time; rather than being controlled by a simple probability statement, however, they could be hooked into the game state, e.g. whether the player is indoors or outdoors, which outside map is loaded, or how close the player is to certain objects on the map.
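The insert logic reduces to something very small (Python sketch; the 20% probability and 15-insert count come from the description above, and the game-state hook is hypothetical):

```python
import random

NUM_INSERTS = 15
BASE_PROBABILITY = 0.20  # chance an insert fires at each marked position

def should_play_insert(probability=BASE_PROBABILITY):
    """Simple probability gate, evaluated at each marked point in the loop."""
    return random.random() < probability

def pick_insert(game_state=None):
    """Choose which insert plays; a real hook would weight this choice
    by game state (indoors/outdoors, current map, object proximity)."""
    return random.randrange(NUM_INSERTS)
```

Replacing `should_play_insert` with a game-state query is the whole difference between the probabilistic demo and a fully dynamic version.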
Snowdin Town Shop
This cue starts with a basic introduction and statement of the main theme, and then settles into a loop between variations of those two parts. A sequence-and-switch structure chooses among 3 separate versions of the intro/transition material, followed by one of 6 different variations on the main theme.
There is also a parallel switch set of 29 one-shot inserts that play intermittently on an irregular timer.
This cue was made so involved in order to create a structure extensible to the many other shops in the game. Each shop would have its own set of transitions and variations on the basic theme drawn from a master pool of those elements, and the ambient one-shots could potentially be customized to each location.
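The alternating sequence-and-switch behavior can be sketched like this (Python stand-in; the 3-intro and 6-theme counts come from the description above, and the alternation pattern is my reading of how the loop settles):

```python
import random

NUM_INTROS = 3   # intro/transition variants
NUM_THEMES = 6   # main-theme variants

def shop_sequence(n_segments):
    """Alternate intro/transition and theme segments, choosing a random
    variant at each step, mimicking the sequence-and-switch structure."""
    out = []
    for i in range(n_segments):
        if i % 2 == 0:
            out.append(("intro", random.randrange(NUM_INTROS)))
        else:
            out.append(("theme", random.randrange(NUM_THEMES)))
    return out
```

Per-shop customization would then amount to swapping which variant pools the two switches draw from.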
Sound design rough cut for Shift 2: Unleashed (2011) teaser trailer
The footage used here was originally released for a sound design competition hosted by Waves in 2011. It was provided for the contest by Electronic Arts (EA).
This audio cut was made as an assignment for a video game sound design course at the Peabody Institute of the Johns Hopkins University in 2019. It uses only the complement of sounds provided for the original 2011 contest.
Etudes written in Csound and ChucK
The following selections are experiments in the text-based audio programming languages Csound and ChucK.
The Csound etudes are presented here as (mostly tongue-in-cheek) short demonstrations of ability, written during a course of study on synthesis theory at the Peabody Institute. They make substantial use of Csound's peculiar tracker-like functionality (not advised, very clunky) and demonstrate specific types of synthesis studied in the course.
The ChucK pieces are much more in the spirit of exploratory side projects, written as I was trying to feel out the kinds of things I could do with the language. Due to the "live" nature of the language, the pieces are less fleshed out.
Csound
Tracker demo (after Rick Astley's "Never Gonna Give You Up")
Subtractive synthesis demo (after Alice DJ's "Better Off Alone")
Additive synthesis demo (variations on emulating the THX logo cue)
AM synthesis demo (discretely arranged inharmonic AM banding instruments)
FM synthesis demo (FM banding artifacts generated by carrier and modulator signals changing frequencies at different rates)
Waveshaping demo (short composition using a waveshaped polyphonic synth)
The wavetable source is excerpted from "ただのともだち" ["Just Friends"] by artist salyuxsalyu
ChucK
"ground" (basic looping)
"slots" (random modal pitch selection, programmatic event timing changes)
"binary modes" (binary toggling between random modal pitch collections)
"chord toggle" (a switch tree launching threads that play different harmonies)
Further projects in Max/MSP
Sailor1: 2-osc polyphonic subtractive synthesizer
16-voice keyboard control
voice and filter envelopes with variable velocity sensitivity
standard Max implementations of lowpass, highpass, bandpass, and notch filters
simple delay unit with feedback
pastel magical girl aesthetic
[sample coming soon]
Procedure Dances: generative Markov emulation of Steve Reich's music
Reichian rhythms in two voices are generated according to a Markov-like system that chooses whether a new note should be played based on the rhythmic content of the previous beats
pitch material is chosen by a similar set of rules that determine the allowable set of generatable pitches and for how long each of those sets are used
the comparative state memory for the active harmony is shorter than it is for rhythm, but is theoretically extensible to arbitrary lengths
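A minimal version of the rhythm rule might look like this (Python sketch; the memory length and probability table here are invented for illustration, since the patch's actual weights aren't documented above — only the principle of conditioning the next note on recent rhythmic content is):

```python
import random

# Probability of sounding a note, keyed by how many of the last
# MEMORY beats contained a note (a denser recent past rests more).
MEMORY = 4
P_NOTE = {0: 0.9, 1: 0.7, 2: 0.5, 3: 0.3, 4: 0.1}

def next_beat(history):
    """Decide whether the next beat sounds, given recent rhythm."""
    recent = history[-MEMORY:]
    return random.random() < P_NOTE[sum(recent)]

def generate_rhythm(length, seed=(1, 0, 1, 0)):
    """Grow a rhythm one beat at a time from a seed pattern."""
    beats = list(seed)
    for _ in range(length - len(beats)):
        beats.append(1 if next_beat(beats) else 0)
    return beats
```

The pitch rules described above would run the same way, just with a shorter `MEMORY` window over the harmonic state.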
[sample coming soon]
Pluto1: 4-stage video effects matrix built with Jitter/Vizzie
runs a single video source through a maximum of 4 stages of video processing in any arbitrary order
effects include brightness/contrast control, image smearing, tiling, and cell-shuffling
all effects are manipulable using either sliders on screen or MIDI CC learn
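The "any arbitrary order" routing amounts to choosing a permutation of the effect chain (Python sketch; the four effect names come from the list above, and the string-composition stand-in for actual Jitter processing is an assumption for illustration):

```python
from itertools import permutations

EFFECTS = ["brightness_contrast", "smear", "tile", "cell_shuffle"]

def route(frame, order):
    """Apply stages in the given order; each stage would be a Jitter
    process frame -> frame (stubbed here as nested labels)."""
    for name in order:
        frame = f"{name}({frame})"
    return frame

# All four stages active gives 4! = 24 possible orderings,
# plus shorter chains that bypass one or more stages.
all_orders = list(permutations(EFFECTS))
```

Since effects like tiling and smearing don't commute, each of the 24 orderings produces a visibly different result from the same source.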
[sample coming soon]
emmie: Max/MSP implementation of my piece "emmie goes to the zoo" (2016)
7-voice sinusoidal polyphonic synthesizer with cue-group activation
voice availability monitoring in 2 formats:
text
visual feedback of progress of last active cue
stereo reverb
low-tech shorthand score for order of cues
sophie's shooting star séance special (2015) for keyboards/electronics
16-voice polyphonic saw synth
130-voice polyphonic sinusoidal synth
used for generating semi-random modal lattices
monophonic hybrid subtractive lead synth
monophonic noise envelope generator
unbelievably childish but nonetheless respectably clean interface design
Proxessor: 3x3 stereo signal processing matrix
3x3 matrix routing for an initial incoming stereo signal
up to 3 stages of effects processing
bitcrushing and sample rate manipulation
stereo delay with feedback and wet/dry control
ring modulation
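One way to read the 3x3 routing is as a gain matrix mixing three sources (the dry input plus the effect returns) into three effect sends; the sketch below is my interpretation of that structure, and the gain values are placeholders, not values from the patch:

```python
# Rows = sources (dry input, stage 1 return, stage 2 return);
# columns = destinations (sends into stages 1-3). Gains are assumptions.
MATRIX = [
    [1.0, 0.0, 0.0],  # dry input -> stage 1
    [0.0, 0.7, 0.0],  # stage 1 return -> stage 2
    [0.0, 0.0, 0.5],  # stage 2 return -> stage 3
]

def route_sample(sources):
    """Mix 3 source signals into 3 destination sends per sample."""
    return [sum(sources[r] * MATRIX[r][c] for r in range(3))
            for c in range(3)]
```

Rewriting the matrix at performance time is what lets the same three processors cover serial, parallel, and feedback-style topologies.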
Miscellaneous retro-inspired compositions and sound design
Standalone cues
These cues were written as homages to low-bit compositions of early heavy-hitting home consoles and handhelds. They utilize a set of plugins known as tweakbench, several of which emulate various pieces of early sound hardware. All instruments and processing effects except delay lines were created using this suite of plugins.
loosely after "Normal Duel" Pokémon Trading Card Game (1998)
loosely after "King Theme" from Super Mario Bros. 3 (1988)
loosely after "Green Greens" from Kirby's Dream Land (1992)
loosely after Steely Dan's "Rikki Don't Lose That Number" (1974)
loosely after "My Favorite Things" from The Sound of Music (1965)
loosely after "Gerudo Valley" from The Legend of Zelda: Ocarina of Time (1998)
SFX
This quick set of sounds was made as an implementation of the full complement of modern Pokémon attack type sound effects in an early-generation style. They were created with a sound-creation tool called Bfxr and use no post-processing.