Edward Wrzesien, a 2004 graduate of the computer science department, sits at the helm of the studio’s controls. The music resonating through the dim room is a piece he composed for the upcoming Mountain Computer Music Festival, the first event of its kind in Missoula, which takes place Tues., Sept. 7, at the University. A few feet away stands Charles Nichols, assistant professor of composition and music technology and festival organizer.
Wrzesien’s piece is one of the 10 to 12 student, faculty and guest compositions that will be featured during the concert.
The festival, Nichols says, will showcase the high-level creative work that composition and music technology students are doing with computers. It will also bring in guests and guest compositions, and introduce the community to current computer music at the international level. And, Nichols hopes, it will create an exciting and ongoing venue for computer music in Western Montana.
As Nichols explains, computer music is growing in popularity throughout the country and around the world, in no small part because the technologies have become easier for people to use.
“It’s no longer the case that you have to be at an Ivy League college with a mainframe computer on campus,” says Nichols, who received his master’s degree from Yale and his doctorate from Stanford. “We’ve come a long way in computer music. Now, everything is in your laptop.”
In addition, he says, you can augment your basic laptop setup with controllers and outboard synthesizers. “Even when I started back in the ’80s, it was this big computer with sound cards and outboard samplers—a lot of stuff to carry around for one performance.”
The improved portability of the tools, says Nichols, is only one of the reasons computer music has become a popular medium in which to work, and for all kinds of music. “Academic, classically trained musicians,” says Nichols, “are using computer-generated sound in concert pieces, and popular musicians are using computer-generated sound and music in popular music.” Artists like Aphex Twin, he adds, have taken computer music and made it popular. Musicians as diverse as Radiohead and Cher also employ computer music in their sets.
Within the academic community, computer music has a long history, even at UM. Electronic or technologically enhanced music, says theory and composition professor Patrick Williams, has been a part of the composition program at UM in one form or another since the late ’60s, when analog technology was emerging around the country.
Charles Nichols came to UM three years ago to head the department’s composition and music technology program, which now has over 20 students and offers a major in music. When he arrived at UM, Nichols updated a room on the second floor of the music building to create a state-of-the-art computer music and recording facility. He added eight-channel surround sound, several controllers and outboard synthesizers, and made it possible to record onto eight tracks of digital audio. He also constructed a smaller second studio and upgraded the computers and keyboards in the lab where he teaches.
Not all computer music is created and performed in the same manner, Nichols says. As with other kinds of music, styles of computer music vary, and the interactions between musician, instrument and computer vary as well.
Nichols teaches four different program classes in computer music: Musique Concrète; Sequencing and Synthesis; Interactivity; and Computer-Generated Sound. The idea, he says, is to give each student a basic overview of the current state and history of computer music. Most of the pieces to be performed at the festival are interactive pieces, with the musician playing an acoustic instrument into the computer and the computer processing the sound, “spatializing it around the speakers.” With others, Nichols continues, “the electronic instrument is triggering things that happen in the computer.”
Wrzesien’s eerie piece uses computer-generated sound, created through a process called sonification. The sounds are not natural (they don’t exist in nature), but calling them “unearthly” would also be inaccurate, considering the source of the piece: Wrzesien took ice flow data from a satellite transmission over Antarctica and “sonified” it into music. He formatted the data for his own purposes, he says, and then “threw it” into algorithms he wrote to create the music. Nichols calls “Ice Flow Sonification” a “starkly beautiful piece.”
Wrzesien got the idea for his project when Nichols came back from a computer music festival in Korea and told Wrzesien about a piece he had heard that incorporated data from both healthy and cancerous human cells.
“You use the different parts of the data as different elements of sound,” Nichols says. However, he emphasizes, “the music that comes out of the computer is only as good as the composer who has programmed the computer. It’s very easy now to make music on the computer, but it’s very hard to make interesting and expressive music on a computer.”
As an example of the interesting kind, Nichols points to Wrzesien’s four-and-a-half-minute piece, which took the composer about a month to make, and about two more months to make “expressive and musical.”
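The basic idea Nichols describes, using “the different parts of the data as different elements of sound,” can be sketched in a few lines of code. The function below is a hypothetical, minimal illustration (not Wrzesien’s actual algorithm): it maps each data sample to a musical pitch, so that a stream of numbers becomes a stream of notes.

```python
def sonify(samples, base_freq=220.0, semitone_range=24):
    """Map a list of data samples onto note frequencies in Hz.

    A toy sonification: each sample is scaled into a range of
    semitones above a base pitch, using equal temperament.
    """
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    notes = []
    for value in samples:
        # Scale the sample into 0..semitone_range semitones above base_freq.
        semitones = round((value - lo) / span * semitone_range)
        notes.append(base_freq * 2 ** (semitones / 12))
    return notes

# A few made-up "ice flow" readings become a rising-and-falling melody.
melody = sonify([0.1, 0.5, 0.9, 0.4])
```

A real sonification would also map other parts of the data to duration, loudness, or timbre; this sketch stops at pitch to show the principle.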
Nichols, who received the 2003–2004 Award from the American Society of Composers, Authors and Publishers for his original scores, will also perform at the festival with a piece he composed for a dance by the Montana Transport Co. last November. The piece is called “Posture” and uses a process called granularization, which “takes recorded sounds and chops them up into really fine particles and then reassembles them into different sounds.
“And then,” Nichols explains, “from those sounds you can assemble completely different sounds that don’t have any kind of connection to real world sounds.”
“Posture” is a fascinating orchestra of effervescent sounds that fizzle in your ears. They are familiar—violin sounds, vocalizations, tapping on bowls full of water, crinkling tin foil, jingling keys—but they have been mutated in such a way that they become almost unrecognizable.
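The chopping-and-reassembling process Nichols describes can be sketched programmatically. The function below is a hypothetical toy version (not the actual process behind “Posture”): it slices an audio signal into fixed-size grains and stitches them back together in a new order, which is the core move of granular synthesis.

```python
import random

def granularize(signal, grain_size, seed=0):
    """Chop a signal into fixed-size grains and reassemble them in a new order.

    A toy granular-synthesis sketch: real implementations also overlap,
    reverse, stretch, or repitch the grains.
    """
    # Slice the signal into consecutive grains of grain_size samples each.
    grains = [signal[i:i + grain_size] for i in range(0, len(signal), grain_size)]
    rng = random.Random(seed)  # seeded so the result is reproducible
    rng.shuffle(grains)        # reorder the grains
    out = []
    for grain in grains:
        out.extend(grain)
    return out

# A short "recording" (here just a ramp of sample values) comes back
# with the same material in a scrambled order.
scrambled = granularize(list(range(8)), grain_size=2)
```

With grains only a few milliseconds long, the source material (a violin note, jingling keys) stays audible in texture while becoming, as with “Posture,” almost unrecognizable.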
Matthew Burtner, the festival’s guest composer and performer, is a professor of composition and computer music at the University of Virginia, and associate director of the Virginia Center for Computer Music at UVA. An internationally renowned “sound artist,” Burtner will perform two pieces, one called “S-trance-S” and the other “S-morph-S.” Both are composed for the Metasaxophone, a “retrofitted” saxophone of Burtner’s own invention.
“The whole idea is to use hybrid musical instruments that use technology to redistribute acoustics,” Burtner says, adding that this wouldn’t be possible in real world acoustics. Along with his performance, Burtner will also give a lecture titled “Disembodying the Physical/Embodying the Virtual.”
The purpose of computer music, Nichols says, is not to replace acoustic instruments or acoustic music. Rather, “The purpose of computer music is to provide an additional palette of possibilities, an additional palette of sounds, and ways of interacting with sound.”
The first Mountain Computer Music Festival takes place Tues., Sept. 7, at 7:30 p.m. at the Phyllis Washington Amphitheater (at the base of the M trail) or, in case of rain, in the UM Music Recital Hall. A donation of $3 for students and $5 for adults is suggested. The lecture by guest performer Matthew Burtner, “Disembodying the Physical/Embodying the Virtual: Computer Music Composition, Instrument Design and Performance,” will take place at 1 p.m. in the department of music.