The Sounds of Science
Cellist, composer and computer music professor Margaret Schedel is helping researchers experience their data through a different channel
Margaret Schedel has many sides, and they all involve the creation of sound.
On one side, she's an assistant professor in the Department of Music, College of Arts and Sciences, at Stony Brook University who feels at home teaching analysis of 20th-century music in a classroom just as much as she does teaching digital arts to 6,500 students around the world in SUNY's first Massive Open Online Course (MOOC).
On another side, Schedel is a highly inventive cellist and composer whose interactive media opera, A King Listens, premiered at the Cincinnati Contemporary Arts Center while she was still working her way toward a DMA in music composition at the University of Cincinnati College-Conservatory of Music. Since that time, her works have been performed throughout the United States and abroad.
Then there is her distinctly technical/analytical side — the one that embraces computers and math.
“I’m a musician first, but I think my medium of expression is enhanced by the computer,” she says. “For example, I have a bow that has sensors on it, so I can play my cello normally, or much more interesting to me, I can use my gestures and manipulate the sound. I also like math, so in my brain I make all sorts of connections.”
One of those connections led her to the Center for Functional Nanomaterials at Brookhaven National Laboratory (BNL), where she has been sonifying data with staff scientist Kevin Yager.
Yager, whose research focuses on the use of scattering methods to measure nanostructures, is co-manager of the X9 beamline at BNL’s National Synchrotron Light Source. He also happens to be married to Schedel. She had seen some of the images that come out of his X-ray scattering work, so one day she asked him to explain the math behind it.
It turns out that in his research, Yager used the Fast Fourier Transform (FFT), an algorithm already well known to Schedel from its applications in audio. “We use FFT because it enables you to split the pitch portion of the sound from the timing information,” she says.
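The idea Schedel describes can be sketched in a few lines: the FFT converts a run of time-domain samples into frequency bins, separating "which pitches are present" from "when they occur." The 440 Hz test tone and sample rate below are illustrative assumptions, not anything from her or Yager's actual work.

```python
import numpy as np

# Build a one-second test tone at 440 Hz (A4) -- an illustrative signal.
sample_rate = 8000                       # samples per second (assumed)
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440 * t)

# The FFT turns the time-domain samples into frequency bins,
# splitting the "pitch" content out from the timing information.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The strongest bin recovers the tone's pitch.
dominant = freqs[np.argmax(spectrum)]
print(dominant)  # → 440.0
```

Because the signal is exactly one second long, each FFT bin is 1 Hz wide and the tone lands squarely in a single bin.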
Once Schedel realized that their fields had FFT in common, she began to connect the dots. She convinced Yager that she could enhance his research by giving him the opportunity to hear the vast amount of data from the beamline instead of just seeing the images.
Schedel essentially mapped pitch to location and volume to brightness, and then played a sound from an image. Yager could hear that different materials made different sounds but was not convinced that this was useful, she says, until they got to one that glitched and made a very different sound.
“You can miss seeing the glitch on the screen because you’re focused on other things — you’re on a computer, you’re mixing chemicals,” Schedel says. “But if the glitch made a sound, you wouldn’t have to be paying attention to the picture. You could just be listening and thinking, ‘That sounded fine. That sounded fine. Oh, that sounded bad — I’d better go check the equipment.’”
Schedel and Yager documented their collaboration in a joint paper titled “Hearing Nano-Structures: A Case Study in Timbral Sonification,” which Schedel presented at the International Conference on Auditory Display (ICAD) in June 2012 in Atlanta. The paper was so well received that Cary, N.C.-based analytics software developer SAS Institute invited Schedel to give a virtual presentation to a gathering of its own.
“As we enter this era of big data, people are trying to figure out different ways to experience their data,” says Schedel. “Sonification is becoming a new buzzword.”
Now Schedel is using the knowledge she gained from her collaboration with Yager at BNL to help a diverse range of departments across Stony Brook experience their own data through sound.
She is exploring the possibility of sonifying patients’ health records for the new Department of Biomedical Informatics, and is working with the Department of Physical Therapy to sonify the movements of patients with multiple sclerosis. She is also involved in bringing sound to the Department of Computer Science’s Reality Deck.
“You can imagine the Reality Deck, with 360 degrees of images,” she says. “You can’t see what’s behind you, but if you’re looking for a specific thing — like a quasar or a pulsar — and the galaxy image is all around you, we can put a little beep-beep where that quasar or pulsar is, and then when you are looking for it in the big field, you can hear it and then locate it.”
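Placing a "beep" at a location, as Schedel describes, comes down to positioning a sound in the listener's audio field. A minimal two-channel sketch using constant-power panning is below; the Reality Deck itself would presumably drive many more speakers than the stereo pair assumed here, and the pan value, frequency, and duration are illustrative.

```python
import numpy as np

sample_rate = 8000  # assumed audio sample rate

def beep_at(pan, freq=880.0, dur=0.1):
    """Place a short beep in the stereo field.

    pan is in [-1, 1]: -1 = far left, +1 = far right.
    Uses constant-power panning so loudness stays even across positions.
    """
    t = np.arange(int(sample_rate * dur)) / sample_rate
    beep = np.sin(2 * np.pi * freq * t)
    angle = (pan + 1) * np.pi / 4          # map [-1, 1] → [0, π/2]
    left = np.cos(angle) * beep
    right = np.sin(angle) * beep
    return np.stack([left, right], axis=1)  # (samples, channels)

stereo = beep_at(0.8)                       # object mostly to the right
print(stereo.shape)  # → (800, 2)
```

A listener scanning a wraparound display would hear this beep coming from the right and turn toward it, which is the localization effect Schedel describes.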
Her level of involvement in sonification projects varies, says Schedel. “It can be as simple as somebody else finding the pulsar at a coordinate, and I just use that coordinate to make a sound file beep. Or it can be as complicated as what I’m doing with Kevin, where I actually wrote the algorithm and we are directly sonifying that image. As tools get faster and data gets bigger, I hope people will start sharing their algorithms so we can learn from one another.”
Schedel joined the Stony Brook faculty in 2007 as a core member of the Consortium for Digital Arts, Culture and Technology, whose basic mandate, she says, is to work across disciplines in the digital arts. She takes that mandate seriously, and her track record bears that out.
“Stony Brook has strong art, theater and music departments, and I have amazing colleagues here,” she says. “But I also have amazing colleagues in the sciences. I like to joke that I am the most cross-disciplinary person you’ll ever meet.”
By Patricia Sarica