Data Sonification: An emerging opportunity for graduate music/sound design departments to expand research in an art and science collaboration
Scot Gresham-Lancaster
Senior Lecturer in Sound Design
Arts and Technology
University of Texas at Dallas
Abstract: As data sets grow ever larger, they become more difficult to investigate for unique patterns and anomalies. Most tools for this sort of investigation are visually based. Sonification offers an additional tool that can enhance the ability of researchers to observe new relationships in data sets. A synthesis of sight and sound increases the likelihood of exposing features and interconnections hidden in more standard “visual only” modes of investigation. The creative application of musical understanding of acoustics, physical modeling synthesis, harmony, and even musical style enables sonification to become part of the curriculum for graduate-level study, not only in research labs but in music conservatories and schools worldwide. The bridge between musical practice and sonification is just beginning to be realized, but the potential reward is great. This white paper lays out some basic premises that music and sound art departments should consider when introducing sonification as a tool set for scientific discovery. The aim is to encourage new resources that leverage the rich history of music and sound design to create new tools and paradigms for the expanded investigation of ever-growing and varied data sets across a wide range of disciplines.
A common definition of sonification, from 1994, is: “Sonification, a form of auditory display, is the use of non-speech audio to convey information or perceptualize data.” This reference is commonly used because the word itself has not yet made it into the Oxford English Dictionary, Larousse, Zingarelli, or any number of other dictionaries worldwide, a testament to how new this field is. While graphs and charts have been with us since Gutenberg and before, it was not until computers provided ready access to databases and the means to convert numbers into sound that sonification became realistic as a practice. To create a sonification, one writes a computer program that takes a sequence of numbers and produces a scaled, converted output of those numbers as some sort of sound. This sound could be, for example, musical notes where the pitch of each note represents a value. In truth, there is no standard yet for this conversion; it could be any combination of transformations of correlated sets of numbers into a set of sounds and the acoustic parameters related to those sounds.
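As a concrete and deliberately simplified illustration of such a conversion, the sketch below maps each number in a series to the frequency of an equal-tempered musical pitch. The ranges and the twelve-tone mapping are one arbitrary scheme among many, not a standard.

```python
# One hypothetical data-to-pitch scheme: linearly rescale each value into a
# MIDI note range, then convert the MIDI note number to a frequency in Hz
# using the equal-tempered formula (A4 = MIDI 69 = 440 Hz).
def value_to_frequency(value, lo, hi, low_midi=48, high_midi=84):
    """Map value in [lo, hi] onto [low_midi, high_midi], return Hz."""
    t = (value - lo) / (hi - lo)                 # normalize to 0..1
    midi = low_midi + t * (high_midi - low_midi)
    return 440.0 * 2.0 ** ((midi - 69) / 12.0)

# A toy data series: larger values sound as higher pitches.
data = [3.1, 7.4, 2.0, 9.9]
freqs = [value_to_frequency(v, min(data), max(data)) for v in data]
```

Even in this tiny sketch, the designer has already made several non-obvious choices (linear scaling, a three-octave range, pitch as the carrier of meaning), which is exactly the kind of decision space the rest of this paper addresses.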
The very fact that there are no standards and few use-case studies opens this field to an interesting range of new investigation. To even begin to formulate a sonification “scheme” of one sort or another requires a convergence of many disciplines. Ideally, the individuals involved should include expert computer programmers, acousticians, psychoacousticians, composers, and sound engineers, and that does not count the individuals needed to design use-case studies and testing regimens. Such an eclectic combination of skill sets highlights both the challenge and the promise of this area of research. It is truly a full STEAM (science, technology, engineering, art, mathematics) undertaking, and the interesting part is that while the scientific techniques for converting data into sound are easily conceivable, what is less clear is the artistic, or at least crafted, side of making sonifications that can stand the test of sustained listening. Anyone doing a general survey of the various efforts worldwide will find far too many direct conversions or oversimplified re-mappings of data into repetitive, grating, or even unlistenable results. The visual equivalent would be a badly rendered graph or pie chart: poor color choices, too small, no differentiation between the data being contrasted. Many aesthetic challenges shape the choices made by the composer/sound designer working on the final realization of the conversion of a data flow to sound.
Electroacoustic music, the general set of techniques used to make most sonifications, faces challenges that are themselves in a transitory and unsettled state. The famous aesthetic theorist and philosopher Theodor Adorno was among the first to articulate the problem and promise of electronic music production: “Infatuation with the material along with blindness toward what is made out of it resulting from the fiction that the material speaks for itself, from an effectively primitive symbolism. To be sure, the material does speak but only in those constellations in which the artwork positions it.” Here Adorno is concerned with the fascination of early electronic music composers with the “material,” the new sounds themselves, rather than the context or form in which those sounds are placed.
The same problem persists in the area of sonification. Too often the representation of the data set is explained in a text that precedes the act of listening to the sonification itself. The skill and craft, one would hope even the art, of future sonification will transcend this boundary of explanation and create realizations that are self-explanatory. The form or framework of the sonification must therefore be informed by all the disciplines outlined above; otherwise the content of the sound is amorphous and without internal structure. Creating a fully realized sonification demands that all the aspects of science, technology, and art related to the specific data set being sonified come together in tight collaboration and cross-communication.
This sort of fully realized sonification cannot be successfully managed without a well-defined and rigorously followed workflow that allows each of the participants, across all the disciplines, to bring their particular understanding and contribution to the overall process. To be clear, this is a new regimen, just now being formulated after years of research in the area and many heuristic approaches to the goal: a standardized system for functional but listenable sonifications across a broad range of disciplines and potential collaborations.
Illustrated below is the basic workflow diagram for the three interdependent activities required to fully realize a sonification. It takes into account the needs of the specific discipline being examined while giving each of the varied skill sets an area of focus within the process, creating the most efficient interactivity between them. By clearly defining the expectations for each of the collaborators, the likelihood of meeting the full potential of the realization is maximized.
The specific discipline puts forward a data set, or enables access to a specific real-time data flow, that the researcher wants to examine. This requires an interview process by the sonification team to more fully understand the needs of the researcher and the very specific questions being investigated. For example: a geoscientist has a volumetric data set representing a transitional area of geological significance. This can be rendered visually as 2-D slices or in a 3-D headset, but sonically the area can be represented as a sound mass in which specific sounds represent specific rock types localized in 3-D acoustic space. The geoscientist in this case would be tasked with supplying access to the volumetric data representing the geological layers, with coordinates in three dimensions relative to the specific site in question. In many cases this information can be provided via Excel sheets exported as CSV (comma-separated value) tables. In other cases, with real-time data streams, the information can take the form of dynamic XML or JSON data flows over the Internet in UDP or TCP/IP packets. All these technical details need to be communicated and coordinated, and access to the information must be provided. This requires the assistance of computer science expertise as well.
Phase One includes these specific collaborators
1 Researcher in Specific Science under examination (GeoScience in the example above)
2 Project Sonifier (Composer-Sound Designer)
3 Computer Science specialist (data transfer and message protocol formatting)
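A minimal sketch of the Phase One data hand-off, assuming a hypothetical CSV export with x, y, z, and rock_type columns (the column names here are illustrative placeholders, not a standard):

```python
import csv
import io

def load_volumetric_csv(fileobj):
    """Parse a CSV export into (x, y, z, rock_type) records."""
    reader = csv.DictReader(fileobj)
    return [(float(row["x"]), float(row["y"]), float(row["z"]),
             row["rock_type"]) for row in reader]

# An inline CSV standing in for the researcher's exported spreadsheet:
sample = io.StringIO("x,y,z,rock_type\n0,0,-10,shale\n0,0,-20,sandstone\n")
records = load_volumetric_csv(sample)
```

Once parsed into a uniform record shape like this, the same downstream mapping-to-sound machinery can be reused whether the source is a spreadsheet export or a live JSON stream.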
The project coordinator, or sonifier, must create a user interface that allows the researcher to manipulate and interact with the data once it is represented as a sound stream. This requires placing the sound environment in one of two possible domains: time-based (sequential), where the information arrives as a time-series data flow and changes in sound reflect the dynamic shift in the information over scaled time; or static, where the data set is represented as an object that can be examined, manipulated, and navigated. In collaboration, the researcher and the sonifier must agree on which of these two representations, or both, the researcher wants. Then there needs to be a clarification regarding scale, range, and in some cases preferred musical or timbral style. Once the options have been clarified, the sonifier coordinates with a designer or engineer to create the user interface. This can take the form of specialized hardware designed for the specific project, or a web-based set of buttons, knobs, and value readouts that communicate to the researcher the current state of the “sonification engine.” At this point the specific output of the system must be codified into specific acoustic/musical parameters (location, frequency, amplitude, timbre, etc.), and those parameters need to be expressed within the OSC (Open Sound Control) protocol for direct communication with the Phase Three synthesis functionality.
Phase Two includes these specific collaborators
1 Project Sonifier (Composer-Sound Designer)
2 Researcher in Specific Science under examination (GeoScience in the example above)
3 Design Engineer (either hardware or software CHI expert)
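The codification step of Phase Two can be sketched as a simple mapping table agreed on by the researcher and the sonifier: each data field is assigned an OSC address plus input and output ranges. All field names, addresses, and ranges below are hypothetical placeholders for whatever the collaborators actually negotiate.

```python
# Hypothetical Phase Two "codification": data field -> (OSC address,
# expected input range, acoustic output range). These choices would be
# negotiated with the researcher, not fixed in advance.
SPEC = {
    "depth_m":  ("/z",    (0.0, 5000.0), (0.0, 1.0)),      # spatial height
    "density":  ("/amp",  (1.5, 3.5),    (0.0, 1.0)),      # loudness
    "hardness": ("/freq", (0.0, 10.0),   (110.0, 880.0)),  # pitch in Hz
}

def scale(value, in_range, out_range):
    """Linearly rescale value from in_range to out_range, clamped."""
    (i0, i1), (o0, o1) = in_range, out_range
    t = max(0.0, min(1.0, (value - i0) / (i1 - i0)))
    return o0 + t * (o1 - o0)

def codify(sample):
    """Turn one data sample (a dict) into (OSC address, value) pairs."""
    return [(addr, scale(sample[field], i, o))
            for field, (addr, i, o) in SPEC.items() if field in sample]
```

For example, codify({"depth_m": 2500.0}) yields [("/z", 0.5)]. Keeping the whole mapping in one declarative table is also what makes the design fairly self-documenting for the Phase Three collaborators.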
Once the OSC parameters have been set, they have the distinct advantage of being fairly self-documenting. A typical OSC message may look something like this: /freq 440.032. This is fairly clearly requesting an oscillator to sound at a frequency of 440.032 Hz. Locational information would be expressed in terms of Cartesian coordinates /x /y /z, amplitude as /amp, or whatever was decided on in the design of Phase Two. The real craft and subtlety of this portion of the work is to take these data flows and, working in interaction with the recently codified user interface, create a palatable if not masterful new acoustic environment that directly reflects the data under investigation. It is at this point that the real opportunity arises to fully engage graduate-level student sound designers/composers in creating and pushing forward this new discipline. The opportunity expands as an area where psychoacousticians as well as acousticians can become involved in refining and redefining the sound output formats and interface interactions to make a specific, functional, and quite possibly reusable new resource for each participating scientific discipline. At this point user testing will yield results regarding the efficacy of the specific sound design approach.
Phase Three includes these specific collaborators
1 Project Sonifier (Composer-Sound Designer)
2 Psycho-Acousticians (Music cognition specialists)
3 Human Interface Design Evaluators
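For illustration, an OSC message such as /freq 440.032 can be packed by hand into the binary form defined by the OSC 1.0 specification: a null-padded address string, a null-padded type-tag string, then big-endian float32 arguments. In practice a library such as python-osc would normally handle this, but the sketch shows how little machinery the protocol requires.

```python
import struct

def _osc_string(s):
    """Null-terminate an OSC string and pad it to a multiple of 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Pack an OSC message whose arguments are all float32 values."""
    type_tags = "," + "f" * len(args)
    payload = b"".join(struct.pack(">f", v) for v in args)
    return _osc_string(address) + _osc_string(type_tags) + payload

# The /freq example from the text, ready to send over UDP to a synthesis
# engine (e.g. SuperCollider, which conventionally listens on port 57120):
msg = osc_message("/freq", 440.032)
```

Because every message carries its own human-readable address, a logged stream of OSC traffic doubles as documentation of the sonification design, which is the self-documenting property noted above.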
The bottom of the diagram shows “feedback and redesign” arrows, which will obviously be part of the development process for each of these lines of tool creation. Keep in mind that at the end of the design/redesign process, an entirely functional and potentially widely reusable new tool will have been realized for each specific discipline that goes through the process. Wholly new ways of investigating scientific data sets will emerge, and the potential synergy of this line of investigation with the already well-established visual modes of research is very promising.
As technological innovation reframes our consideration of the tasks before us, here is yet another opportunity to reframe an aspect of graduate studies in music and sound design. By implementing this sort of regimen within the curriculum design for graduate study in those fields, sonification can become an integrated part of a dynamic new way of understanding the place of sound in our new media culture and can foster collaboration across all the disciplines outlined above. It must be remembered that the tools needed even to conceive of this course of study have only become widely available in the last decade or so, so it is understandable that there has not been more defining research in this area.
For a true and usable new version of sonification to emerge, it will take the sort of cross-disciplinary collaboration outlined here. Each participant must understand her or his specific discipline and problem well enough to articulate the required design specification. This is what makes this approach a promising tool for fostering collaboration in science, technology, engineering, art, and mathematics. It is across these disciplines that students will discover the new resources and potential of the act of collaborating, as well as take part in creating a whole new class of tools that may help researchers in all those areas push the limits of research and understanding.