
Presented at CHI 2012, Touché is a capacitive system for pervasive, continuous sensing. Among other amazing capabilities, it can accurately sense gestures a user makes on his own body. “It is conceivable that one day mobile devices could have no screens or buttons, and rely exclusively on the body as the input surface.” Touché.

Noticing that many of the same sensors, silicon, and batteries used in smartphones are being used to create smarter artificial limbs, Fast Company draws the conclusion that the market for smartphones is driving technology development useful for bionics. While interesting enough, the article doesn’t continue to the next logical and far more interesting possibility: that phones themselves are becoming parts of our bodies. To what extent are smartphones already bionic organs, and how could we tell if they were? I’m actively researching design in this area – stay tuned for more about the body-incorporated phone.

A study provides evidence that talking into a person’s right ear can affect behavior more effectively than talking into the left.

One of the best known asymmetries in humans is the right ear dominance for listening to verbal stimuli, which is believed to reflect the brain’s left hemisphere superiority for processing verbal information.

I heavily prefer my left ear for phone calls. So much so that I have trouble understanding people on the phone when I use my right ear. Should I be concerned that my brain seems to be inverted?

Read on and it becomes clear that going beyond perceptual psychology, the scientists are terrifically shrewd:

Tommasi and Marzoli’s three studies specifically observed ear preference during social interactions in noisy night club environments. In the first study, 286 clubbers were observed while they were talking, with loud music in the background. In total, 72 percent of interactions occurred on the right side of the listener. These results are consistent with the right ear preference found in both laboratory studies and questionnaires and they demonstrate that the side bias is spontaneously displayed outside the laboratory.

In the second study, the researchers approached 160 clubbers and mumbled an inaudible, meaningless utterance, then waited for the subjects to turn their head and offer either their left or their right ear. They then asked them for a cigarette. Overall, 58 percent offered their right ear for listening and 42 percent their left. Only women showed a consistent right-ear preference. In this study, there was no link between the number of cigarettes obtained and the ear receiving the request.

In the third study, the researchers intentionally addressed 176 clubbers in either their right or their left ear when asking for a cigarette. They obtained significantly more cigarettes when they spoke to the clubbers’ right ear compared with their left.

I’m picturing the scientists using their grant money to pay cover at dance clubs and try to obtain as many cigarettes as possible – carefully collecting, then smoking, their data – with the added bonus that their experiment happens to require striking up conversation with clubbers of the opposite sex who are dancing alone. One assumes that, if the test subject happened to be attractive, once the cigarette was obtained (or not) the subject was invited out onto the terrace so the scientist could explain the experiment and his interesting line of work. Well played!

Another MRI study, this time investigating how we learn parts of speech:

The test consisted of working out the meaning of a new term based on the context provided in two sentences. For example, in the phrase “The girl got a jat for Christmas” and “The best man was so nervous he forgot the jat,” the noun jat means “ring.” Similarly, with “The student is nising noodles for breakfast” and “The man nised a delicious meal for her” the hidden verb is “cook.”

“This task simulates, at an experimental level, how we acquire part of our vocabulary over the course of our lives, by discovering the meaning of new words in written contexts,” explains Rodríguez-Fornells. “This kind of vocabulary acquisition based on verbal contexts is one of the most important mechanisms for learning new words during childhood and later as adults, because we are constantly learning new terms.”

The participants had to learn 80 new nouns and 80 new verbs. By doing this, the brain imaging showed that new nouns primarily activate the left fusiform gyrus (the underside of the temporal lobe associated with visual and object processing), while the new verbs activated part of the left posterior medial temporal gyrus (associated with semantic and conceptual aspects) and the left inferior frontal gyrus (involved in processing grammar).

This last bit was unexpected, at first. I would have guessed that verbs would be learned in regions of the brain associated with motor action. But according to this study, verbs seem to be learned only as grammatical concepts. In other words, knowledge of what it means to run is quite different from knowing how to run. Which makes sense if verb meaning is accessed by representational memory rather than declarative memory.

Researchers at the University of Tampere in Finland found that,

Interfaces that vibrate soon after we click a virtual button (on the order of tens of milliseconds) and whose vibrations have short durations are preferred. This combination simulates a button with a “light touch” – one that depresses right after we touch it and offers little resistance.

Users also liked virtual buttons that vibrated after a longer delay and then for a longer subsequent duration. These buttons behaved like ones that require more force to depress.

This is very interesting. When we think of multimodal feedback needing to make cognitive sense, synchronization first comes to mind. But there are many more synesthesias in our experience that can only be uncovered through careful reflection. To make an interface feel real, we must first examine reality.
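The two findings above can be sketched as a pair of feedback profiles. The timing values here are illustrative guesses, not numbers from the Tampere study; the point is the ordering, not the exact milliseconds:

```python
# Toy sketch of two vibrotactile button profiles: onset delay and burst
# duration together stand in for perceived button resistance.
from dataclasses import dataclass

@dataclass
class ButtonFeedback:
    delay_ms: float     # time between the virtual press and vibration onset
    duration_ms: float  # length of the vibration burst

# A "light touch" button: feedback arrives almost immediately and is over
# quickly, like a key that depresses with little resistance.
light_button = ButtonFeedback(delay_ms=15, duration_ms=20)

# A "stiff" button: feedback arrives later and lasts longer, like a key
# that needs more force to bottom out.
stiff_button = ButtonFeedback(delay_ms=60, duration_ms=80)

def perceived_stiffness(fb: ButtonFeedback) -> float:
    """Crude proxy: longer delay plus longer duration reads as a stiffer button."""
    return fb.delay_ms + fb.duration_ms
```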

Researchers at the Army Research Office developed a vibrating belt with eight mini actuators — “tactors” — that signify all the cardinal directions. The belt is hooked up to a GPS navigation system, a digital compass and an accelerometer, so the system knows which way a soldier is headed even if he’s lying on his side or on his back.

The tactors vibrate at 250 hertz, which equates to a gentle nudge around the middle. Researchers developed a sort of tactile Morse code to signify each direction, helping a soldier determine which way to go, New Scientist explains. A soldier moving in the right direction will feel the proper pattern across the front of his torso. A buzz from the front, side and back tactors means “halt,” a pulsating movement from back to front means “move out,” and so on.
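A minimal sketch of how such a belt might map a bearing to a tactor, assuming eight evenly spaced tactors with index 0 at the wearer's front; the actual Army Research Office encoding is surely more involved:

```python
# Hypothetical direction-to-tactor mapping for an eight-tactor belt.
def tactor_for_bearing(target_bearing_deg: float, body_heading_deg: float) -> int:
    """Return the index (0-7) of the tactor pointing toward the target.

    Tactor 0 sits at the wearer's front and indices increase clockwise,
    so each tactor covers a 45-degree slice of the horizon. Subtracting
    the body heading is what lets the cue stay world-referenced even as
    the soldier turns.
    """
    relative = (target_bearing_deg - body_heading_deg) % 360
    return round(relative / 45) % 8
```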

A t-shirt design by Derek Eads.

Recent research reveals some fun facts about aural-tactile synesthesia:

Both hearing and touch, the scientists pointed out, rely on nerves set atwitter by vibration. A cell phone set to vibrate can be sensed by the skin of the hand, and the phone’s ring tone generates sound waves — vibrations of air — that move the eardrum…

A vibration that has a higher or lower frequency than a sound… tends to skew pitch perception up or down. Sounds can also bias whether a vibration is perceived.

The ability of skin and ears to confuse each other also extends to volume… A car radio may sound louder to a driver than his passengers because of the shaking of the steering wheel. “As you make a vibration more intense, what people hear seems louder,” says Yau. Sound, on the other hand, doesn’t seem to change how intense vibrations feel.

Max Mathews, electronic music pioneer, has died.

Though computer music is at the edge of the avant-garde today, its roots go back to 1957, when Mathews wrote the first version of “Music,” a program that allowed an IBM 704 mainframe computer to play a 17-second composition. He quickly realized, as he put it in a 1963 article in Science, “There are no theoretical limits to the performance of the computer as a source of musical sounds.”

Rest in peace, Max.

UPDATE: I haven’t updated this blog in a while, and I realized after posting this that my previous post was about the 2010 Modulations concert. Max Mathews played at Modulations too, and that was the last time I saw him.

I finally got around to recording and mastering the set I played at the CCRMA Modulations show a few months back. Though I’ve been a drum and bass fan for many years, this year’s Modulations was the first time I’d mixed it for others. Hope you like it!

Modulations 2010
Drum & Bass | 40:00 | May 2010


Download (mp3, 82.7 MB)

1. Excision — System Check
2. Randomer — Synth Geek
3. Noisia — Deception
4. Bassnectar — Teleport Massive (Bassnectar Remix)
5. Moving Fusion, Shimon, Ant Miles — Underbelly
6. Brookes Brothers — Crackdown
7. The Ian Carey Project — Get Shaky (Matrix & Futurebound’s Nip & Tuck Mix)
8. Netsky — Eyes Closed
9. Camo & Krooked — Time Is Ticking Away feat. Shaz Sparks

Over the last few days this video has been quite the bombshell among my music-prone friends.

It’s called the Multi-Touch Light Table and it was created by East Bay-based artist/fidget-house DJ Gregory Kaufman. The video is beautifully put together, highlighting the importance of presentation when documenting new ideas.

I really like some of the interaction ideas presented in the video. Others, I’m not so sure about. But that’s all right: the significance of the MTLT is that it’s the first surface-based DJ tool that systematically accounts for the needs of an expert user.

Interestingly, even though it looks futuristic and expensive to us, interfaces like this will eventually be the most accessible artistic tools. Once multi-touch surfaces are ubiquitous, the easiest way to gain some capability will be to use inexpensive or open-source software. The physical interfaces created for DJing, such as Technics 1200s, are prosthetic objects (as are musical instruments), and will remain more expensive because mechanical contraptions always will be. Now, that isn’t to say that in the future our interfaces won’t evolve to become digital, networked, and multi-touch sensitive, or even that their physicality will be replaced with a digital haptic display. But one of the initial draws of the MTLT—the fact of its perfectly flat, clean interactive surface—seems exotic to us right now, and in the near future it will be the default.

Check out this flexible interface called impress. Flexible displays just look so organic and, well, impressive. One day these kinds of surface materials will become viable displays, and they’ll mark a milestone in touch computing.

It’s natural to stop dancing between songs. The beat changes, the sub-rhythms reorient themselves, a new hook is presented and a new statement is made. But stopping dancing between songs is undesirable. We wish to lose ourselves in as many consecutive moments as possible. The art of mixing music is to fulfill our desire to dance along to continuous excellent music, uninterrupted for many minutes (or, in the best case, many hours) at a time. (Even if we don’t explicitly move our bodies to the music, when we listen our minds are dancing; the same rules apply.)

I don’t remember what prompted me to take that note, but it was probably not that the mixing was especially smooth.

A tomato hailing from Capay, California.

LHCSound is a site where you can listen to sonified data from the Large Hadron Collider. Some thoughts:

  • That’s one untidy heap of a website. Is this how it feels to be inside the mind of a brilliant physicist?
  • The name “LHCSound” refers to “Csound”, a programming language for audio synthesis and music composition. But how many of their readers will make the connection?
  • If they are expecting their readers to know what Csound is, then their explanation of the process they used for sonification falls way short. I want to know the details of how they mapped their data to synthesis parameters.
  • What great sampling material this will make. I wonder how long before we hear electronic music incorporating these sounds.
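Since the site doesn't spell out its mapping, here is what a generic parameter-mapping sonification could look like. The physical quantities and ranges below are invented for illustration and have nothing to do with LHCSound's actual process:

```python
# Generic data-to-synthesis parameter mapping, the usual skeleton of a
# sonification: rescale each data dimension into a perceptual range.
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def sonify_event(energy_gev, transverse_momentum):
    """Map two hypothetical particle quantities to synthesis parameters."""
    return {
        # higher-energy events -> higher pitch (Hz, two octaves above A2)
        "frequency": map_range(energy_gev, 0, 7000, 110, 1760),
        # more transverse momentum -> louder (amplitude in 0.1..1.0)
        "amplitude": map_range(transverse_momentum, 0, 100, 0.1, 1.0),
    }
```

These are exactly the details I wish they published: which data dimensions feed which synthesis parameters, and with what scaling.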

The Immersive Pinball demo I created for Fortune’s Brainstorm:Tech conference was featured in a BBC special on haptics.

I keep watching the HTC Sense unveiling video from Mobile World Congress 2010. The content is pretty cool, but I’m more fascinated by the presentation itself. Chief marketing officer John Wang gives a simply electrifying performance. It almost feels like an Apple keynote.

The iFeel_IM haptic interface has been making rounds on the internet lately. I tried it at CHI 2010 a few weeks ago and liked it a lot. Affective (emotional haptic) interfaces are full of potential. IFeel_IM mashes together three separate innovations:

  • Touch feedback in several different places on the body: spine, tummy, waist.
  • Touch effects that are generated from emotional language.
  • Synchronization to visuals from Second Life.

All are very interesting. The spine haptics seemed a stretch to me, but the butterfly-in-the-tummy was surprisingly effective. The hug was good, but a bit sterile. Hug interfaces need nuance to bring them to the next level of realism.

The fact that the feedback is generated from the emotional language of another person seemed to be one of the major challenges—the software is built to extract emotionally-charged sentences using linguistic models. For example, if someone writes “I love you” to you, the haptic device on your tummy will react by creating a butterflies-like sensation. As an enaction devotee I would rather actuate a hug with a hug sensor. Something about the translation of words to haptics is difficult for me to accept. But it could certainly be a lot of fun in some scenarios!
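As a toy stand-in for those linguistic models, a keyword lookup shows the shape of the idea. The phrases and effect names are hypothetical; the real iFeel_IM pipeline is certainly more sophisticated than string matching:

```python
# Hypothetical mapping from emotionally charged phrases to haptic effects.
from typing import Optional

HAPTIC_EFFECTS = {
    "love": "tummy_butterflies",   # e.g. "I love you" -> butterfly sensation
    "hug": "waist_squeeze",
    "miss you": "spine_warmth",
}

def haptic_effect_for(message: str) -> Optional[str]:
    """Return the haptic effect for the first emotional phrase found, if any."""
    lowered = message.lower()
    for phrase, effect in HAPTIC_EFFECTS.items():
        if phrase in lowered:
            return effect
    return None
```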

I’ve re-recorded my techno mix Awake with significantly higher sound quality. So if you downloaded a copy be sure to replace it with the new file!


Awake
Techno | 46:01 | October 2009


Download (mp3, 92 MB)

1. District One (a.k.a. Bart Skils & Anton Pieete) — Dubcrystal
2. Saeed Younan — Kumbalha (Sergio Fernandez Remix)
3. Pete Grove — I Don’t Buy It
4. DBN — Asteroidz featuring Madita (D-Unity Remix)
5. Wehbba & Ryo Peres — El Masnou
6. Broombeck — The Clapper
7. Luca & Paul — Dinamicro (Karotte by Gregor Tresher Remix)
8. Martin Worner — Full Tilt
9. Joris Voorn — The Deep

I recently started using Eclipse on OS X and it was so unresponsive, it was almost unusable. Switching tabs was slow, switching perspectives was hella slow. I searched around the web for a solid hour for how to make it faster and finally found the solution. Maybe someone can use it.

My machine is running OS X 10.5, and I have 2 GB of RAM. (This is important because the solution requires messing with how Eclipse handles memory. If you have a different amount of RAM, these numbers may not work and you’ll need to fiddle with them.)

  • Save your work and quit Eclipse.
  • Open the Eclipse application package by right-clicking (or Control-clicking) on it and selecting “Show Package Contents.”
  • Navigate to Contents→MacOS and open “eclipse.ini” in your favorite text editor.
  • Edit the line that starts with “-XX:MaxPermSize” to say “-XX:MaxPermSize=128m”.
  • Before that line, add a line that says “-XX:PermSize=64m”.
  • Edit the line that starts with “-Xms” to say “-Xms40m”.
  • Edit the line that starts with “-Xmx” to say “-Xmx768m”.
  • Save & relaunch Eclipse.
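After those edits, the memory-related lines of eclipse.ini should look something like this (values for a 2 GB machine; scale them to your RAM):

```
-XX:PermSize=64m
-XX:MaxPermSize=128m
-Xms40m
-Xmx768m
```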

Worked like a charm for me.


Towards a dimension space for musical devices

David M. Birnbaum, Rebecca Fiebrink, Joseph Malloch, Marcelo M. Wanderley
Input Devices and Music Interaction Laboratory
McGill University
Montreal, Canada


While several researchers have grappled with the problem of comparing musical devices across performance, installation, and related contexts, no methodology yet exists for producing holistic, informative visualizations for these devices. Drawing on existing research in performance interaction, human-computer interaction, and design space analysis, the authors propose a dimension space representation that can be adapted for visually displaying musical devices. This paper illustrates one possible application of the dimension space to existing performance and interaction systems, revealing its usefulness both in exposing patterns across existing musical devices and in aiding the design of new ones.


Human-Computer Interaction, Design Space Analysis, New Interfaces for Musical Expression

1. Examining musical devices

Musical devices can take varied forms, including interactive installations, digital musical instruments, and augmented instruments. Trying to make sense of this wide variability, several researchers have proposed frameworks for classifying the various systems.

As early as 1985, Pennycook [15] offered a discussion of interface concepts and design issues. Pressing [18] proposed a set of fundamental design principles for computer-music interfaces. His exhaustive treatment of the topic laid the groundwork for further research on device characterization. Bongers [3] characterized musical interactions as belonging to one of three modes: Performer–System interaction, such as a performer playing an instrument, System–Audience interaction, such as those commonly found at interactive sound installations, and Performer–System–Audience interaction, which describes interactive systems in which both artist and audience interact in real time. Wanderley et al. [24] discussed two approaches to classification of musical devices, including instruments and installations: the technological perspective and the semantical perspective. Jordà [7] characterizes instruments in terms of music output complexity, control input complexity and performer freedom. Focusing on interactive installations, Winkler [25] discussed digital, physical, social, and personal factors that should be considered in their design. In a similar way, Blaine and Fels [1] studied design features of collaborative musical systems, with the particular goal of elucidating design issues endemic to systems for novice players. While these various approaches contribute insight to the problem of musical device classification, most did not provide a visual representation, which could facilitate device comparison and design. One exception is [24], which proposed a basic visualization employing two axes: type of user action and user expertise (Figure 1). Piringer [16] offers a more developed representation, as shown in Figure 2. However, both of these representations are limited to only a few dimensions. Furthermore, the configurations could be misread to imply orthogonality of the dimensions represented by the x- and y-axes.

Figure 1: The 2-dimensional representation of Wanderley et al. [24].

Figure 2: An example of a visual representation by Piringer [16]. “Expressivity” appears on the y-axis, with the categories very good, good, middle, and very little (top to bottom). “Immersion” appears on the x-axis, with the categories Touch-Controller, Extended-Range, Partially Immersive, and Fully Immersive, an adaptation from [11]. Each shape represents an instrument; the size indicates the amount of feedback and the color indicates feedback modality.

The goal of this text is to illustrate an efficient, visually-oriented approach to labeling, discussing, and evaluating a broad range of musical systems. Musical contexts where these systems could be of potential interest include:

  • Instrumental manipulation (e.g., [22]);
  • Control of pre-recorded sequences of events (see [10], [2]);
  • Control of sound diffusion in multi-channel sound environments;
  • Interaction in the context of (interactive) multimedia installations ([25], for example);
  • Interaction in dance-music systems [4];
  • Interaction in computer game systems.

Systems in this diverse set involve a range of demands on the user(s) that characterize the human-system interaction, and these demands can be studied with a focus on the underlying system designs. The HCI-driven approach chosen for this study is design space analysis.

2. Design space analysis

Initially proposed as a tool for software design in [8] and [9], design space analysis offers tools for examining a system in terms of a general framework of theoretical and practical design decisions. Through formal application of ‘QOC’ analysis composed of Questions about design, Options of how to address these questions, and Criteria regarding the suitability of the available options, one generates a visual representation of the design space of a system. In effect, this representation distinguishes the design rationale behind a system from the set of all possible design decisions. MacLean [8] outlines two goals of the design space analysis approach: to “develop a technique for representing design decisions which will, even on its own, support and augment design practice,” and to “use the framework as a vehicle for communicating and contextualising more analytic approaches to usersystem [sic] interaction into the practicalities of design.”

2.1 Dimension Space Analysis

Dimension space analysis is a related approach to system design that retains the goals of supporting design practice and facilitating communication [6]. Although dimension space analysis does not explicitly incorporate the QOC method of outlining the design space of a system, it preserves the notion of a system inhabiting a finite space within the space of all possible design options, and it sets up the dimensions of this space to correspond to various object properties.

The Dimension Space outlined by Graham [6] represents interactive systems on six axes. Each system component is plotted as a separate dimension space so that the system can be examined from several points of view. Some axes represent a continuum from one extreme to another, such as the Output Capacity axis, whose values range from low to high. Others contain only a few discrete points in logical progression, such as Attention Received, which contains the points high, peripheral, and none. The Role axis is the most eccentric, containing five unordered points.

A dimension plot is generated by placing points on each axis, and connecting them to form a two-dimensional shape. Plots are created from the perspective of a specific entity involved in the interaction. Systems and their components can then be compared rapidly by comparing their respective plots. The shape of an individual plot, however, contains no intended meaning. The flexibility of the dimension space approach lies in the ability to redefine the axes. In adapting this method, the choice of axes and their possible values is made with respect to the range of systems being considered, and the significant features to be used to distinguish among them. Plotting a system onto a Dimension Space is an exercise that forces the designer to examine each of its characteristics individually, and it exposes important issues that may arise during the design or use of a system.

We illustrate one possible adaptation of [6]’s multi-axis graph to classify and plot musical devices ranging from digital musical instruments to sound installations. For this exercise, we chose axes that would meaningfully display design differences among devices, and plotted each device only once, rather than creating multiple plots from different perspectives.

2.2 An Example Dimension Space

In adapting the dimension space to the analysis of musical devices, we explored several quantities and configurations of axes. It was subjectively determined that the functionality of the spaces was not affected in plots with as many as eight axes. As an example, Figure 3 shows a seven-axis configuration, labeled with representative ranges. Figure 4 shows plots of several devices, drawn from the areas of digital musical instruments and interactive installations incorporating sound and/or music. Each axis is described in detail below.

  • The Required Expertise axis represents the level of practice and familiarity with the system that a user or performer should possess in order to interact as intended with the system. It is a continuous axis ranging in value from low to high expertise.
  • The Musical Control axis specifies the level of control a user exerts over the resulting musical output of the system. The axis is not continuous; rather, following the characterization of [19], it contains three discrete points corresponding to control at the timbral level, the note level, and the level of a musical process.
  • The Feedback Modalities axis indicates the degree to which a system provides real-time feedback to a user. Typical feedback modes include visual, auditory, tactile, and kinesthetic [24].
  • The Degrees of Freedom axis indicates the number of input controls available to a user of a musical system. This axis is continuous, representing devices with few inputs at one extreme and those with many at the other extreme.
  • The Inter-actors axis represents the number of people involved in the musical interaction. Typically interactions with traditional musical instruments feature only one inter-actor, but some digital musical instruments and installations are designed as collaborative interfaces (see [5], [1]), and a large installation may involve hundreds of people interacting with the system at once [21].
  • The Distribution in Space axis represents the total physical area in which the interaction takes place, with values ranging from local to global distribution. Musical systems spanning several continents via the Internet, such as Global String, are highly distributed [20].
  • The Role of Sound axis uses Pressing’s [17] categories of sound roles in electronic media. The axis takes one of three main values: artistic/expressive, environmental, and informational.
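As a sketch of how such a plot can be generated, each axis value may be normalized to [0, 1] and placed on its own spoke, with the connected points forming the plot's polygon. The axis names follow the paper, but the numeric scores below for The Hands are rough illustrative guesses, not the paper's exact plot:

```python
# Compute the polygon vertices of a seven-axis dimension plot.
import math

AXES = ["Required Expertise", "Musical Control", "Feedback Modalities",
        "Degrees of Freedom", "Inter-actors", "Distribution in Space",
        "Role of Sound"]

def dimension_plot_vertices(scores):
    """Map normalized axis scores (0..1) to 2-D polygon vertices.

    Spoke 0 points straight up; spokes are spaced evenly clockwise,
    one per axis. Connecting the returned vertices in order draws
    the dimension plot.
    """
    n = len(scores)
    vertices = []
    for i, s in enumerate(scores):
        angle = math.pi / 2 - 2 * math.pi * i / n  # clockwise from the top
        vertices.append((s * math.cos(angle), s * math.sin(angle)))
    return vertices

# Illustrative scores for an expert, single-player, local instrument:
the_hands = [0.9, 0.9, 0.5, 0.6, 0.1, 0.1, 1.0]
polygon = dimension_plot_vertices(the_hands)
```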

Figure 3: The 7-axis dimension space

3. Trends in dimension plots

The plots of Michel Waisvisz’ The Hands (Figure 4(a)) and Todd Winkler’s installation Maybe… 1910 (Figure 4(h)) provide contrasting examples of the dimension space in use. The Hands requires a high amount of user expertise, allows timbral control of sound (depending on the mapping used), and has a moderate number of inputs and outputs. The number of inter-actors is low (one), the distribution in space is local, and the role of the produced sound is expressive. The installation Maybe… 1910 is very different: the required expertise and number of inputs are low, and only control of high-level musical processes (playback of sound files) is possible. The number of output modes is quite high (sights, sounds, textures, smells), as is the number of inter-actors. The distribution in space of the interaction, while still local, is larger than that of most instruments, and the role of sound is primarily the exploration of the installation environment.

(a) The Hands
(b) Hyper-Flute
(c) Theremin
(d) Tooka
(e) Block Jam
(f) Rhythm Tree
(g) Global String
(h) Maybe… 1910
Figure 4: Dimension space plots.

When comparing these plots, and those of other music devices, it became apparent that the grouping used caused the plots of instruments to shift to the right side of the graph, and plots of installations to shift to the left. Installations commonly involve more people at the point of interaction, with the expectation that they are non-experts. Also, installations are often more distributed in space than instruments, which are intended to offer more control and a high potential for expressivity, achieved by offering more degrees of freedom. Sequencing tools, games, and toys typically occupy a smaller but still significant portion of the right side of the graph.

4. Conclusions

We have demonstrated that a dimension space paradigm allows visual representation of digital musical instruments, sound installations, and other variants of music devices. These dimension spaces are useful for clarifying the process of device development, as each relevant characteristic is defined and isolated. Furthermore, we found that the seven-axis dimension space resulted in visible trends between plots of related devices, with instrument-like devices tending to form one distinct shape and installations forming another shape. These trends can be used to present a geometric formulation of the relationships among existing systems, of benefit to device characterization and design.

Our future work in this direction might include further refinement of the system of axes, including changing the number of axes, or their definitions. Furthermore, a major problem remains insofar as the current plots are based partly on a subjective assessment of the devices. This assessment should be verified with empirical measurements from user tests [23]. Others who wish to employ dimension space analysis can adapt or change the axes as needed, though in the future a standard set of axes more universal in appeal may emerge.

5. References

[1] Blaine, T., Fels, S. Contexts of collaborative musical experiences. In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 129–134. Montreal, Canada, 2003.

[2] Boie, R., Mathews, M. The radio drum as a synthesizer controller. In Proceedings of the International Computer Music Conference, 1989.

[3] Bongers, B. Physical interaction in the electronic arts: Interaction theory and interfacing techniques for real-time performance. In Wanderley, M. M. and Battier, M., editors, Trends in Gestural Control of Music. Ircam, Centre Pompidou, France, 2000.

[4] Camurri, A. Interactive dance/music systems. In Proceedings of the International Computer Music Conference, pp. 245–252, 1995.

[5] Fels, S., Vogt, F. Tooka: Explorations of two person instruments. In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 129–134. Dublin, Ireland, 2002.

[6] Graham, T. C. Nicholas, Watts, Leon A., Calvary, Gaëlle, Coutaz, Joëlle, Dubois, Emmanuel, Nigay, Laurence. A dimension space for the design of interactive systems within their physical environments. In Proceedings of the Conference on Designing Interactive Systems, pp. 406–416. ACM Press, 2000.

[7] Jordà, S. FMOL: Toward user-friendly, sophisticated new musical instruments. Computer Music Journal, 26(3):23–39, 2002.

[8] MacLean, A., Bellotti, V., Shum, S. Developing the design space with design space analysis. In P. F. Byerley, P. J. Barnard, and J. May, editors, Computers, Communication and Usability: design issues, research and methods for integrated services, pp. 197–219. North Holland Series in Telecommunication, Elsevier, Amsterdam, 1993.

[9] MacLean, A., McKerlie, D. Design space analysis and use-representations. Technical Report EPC-1995-102, Rank Xerox Research Centre, Cambridge, September 1994.

[10] Mathews, M. The conductor program and mechanical baton. In M. Mathews and J. Pierce, editors, Current directions in computer music research, pp. 263–281. MIT Press, Cambridge, Massachusetts, 1989.

[11] Mulder, A. Design of virtual three-dimensional instruments for sound control. PhD thesis, Simon Fraser University, 1998.

[12] Newton-Dunn, H., Nakano, H., Gibson, J. Block Jam: A tangible interface for interactive music. In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 170–177. Montreal, Canada, 2003.

[13] Palacio-Quintin, C. The Hyper-Flute. In Proceedings of the International Conference on New Interfaces for Musical Expression, pp. 206–207. Montreal, Canada, 2003.

[14] Paradiso, J. The Brain Opera technology: New instruments and gestural sensors for musical interaction and performance. Journal of New Music Research, 28(2):130–149, 1999.

[15] Pennycook, B. W. Computer-music interfaces: A survey. Computing Surveys, 17(2):267–289, 1985.

[16] Piringer, J. Elektronische Musik und Interaktivität: Prinzipien, Konzepte, Anwendungen. PhD thesis, Institut für Gestaltungs- und Wirkungsforschung der Technischen Universität Wien, October 2001.

[17] Pressing, J. Some perspectives on performed sound and music in virtual environments. Presence, 6:1–22, 1997.

[18] Pressing, J. Cybernetic issues in interactive performance systems. Computer Music Journal, 14(1):12–25, 1990.

[19] Schloss, W. A. Recent advances in the coupling of the language Max with the Mathews/Boie Radio Drum. In Proceedings of the International Computer Music Conference, pp. 398–400, 1990.

[20] Tanaka, A., Bongers, B. Global string: A musical instrument for hybrid space. In M. Fleischmann and W. Strauss, editors, Proceedings: cast01 // Living in Mixed Realities, pp. 177–181. MARS Exploratory Media Lab, FhG – Institut Medienkommunikation, 2001.

[21] Ulyate, R., Bianciardi, D. The interactive dance club: Avoiding chaos in a multi participant environment. In Proceedings of the International Conference on New Interfaces for Musical Expression, Seattle, Washington, 2001.

[22] Waisvisz, M. The Hands, a set of remote MIDI-controllers. In Proceedings of the International Computer Music Conference, pp. 313–318, 1985.

[23] Wanderley, M. M., Orio, N. Evaluation of input devices for musical expression: Borrowing tools from HCI. Computer Music Journal, 26(3):62–76, 2002.

[24] Wanderley, M. M., Orio, N., Schnell, N. Towards an analysis of interaction in sound generating systems. In ISEA2000 Conference Proceedings, December 2000.

[25] Winkler, T. Audience participation and response in movement-sensing installations. In Proceedings of the International Computer Music Conference, December 2000.
