
Presented at CHI 2012, Touché is a capacitive system for pervasive, continuous sensing. Among other amazing capabilities, it can accurately sense gestures a user makes on his own body. “It is conceivable that one day mobile devices could have no screens or buttons, and rely exclusively on the body as the input surface.” Touché.

Noticing that many of the same sensors, silicon, and batteries used in smartphones are being used to create smarter artificial limbs, Fast Company concludes that the smartphone market is driving technology development useful for bionics. Interesting enough, but the article doesn’t continue to the next logical and far more interesting possibility: that phones themselves are becoming parts of our bodies. To what extent are smartphones already bionic organs, and how could we tell if they were? I’m actively researching design in this area – stay tuned for more about the body-incorporated phone.

A study provides evidence that talking into a person’s right ear can affect behavior more effectively than talking into the left.

One of the best known asymmetries in humans is the right ear dominance for listening to verbal stimuli, which is believed to reflect the brain’s left hemisphere superiority for processing verbal information.

I heavily prefer my left ear for phone calls. So much so that I have trouble understanding people on the phone when I use my right ear. Should I be concerned that my brain seems to be inverted?

Read on and it becomes clear that going beyond perceptual psychology, the scientists are terrifically shrewd:

Tommasi and Marzoli’s three studies specifically observed ear preference during social interactions in noisy night club environments. In the first study, 286 clubbers were observed while they were talking, with loud music in the background. In total, 72 percent of interactions occurred on the right side of the listener. These results are consistent with the right ear preference found in both laboratory studies and questionnaires and they demonstrate that the side bias is spontaneously displayed outside the laboratory.

In the second study, the researchers approached 160 clubbers and mumbled an inaudible, meaningless utterance and waited for the subjects to turn their head and offer either their left or their right ear. They then asked them for a cigarette. Overall, 58 percent offered their right ear for listening and 42 percent their left. Only women showed a consistent right-ear preference. In this study, there was no link between the number of cigarettes obtained and the ear receiving the request.

In the third study, the researchers intentionally addressed 176 clubbers in either their right or their left ear when asking for a cigarette. They obtained significantly more cigarettes when they spoke to the clubbers’ right ear compared with their left.

I’m picturing the scientists using their grant money to pay cover at dance clubs and try to obtain as many cigarettes as possible – carefully collecting, then smoking, their data – with the added bonus that their experiment happens to require striking up conversation with clubbers of the opposite sex who are dancing alone. One assumes that, if the test subject happened to be attractive, once the cigarette was obtained (or not) the subject was invited out onto the terrace so the scientist could explain the experiment and his interesting line of work. Well played!

Another MRI study, this time investigating how we learn parts of speech:

The test consisted of working out the meaning of a new term based on the context provided in two sentences. For example, in the phrase “The girl got a jat for Christmas” and “The best man was so nervous he forgot the jat,” the noun jat means “ring.” Similarly, with “The student is nising noodles for breakfast” and “The man nised a delicious meal for her” the hidden verb is “cook.”

“This task simulates, at an experimental level, how we acquire part of our vocabulary over the course of our lives, by discovering the meaning of new words in written contexts,” explains Rodríguez-Fornells. “This kind of vocabulary acquisition based on verbal contexts is one of the most important mechanisms for learning new words during childhood and later as adults, because we are constantly learning new terms.”

The participants had to learn 80 new nouns and 80 new verbs. As they did, brain imaging showed that new nouns primarily activated the left fusiform gyrus (the underside of the temporal lobe, associated with visual and object processing), while the new verbs activated part of the left posterior medial temporal gyrus (associated with semantic and conceptual aspects) and the left inferior frontal gyrus (involved in processing grammar).

This last bit was unexpected, at first. I would have guessed that verbs would be learned in regions of the brain associated with motor action. But according to this study, verbs seem to be learned only as grammatical concepts. In other words, knowledge of what it means to run is quite different than knowing how to run. Which makes sense if verb meaning is accessed by representational memory rather than declarative memory.

Researchers at the University of Tampere in Finland found that,

Interfaces that vibrate soon after we click a virtual button (on the order of tens of milliseconds) and whose vibrations have short durations are preferred. This combination simulates a button with a “light touch” – one that depresses right after we touch it and offers little resistance.

Users also liked virtual buttons that vibrated after a longer delay and then for a longer subsequent duration. These buttons behaved like ones that require more force to depress.

This is very interesting. When we think of multimodal feedback needing to make cognitive sense, synchronization first comes to mind. But there are many more synesthesias in our experience that can only be uncovered through careful reflection. To make an interface feel real, we must first examine reality.

Researchers at the Army Research Office developed a vibrating belt with eight mini actuators — “tactors” — that signify all the cardinal directions. The belt is hooked up to a GPS navigation system, a digital compass and an accelerometer, so the system knows which way a soldier is headed even if he’s lying on his side or on his back.

The tactors vibrate at 250 hertz, which equates to a gentle nudge around the middle. Researchers developed a sort of tactile Morse code to signify each direction, helping a soldier determine which way to go, New Scientist explains. A soldier moving in the right direction will feel the proper pattern across the front of his torso. A buzz from the front, side and back tactors means “halt,” a pulsating movement from back to front means “move out,” and so on.

A t-shirt design by Derek Eads.

Recent research reveals some fun facts about aural-tactile synesthesia:

Both hearing and touch, the scientists pointed out, rely on nerves set atwitter by vibration. A cell phone set to vibrate can be sensed by the skin of the hand, and the phone’s ring tone generates sound waves — vibrations of air — that move the eardrum…

A vibration that has a higher or lower frequency than a sound… tends to skew pitch perception up or down. Sounds can also bias whether a vibration is perceived.

The ability of skin and ears to confuse each other also extends to volume… A car radio may sound louder to a driver than his passengers because of the shaking of the steering wheel. “As you make a vibration more intense, what people hear seems louder,” says Yau. Sound, on the other hand, doesn’t seem to change how intense vibrations feel.

Max Mathews, electronic music pioneer, has died.

Though computer music is at the edge of the avant-garde today, its roots go back to 1957, when Mathews wrote the first version of “Music,” a program that allowed an IBM 704 mainframe computer to play a 17-second composition. He quickly realized, as he put it in a 1963 article in Science, “There are no theoretical limits to the performance of the computer as a source of musical sounds.”

Rest in peace, Max.

UPDATE: I haven’t updated this blog in a while, and I realized after posting this that my previous post was about the 2010 Modulations concert. Max Mathews played at Modulations too, and that was the last time I saw him.

I finally got around to recording and mastering the set I played at the CCRMA Modulations show a few months back. Though I’ve been a drum and bass fan for many years, this year’s Modulations was the first time I’d mixed it for others. Hope you like it!

Modulations 2010
Drum & Bass | 40:00 | May 2010


Download (mp3, 82.7 MB)


1. Excision — System Check
2. Randomer — Synth Geek
3. Noisia — Deception
4. Bassnectar — Teleport Massive (Bassnectar Remix)
5. Moving Fusion, Shimon, Ant Miles — Underbelly
6. Brookes Brothers — Crackdown
7. The Ian Carey Project — Get Shaky (Matrix & Futurebound’s Nip & Tuck Mix)
8. Netsky — Eyes Closed
9. Camo & Krooked — Time Is Ticking Away feat. Shaz Sparks

Over the last few days this video has been something of a bombshell among my music-prone friends.

It’s called the Multi-Touch Light Table and it was created by East Bay-based artist/fidget-house DJ Gregory Kaufman. The video is beautifully put together, highlighting the importance of presentation when documenting new ideas.

I really like some of the interaction ideas presented in the video. Others, I’m not so sure about. But that’s all right: the significance of the MTLT is that it’s the first surface-based DJ tool that systematically accounts for the needs of an expert user.

Interestingly, even though it looks futuristic and expensive to us, interfaces like this will eventually be the most accessible artistic tools. Once multi-touch surfaces are ubiquitous, the easiest way to gain some capability will be to use inexpensive or open-source software. The physical interfaces created for DJing, such as Technics 1200s, are prosthetic objects (as are musical instruments), and will remain more expensive because mechanical contraptions will always be. Now, that isn’t to say that in the future our interfaces won’t evolve to become digital, networked, and multi-touch sensitive, or even that their physicality will be replaced with a digital haptic display. But one of the initial draws of the MTLT—the fact of its perfectly flat, clean interactive surface—seems exotic to us right now, and in the near future it will be the default.

Check out this flexible interface called impress. Flexible displays just look so organic and, well, impressive. One day these kinds of surface materials will become viable displays and they’ll mark a milestone in touch computing.

It’s natural to stop dancing between songs. The beat changes, the sub-rhythms reorient themselves, a new hook is presented and a new statement is made. But stopping dancing between songs is undesirable. We wish to lose ourselves in as many consecutive moments as possible. The art of mixing music is to fulfill our desire to dance along to continuous excellent music, uninterrupted for many minutes (or, in the best case, many hours) at a time. (Even if we don’t explicitly move our bodies to the music, when we listen our minds are dancing; the same rules apply.)

I don’t remember what prompted me to take that note, but it was probably not that the mixing was especially smooth.



A tomato hailing from Capay, California.

LHCSound is a site where you can listen to sonified data from the Large Hadron Collider. Some thoughts:

  • That’s one untidy heap of a website. Is this how it feels to be inside the mind of a brilliant physicist?
  • The name “LHCSound” refers to “Csound”, a programming language for audio synthesis and music composition. But how many of their readers will make the connection?
  • If they are expecting their readers to know what Csound is, then their explanation of the process they used for sonification falls way short. I want to know the details of how they mapped their data to synthesis parameters.
  • What great sampling material this will make. I wonder how long before we hear electronic music incorporating these sounds.

The Immersive Pinball demo I created for Fortune’s Brainstorm:Tech conference was featured in a BBC special on haptics.

I keep watching the HTC Sense unveiling video from Mobile World Congress 2010. The content is pretty cool, but I’m more fascinated by the presentation itself. Chief marketing officer John Wang gives a simply electrifying performance. It almost feels like an Apple keynote.

The iFeel_IM haptic interface has been making the rounds on the internet lately. I tried it at CHI 2010 a few weeks ago and liked it a lot. Affective (emotional haptic) interfaces are full of potential. iFeel_IM mashes together three separate innovations:

  • Touch feedback in several different places on the body: spine, tummy, waist.
  • Touch effects that are generated from emotional language.
  • Synchronization to visuals from Second Life.

All are very interesting. The spine haptics seemed a stretch to me, but the butterfly-in-the-tummy was surprisingly effective. The hug was good, but a bit sterile. Hug interfaces need nuance to bring them to the next level of realism.

The fact that the feedback is generated from the emotional language of another person seemed to be one of the major challenges—the software is built to extract emotionally-charged sentences using linguistic models. For example, if someone writes “I love you” to you, the haptic device on your tummy will react by creating a butterflies-in-the-stomach sensation. As an enaction devotee I would rather actuate a hug with a hug sensor. Something about the translation of words to haptics is difficult for me to accept. But it could certainly be a lot of fun in some scenarios!

I’ve re-recorded my techno mix Awake with significantly higher sound quality. So if you downloaded a copy be sure to replace it with the new file!

Awake
Techno | 46:01 | October 2009


Download (mp3, 92 MB)


1. District One (a.k.a. Bart Skils & Anton Pieete) — Dubcrystal
2. Saeed Younan — Kumbalha (Sergio Fernandez Remix)
3. Pete Grove — I Don’t Buy It
4. DBN — Asteroidz featuring Madita (D-Unity Remix)
5. Wehbba & Ryo Peres — El Masnou
6. Broombeck — The Clapper
7. Luca & Paul — Dinamicro (Karotte by Gregor Tresher Remix)
8. Martin Worner — Full Tilt
9. Joris Voorn — The Deep

I recently started using Eclipse on OS X and it was so unresponsive, it was almost unusable. Switching tabs was slow, switching perspectives was hella slow. I searched around the web for a solid hour for how to make it faster and finally found the solution. Maybe someone can use it.

My machine is running OS X 10.5, and I have 2 GB of RAM. (This is important because the solution requires messing with how Eclipse handles memory. If you have a different amount of RAM, these numbers may not work and you’ll need to fiddle with them.)

  • Save your work and quit Eclipse.
  • Open the Eclipse application package by right-clicking (or Control-clicking) Eclipse.app and selecting “Show Package Contents.”
  • Navigate to Contents→MacOS and open “eclipse.ini” in your favorite text editor.
  • Edit the line that starts with “-XX:MaxPermSize” to say “-XX:MaxPermSize=128m”.
  • Before that line, add a line that says “-XX:PermSize=64m”.
  • Edit the line that starts with “-Xms” to say “-Xms40m”.
  • Edit the line that starts with “-Xmx” to say “-Xmx768m”.
  • Save & relaunch Eclipse.

Worked like a charm for me.
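For reference, after those edits the memory-related lines of my eclipse.ini look like this (only these flags change; if you have a different amount of RAM, scale the numbers to taste):

```
-XX:PermSize=64m
-XX:MaxPermSize=128m
-Xms40m
-Xmx768m
```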


Mapping and dimensionality of a cloth-based sound instrument

David Birnbaum*, Freida Abtan†, Sha Xin Wei†, Marcelo M. Wanderley*
*Input Devices and Music Interaction Laboratory / CIRMMT, McGill University
†Topological Media Lab / Hexagram, Concordia University
Montreal, Canada

Abstract

This paper presents a methodology for data extraction and sound production derived from cloth that prompts “improvised play” rather than rigid interaction metaphors based on preexisting cognitive models. The research described in this paper is a part of a larger effort to uncover the possibilities of using sound to prompt emergent play behaviour with pliable materials. This particular account documents the interactivation of a stretched elastic cloth with an integrated sensor array called the “Blanket.”

Keywords

pliable interface, gesture tracking, mapping, emergent play

1. Introduction

Interactivity can be characterized in many different ways. Structural models of gesture have informed efforts to design interactive sound systems, particularly for music performance [2]. Principles derived from human-computer interaction and music cognition studies considering task complexity [17], cognitive load [7], and engineering considerations [10] are common bases for design. At the level of design theory, a conceptual framework for digital musical instrument design proposed by two of the authors applies human-machine interaction and semiotics to musical performance [8].

At the same time, theories of embodiment suggest modes of interaction behaviour may emerge outside the cognitive or linguistic realms. The design of a gestural interface with sound feedback need not rely on semiotics or an assumption of an agent actor. Avoiding such assumptions eliminates a reliance on intentionality for control, and promotes emergent play behaviours [18].

Incorporating aspects of both of these paradigms, this project is part of Wearable Sounds Gestural Instruments (WYSIWYG), a research effort aiming to create a suite of soft, cloth-based controllers that transform freehand gestures into sounds. These “sound instruments” [19][5] can be embedded into furnishings or rooms, or used as props in improvised play. Sound responds to diverse input variables such as proximity, movement, and history of activity. The interactions are designed in the spirit of games such as hide-and-seek, blind-man’s buff, and simon-says, working well with a variable number of players in live, ad hoc, co-present events. The design goal for the “wysiwyg” described in this paper was to use a simple fabric-based interface to explore the way synthesis methods may be used to represent interactions with cloth.

Fusing fabric art with digital feedback systems holds many possibilities because the basic interaction characteristics of fabric are so commonly experienced. Fabric is malleable, tangible, textural, and material. It promotes multisensory, haptic modes of exploration and manipulation. It carries a pre-existing context of gesture and expression that need not rely on linguistic tokens to represent interaction modes; instead, the surface dynamics of cloth generate recognizable states based on structural similarities. For example, a “fold” is recognizable even though the set of all possible transformations that could be called a fold is infinite. Characteristics such as these are independent of their specific physical manifestation in cloth.

Textile environments have been widely used in art and sound installations. Electronics have been embedded into articles of clothing as a platform for interactive performance [9][12], textiles themselves have been used as a malleable physical interface [19][11][13], and fabric-based installations have been created on an architectural scale [16][14]. A detailed physical model of textile motion has been created for musical control [3]; however, the user interface is graphical rather than physical.

2. The system

To explore these concepts, a physical interface was constructed. The “Blanket” is a 3×3-meter square piece of elastic fabric. Sewn into its top surface is a sensor system made up of 25 light-dependent resistors (LDRs) arranged in a 5×5 array. The control surface of the interface consists of the entirety of the cloth. In exhibition, the fabric is stretched by its corners and elevated about 1.5 meters off the ground. It may be touched, stretched, pushed up, pulled down, shaken, scrunched, or interacted with in any way the human body might be applied. Most notably, it is large enough for several people to interact with it simultaneously.

The LDRs become motion sensors with the use of theatrical lighting. Beams of light are strategically positioned perpendicular to the Blanket’s top surface so that when it is put into motion, the sensors are exposed to variable amounts of light as they are brought nearer and further from the center of the beams. The voltage output of the sensor system is sampled by an Arduino A/D converter [1]. Preprocessing, mapping, and synthesis take place in Max/MSP.
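The paper leaves the host-side acquisition details to Max/MSP, so purely as an illustration, here is a minimal Python sketch of how a 25-value sensor frame might be pulled off a serial port and shaped into the 5×5 grid. The port name, baud rate, and comma-separated line format are all assumptions, not part of the described system.

```python
import serial  # pyserial

def read_frame(port):
    """Read one line of 25 comma-separated LDR readings and shape it into a 5x5 grid.
    The line format and transport are assumptions; the installation itself does its
    preprocessing, mapping, and synthesis in Max/MSP."""
    line = port.readline().decode(errors="ignore").strip()
    values = [int(v) for v in line.split(",") if v]
    if len(values) != 25:
        return None  # drop incomplete or garbled frames
    return [values[i * 5:(i + 1) * 5] for i in range(5)]

# Example usage (hypothetical port):
# port = serial.Serial("/dev/ttyUSB0", 115200)
# frame = read_frame(port)
```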


Figure 1: The Blanket. (a) The interface at rest; (b) interaction.

3. Gesture tracking

Gesture has been defined as an intentionally expressive bodily motion recognized in a particular cultural context [6]. However the Blanket system does not acquire the gestures of the human interactors. Rather it maps cloth movement to sound parameters that promote precognitive interaction. In this sense, what is tracked is the “gesture” of the fabric rather than that of the humans, using the word gesture to refer to a single contour within the sensor system’s total response arising through the application of some data extraction method. Because intentionality is purposefully omitted from our software model, “noise” is also defined by the data analysis approach, and consists of all of the confounding factors when attempting to isolate a specific cloth movement. Differentiating signal from noise thus necessitates a specification of what constitutes a unitary contour in the data stream. A cloth gesture could be said to be the human interpretation of a contour in feature space arising from motion due to human(s).

Because the goal of this research is to determine how to generate meaningful sound derived from the “raw” physicality of cloth, feature extraction was limited to three functions: absolute activity (the values of all sensors added), sensor velocity, and activity “spike” (a sudden change in value that exceeds a preset threshold). Each of these functions makes available a dynamic data stream for mapping to audio parameters.
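As a minimal sketch of these three features, taking “sensor velocity” to mean the per-sensor difference between consecutive frames (the frame representation and threshold value are illustrative assumptions, not values from the paper):

```python
def extract_features(prev_frame, frame, spike_threshold=50):
    """Compute absolute activity, per-sensor velocity, and spike detection for one frame.
    `frame` and `prev_frame` are flat lists of the 25 sensor values; the threshold is
    an illustrative placeholder."""
    velocity = [b - a for a, b in zip(prev_frame, frame)]    # change since the last frame
    activity = sum(frame)                                    # the values of all sensors added
    spike = any(abs(v) > spike_threshold for v in velocity)  # sudden change exceeding threshold
    return activity, velocity, spike
```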

The sensor system is by its nature a two-dimensional array that moves in three-dimensional space, which prompts the question of how to sample the array. A sampling technique consists of a spatiotemporal sequencing of individual sensor outputs defining a unique “domain”. Domains are orderings of the sensor array that include all discrete sensing points once and only once, but may segment sensor readings into groups. These constraints are inspired by measure theory, in the hopes of obtaining a holistic “snapshot” of the state of the entire interface. (To test for the dimensionality of a set of N points, the number of points Nr inside a given radius r may be counted. Because the two-dimensional sensor array may be approximated as a “point cloud”, using all points once and only once within a ball of radius r will best represent its dimensionality.) Contours in the signal correlating to gesture are mapped to sound synthesis parameters depending on how the array is sampled. Three approaches were taken to sample the two-dimensional array, defining the domains of string, sectors, and atoms. Each method offers its own distinct approximation of the gestures traveling through the cloth.

3.1 String

The string model treats the data points as a space-filling curve. In this schema, the two-dimensional array is linearized by scanning all sensor values via a “walk” along the surface of the fabric, consolidating the total output of the sensor array to drive a single synthesis parameter. The domain’s differentiating characteristic is that it is a one-dimensional ordering of the set.

The path of the scanning sequence can be arbitrary, but physical properties of the cloth at each point on its surface have a profound effect on the string. For example, because the corners of the cloth are immobilized by support ropes, there is a damping effect that is most pronounced at the Blanket’s corners and edges, causing an increase in the relative kinetic motion of its center. When the motion of the interface resembles a vibrating membrane, modal physics affects sensor outputs. Two scanning paths were utilized to observe how physical factors influence the perceived meaning of the sound output: a spiral and a switchback.

The spiral walk begins at one of the outer corners of the interface, where motion is dampened by the support ropes, and spirals inward to the Blanket’s center. For the first mode, the points with the lowest kinetic energy are concentrated at the beginning of the walk and those with the highest are at the end. In contrast, the switchback introduces a periodic damping effect, as peripheral points are equally distributed among internal points. (Arbitrarily chosen walks will, of course, always be subject to the effect of damping, light source placement, and human factors arising from the embodiment of interaction behaviours.) Because the string domain was used to map each sensor value to a virtual mass on a string for scanned synthesis using the scansynth∼ object [15][4], the effects of any walk are evident in the timbre of the resulting sound. Out of the three domains, the string model is most responsive to the kinetic properties of the entire interface.
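A sketch of the two walks over the 5×5 array, assuming (row, column) indexing with (0, 0) at an outer corner; the exact traversal used in the patch is not specified, so these orderings are only illustrative:

```python
def spiral_walk(n=5):
    """Corner-inward spiral ordering of an n x n grid, ending at the centre."""
    top, bottom, left, right = 0, n - 1, 0, n - 1
    path = []
    while top <= bottom and left <= right:
        path += [(top, c) for c in range(left, right + 1)]
        path += [(r, right) for r in range(top + 1, bottom + 1)]
        if top < bottom:
            path += [(bottom, c) for c in range(right - 1, left - 1, -1)]
        if left < right:
            path += [(r, left) for r in range(bottom - 1, top, -1)]
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return path

def switchback_walk(n=5):
    """Row-by-row ordering that reverses direction on alternate rows."""
    return [(r, c) for r in range(n)
            for c in (range(n) if r % 2 == 0 else range(n - 1, -1, -1))]

# Linearizing a 5x5 frame into the one-dimensional "string":
# string = [frame[r][c] for r, c in spiral_walk()]
```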

3.2 Sectors

The sector model is based on two principal functions, namely the dividing of the cloth into sonically-autonomous regions, and data smoothing within those regions by averaging all the points within them.

The matrixctl object in the patch allows a sound programmer to select multiple contiguous regions of the fabric and define them as sectors. The data associated with each sector can then be mapped to the control parameters of a sound instrument. Although no sensor ordering information is preserved within sectors, the membership clause of contiguity assures a relationship between the sectorized data points in two-dimensional space. Since sectorization downsamples the two-dimensional array to extract data values by area, the illusion of a unified response taking place over the entirety of each sector is created. As a result, gestures over the control surface take the form of an interaction between such distinct forces. Using each sector to control a separate instrument results in a polyphonic effect.

Two possible choices for boundary conditions include sectorization by surface geometry, or by physical properties such as damping conditions, proximity to light source, or performance-specific contingencies. Each choice implies an assumption about what data should be considered a unified force and so responds according to a different model of meaningful interaction. For example, sectorization by quadrant assumes that the most meaningful distinction between forces is absolute location of an interactor, whereas a partition into concentric regions emphasizes the physics of the control surface. For the final implementation, the decision was made to group sensor data together by quadrant after observing the temporal scale at which a gesture would resolve over the entirety of the interface. Two results of this decision are that participants concentrated in one quadrant are prompted by sound to cooperate, and participants located in separate regions have primary influence over the voice(s) associated with their sector.
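A minimal sketch of quadrant sectorization with in-sector averaging; how the centre row and column of the 5×5 array are divided is not stated in the paper, so the split index below is an assumption:

```python
import numpy as np

def quadrant_sectors(frame, split=3):
    """Average the sensor values within each quadrant of a 5x5 frame.
    Rows/columns 0-2 are folded into the upper/left sectors and 3-4 into the
    lower/right ones; this handling of the centre line is an assumption."""
    f = np.asarray(frame, dtype=float).reshape(5, 5)
    return {
        "upper_left":  f[:split, :split].mean(),
        "upper_right": f[:split, split:].mean(),
        "lower_left":  f[split:, :split].mean(),
        "lower_right": f[split:, split:].mean(),
    }
```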

The sector approach is not without its weaknesses. Averaging several points runs the risk of including sensor data that is either redundant or not engaged in the interaction, as do the constraints imposed by contiguity and the inclusion of each data point exactly once. Additionally, sampling fewer than all sensors may give an equivalent result, while the activity of non-contiguous regions may be accurately represented by one voice due to modal properties of the interface. However these weaknesses can be seen as the price paid for choosing to preserve a holistic representation of the surface.

The quadrants were sonified by mixing four granular synthesizers. The intensity of each voice was controlled by the overall amount of activity in the associated quadrant, while textural properties such as amount of grains and pitch were controlled by rate-of-change parameters.

3.3 Atoms

Mapping each data point to its own individual sound generator utilizes the entirety of the interface’s output without making any assumptions about ordering or location. Treated as ‘atoms,’ each sensor is given a unique voice independent of the state of the entire interface. However, because the synthesizer responds to each input simultaneously, sound is very tightly coupled to gesture, which is expressed as temporal variation in the atoms. A relationship between points is manifest in the response of the sound instrument, but is not inherent in the definition of the domain. Without any ordering or grouping information whatsoever, this domain could be said to have zero dimensions; it makes no assertions about the dimensionality of the ambient object. Because assumptions about dimensionality are minimized, this approach may be said to avoid corruption of the data as a result of mapping choices. Gesture is represented in the sound, but not in the data.

The synthesis implementation consists of a wavetable “scrubbing” technique. Each atom controls the position of the playback head in its own waveform∼ object. Velocity is mapped to the scrubbing speed and direction, so that kinetic energy and direction of motion are represented by pitch and timbre, respectively. A smoothing function has also been applied to act as a threshold, so that minute changes in sensor outputs while the interface is at rest do not cause low frequency noise.
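A rough sketch of the per-atom control logic: each sensor's thresholded velocity scrubs its own playback position, so kinetic energy and direction of motion set pitch and timbre. The gain, threshold, and data structures are illustrative assumptions; the actual implementation drives waveform∼ objects in Max/MSP.

```python
def update_playheads(playheads, velocities, table_len, gain=0.01, threshold=2.0):
    """Advance each atom's wavetable position by its sensor velocity.
    Velocities below the threshold are ignored, acting as the smoothing stage that
    keeps sensor jitter at rest from producing low-frequency noise."""
    for i, v in enumerate(velocities):
        if abs(v) < threshold:
            continue
        playheads[i] = (playheads[i] + gain * v) % table_len  # speed and direction from velocity
    return playheads
```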

Figure 2: Two possible paths for defining a “string” domain. (a) Spiral; (b) switchback.

Figure 3: Two “sector” domains. (a) Quadrants; (b) concentric rings.

Figure 4: Atoms.

4. Discussion

If the strict adherence to the definition of a domain is relaxed, many possibilities emerge. Using subsets of non-contiguous groupings of points might allow greater freedom to account for redundant or unnecessary information. Periodic motion or relative position may in fact be detectable without all of the sensor data.

The three dimension-based mapping strategies outlined here are similar in that they are ways of viewing the motion of a whole object. In practice sound instruments can adopt each paradigm in succession or can incorporate all three into their data extraction process. The three approaches may be used together in the same performance—either in parallel, to control different aspects of the sound feedback, or in sequence, to delineate game stages. Used in parallel, the domains could control multiple timbral features of the same sound instrument. In sequence, the shift from one domain to another could symbolize a state change arising either from interaction events or compositional choices. Further, audio output can be produced by many sound instruments simultaneously, each enforcing a separate abstraction of the same data stream. The aptness of a given domain may be based on the sonic result of perceived gestures, or the ability of sonic feedback to direct gesture, indicate compositional choices, or influence play behavior.

5. Acknowledgments

WYSIWYG is Freida Abtan, David Birnbaum, David Gauthier, Rodolphe Koehly, Doug van Nort, Elliot Sinyor, Sha Xin Wei, and Marcelo M. Wanderley. This project has been funded by Hexagram. The authors would like to thank Eric Conrad, Harry Smoak, Marguerite Bromley, and Joey Berzowska.

6. References

[1] Arduino. http://www.arduino.cc/. Accessed on June 11, 2007.

[2] Cadoz, C. Instrumental gesture and musical composition. In Proceedings of the International Computer Music Conference, pages 1–12. Cologne, Germany, 1988.

[3] Chu, L. L. Musicloth: A design methodology for the development of a performance interface. In Proceedings of the International Computer Music Conference, pages 449–452. Beijing, China, 1999.

[4] Couturier, J. A scanned synthesis virtual instrument. In Proceedings of the Conference on New Interfaces for Musical Expression, pages 176–178. Dublin, Ireland, 2002.

[5] Fantauzza, J., Berzowska, J., Dow, S., Iachello, G., and Sha, X. W. Greeting dynamics using expressive softwear. In Ubicomp Adjunct Proceedings. Seattle, Washington, USA, 2003.

[6] Kendon, A. Gesture: Visible Action as Utterance. Cambridge University Press, 2004.

[7] Levitin, D. J., McAdams, S., Adams, R. L. Control parameters for musical instruments: A foundation for new mappings of gesture to sound. Organised Sound, 7(2):171–189, 2002.

[8] Malloch, J., Birnbaum, D. M., Sinyor, E., Wanderley, M. M. Towards a new conceptual framework for digital musical instruments. In Proceedings of the International Conference on Digital Audio Effects, pages 49–52. Montreal, Canada, 2006.

[9] Maubrey, B. Audio jackets and other electroacoustic clothes. Leonardo, 28(2):93–97, 1995.

[10] Miranda, E. R., Wanderley, M. M. New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. A/R Editions, 2006.

[11] Moores, J. Sonic fabric. Make: Technology On Your Time, 5, 2006.

[12] Orth, M., Smith, J. R., Post, E. R., Strickon, J. A., Cooper, E. B. Musical jacket. In International Conference on Computer Graphics and Interactive Techniques: ACM SIGGRAPH 98 Electronic art and animation catalog, page 38. New York, New York, USA, 1998.

[13] Robson, D. Play!: Sound toys for the non musical. In Proceedings of the 1st Workshop on New Interfaces for Musical Expression. Seattle, Washington, USA, 2001.

[14] Toth, J. The space of creativity: Hypermediating the beautiful and the sublime. PhD thesis, European Graduate School, 2005.

[15] Verplank, W., Mathews, M., Shaw, R. Scanned synthesis. In Proceedings of the International Computer Music Conference, pages 368–371. Berlin, Germany, 2000.

[16] Volz, W. Christo and Jeanne-Claude: The Gates, Central Park, New York City. Taschen America, LLC, 2005.

[17] Wanderley, M. M., Viollet, J., Isart, F., Rodet, X. On the choice of transducer technologies for specific musical functions. In Proceedings of the International Computer Music Conference, pages 244–247. Berlin, Germany, 2000.

[18] Sha, X. W. Resistance is fertile: Gesture and agency in the field of responsive media. Configurations, 10(3):439–472, 2002.

[19] Sha, X. W., Serita, Y., Fantauzza, J., Dow, S., Iachello, G., Fiano, V., Berzowska, J., Caravia, Y., Nain, D., Reitberger, W., Fistre, J. Demonstrations of expressive softwear and ambient media. In Ubicomp Adjunct Proceedings, pages 131–136, 2003.
