Presented at CHI 2012, Touché is a capacitive system for pervasive, continuous sensing. Among other amazing capabilities, it can accurately sense gestures a user makes on his own body. “It is conceivable that one day mobile devices could have no screens or buttons, and rely exclusively on the body as the input surface.” Touché.

Noticing that many of the same sensors, silicon, and batteries used in smartphones are being used to create smarter artificial limbs, Fast Company concludes that the smartphone market is driving technology development useful for bionics. Interesting enough, but the article stops short of the next logical and far more interesting possibility: that phones themselves are becoming parts of our bodies. To what extent are smartphones already bionic organs, and how could we tell if they were? I’m actively researching design in this area – stay tuned for more about the body-incorporated phone.

A study provides evidence that talking into a person’s right ear can affect behavior more effectively than talking into the left.

One of the best known asymmetries in humans is the right ear dominance for listening to verbal stimuli, which is believed to reflect the brain’s left hemisphere superiority for processing verbal information.

I heavily prefer my left ear for phone calls. So much so that I have trouble understanding people on the phone when I use my right ear. Should I be concerned that my brain seems to be inverted?

Read on and it becomes clear that going beyond perceptual psychology, the scientists are terrifically shrewd:

Tommasi and Marzoli’s three studies specifically observed ear preference during social interactions in noisy night club environments. In the first study, 286 clubbers were observed while they were talking, with loud music in the background. In total, 72 percent of interactions occurred on the right side of the listener. These results are consistent with the right ear preference found in both laboratory studies and questionnaires and they demonstrate that the side bias is spontaneously displayed outside the laboratory.

In the second study, the researchers approached 160 clubbers, mumbled an inaudible, meaningless utterance, and waited for the subjects to turn their heads and offer either their left or their right ear. They then asked them for a cigarette. Overall, 58 percent offered their right ear for listening and 42 percent their left. Only women showed a consistent right-ear preference. In this study, there was no link between the number of cigarettes obtained and the ear receiving the request.

In the third study, the researchers intentionally addressed 176 clubbers in either their right or their left ear when asking for a cigarette. They obtained significantly more cigarettes when they spoke to the clubbers’ right ear compared with their left.

I’m picturing the scientists using their grant money to pay cover at dance clubs and try to obtain as many cigarettes as possible – carefully collecting, then smoking, their data – with the added bonus that their experiment happens to require striking up conversation with clubbers of the opposite sex who are dancing alone. One assumes that, if the test subject happened to be attractive, once the cigarette was obtained (or not) the subject was invited out onto the terrace so the scientist could explain the experiment and his interesting line of work. Well played!

Another MRI study, this time investigating how we learn parts of speech:

The test consisted of working out the meaning of a new term based on the context provided in two sentences. For example, in the phrase “The girl got a jat for Christmas” and “The best man was so nervous he forgot the jat,” the noun jat means “ring.” Similarly, with “The student is nising noodles for breakfast” and “The man nised a delicious meal for her” the hidden verb is “cook.”

“This task simulates, at an experimental level, how we acquire part of our vocabulary over the course of our lives, by discovering the meaning of new words in written contexts,” explains Rodríguez-Fornells. “This kind of vocabulary acquisition based on verbal contexts is one of the most important mechanisms for learning new words during childhood and later as adults, because we are constantly learning new terms.”

The participants had to learn 80 new nouns and 80 new verbs. As they did, brain imaging showed that the new nouns primarily activated the left fusiform gyrus (the underside of the temporal lobe, associated with visual and object processing), while the new verbs activated part of the left posterior medial temporal gyrus (associated with semantic and conceptual aspects) and the left inferior frontal gyrus (involved in processing grammar).

This last bit was unexpected, at first. I would have guessed that verbs would be learned in regions of the brain associated with motor action. But according to this study, verbs seem to be learned only as grammatical concepts. In other words, knowledge of what it means to run is quite different from knowing how to run. Which makes sense if verb meaning is accessed as declarative knowledge rather than procedural know-how.

Researchers at the University of Tampere in Finland found that,

Interfaces that vibrate soon after we click a virtual button (on the order of tens of milliseconds) and whose vibrations have short durations are preferred. This combination simulates a button with a “light touch” – one that depresses right after we touch it and offers little resistance.

Users also liked virtual buttons that vibrated after a longer delay and then for a longer subsequent duration. These buttons behaved like ones that require more force to depress.

This is very interesting. When we think of multimodal feedback needing to make cognitive sense, synchronization first comes to mind. But there are many more synesthesias in our experience that can only be uncovered through careful reflection. To make an interface feel real, we must first examine reality.
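Out of curiosity, here is how those two preferred button profiles might be parameterized in code. A minimal sketch in Python, assuming a hypothetical vibrate() driver; the study only gives orders of magnitude (“tens of milliseconds”), so the exact numbers are invented:

    import time

    def actuate_button(vibrate, delay_ms, duration_ms):
        # Wait out the touch-to-feedback latency, then fire the actuator.
        time.sleep(delay_ms / 1000.0)
        vibrate(duration_ms)

    # The two profiles users preferred, per the Tampere findings:
    LIGHT_TOUCH = dict(delay_ms=20, duration_ms=15)  # quick, short buzz: low resistance
    HEAVY_TOUCH = dict(delay_ms=80, duration_ms=60)  # later, longer buzz: stiffer button

    actuate_button(lambda ms: print(f"buzz {ms} ms"), **LIGHT_TOUCH)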

Researchers at the Army Research Office developed a vibrating belt with eight mini actuators — “tactors” — that signify all the cardinal directions. The belt is hooked up to a GPS navigation system, a digital compass and an accelerometer, so the system knows which way a soldier is headed even if he’s lying on his side or on his back.

The tactors vibrate at 250 hertz, which equates to a gentle nudge around the middle. Researchers developed a sort of tactile Morse code to signify each direction, helping a soldier determine which way to go, New Scientist explains. A soldier moving in the right direction will feel the proper pattern across the front of his torso. A buzz from the front, side and back tactors means “halt,” a pulsating movement from back to front means “move out,” and so on.
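Here is a toy version of how such a tactile vocabulary might be encoded. The article does not publish the actual patterns, so the tactor layout and sequences below are invented, with a print statement standing in for the actuator driver:

    # Eight tactors worn around the waist, indexed clockwise from the navel:
    # 0 = front, 2 = right, 4 = back, 6 = left; odd indices are diagonals.
    COMMANDS = {
        "halt":     [(0, 2, 4, 6)],                        # front, sides, and back at once
        "move out": [(4,), (3, 5), (2, 6), (1, 7), (0,)],  # pulse travels back to front
    }

    def buzz(tactors, freq_hz, duration_ms):
        # Stand-in for the real driver: log what would vibrate.
        print(f"vibrate tactors {tactors} at {freq_hz} Hz for {duration_ms} ms")

    def play(command, pulse_ms=200):
        # Drive each tactor group in sequence at the belt's 250 Hz.
        for group in COMMANDS[command]:
            buzz(group, freq_hz=250, duration_ms=pulse_ms)

    play("move out")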

A t-shirt design by Derek Eads.

Recent research reveals some fun facts about aural-tactile synesthesia:

Both hearing and touch, the scientists pointed out, rely on nerves set atwitter by vibration. A cell phone set to vibrate can be sensed by the skin of the hand, and the phone’s ring tone generates sound waves — vibrations of air — that move the eardrum…

A vibration that has a higher or lower frequency than a sound… tends to skew pitch perception up or down. Sounds can also bias whether a vibration is perceived.

The ability of skin and ears to confuse each other also extends to volume… A car radio may sound louder to a driver than his passengers because of the shaking of the steering wheel. “As you make a vibration more intense, what people hear seems louder,” says Yau. Sound, on the other hand, doesn’t seem to change how intense vibrations feel.

Max Mathews, electronic music pioneer, has died.

Though computer music is at the edge of the avant-garde today, its roots go back to 1957, when Mathews wrote the first version of “Music,” a program that allowed an IBM 704 mainframe computer to play a 17-second composition. He quickly realized, as he put it in a 1963 article in Science, “There are no theoretical limits to the performance of the computer as a source of musical sounds.”

Rest in peace, Max.

UPDATE: I haven’t updated this blog in a while, and I realized after posting this that my previous post was about the 2010 Modulations concert. Max Mathews played at Modulations too, and that was the last time I saw him.

I finally got around to recording and mastering the set I played at the CCRMA Modulations show a few months back. Though I’ve been a drum and bass fan for many years, this year’s Modulations was the first time I’d mixed it for others. Hope you like it!

Modulations 2010
Drum & Bass | 40:00 | May 2010

Download (mp3, 82.7 MB)


1. Excision — System Check
2. Randomer — Synth Geek
3. Noisia — Deception
4. Bassnectar — Teleport Massive (Bassnectar Remix)
5. Moving Fusion, Shimon, Ant Miles — Underbelly
6. Brookes Brothers — Crackdown
7. The Ian Carey Project — Get Shaky (Matrix & Futurebound’s Nip & Tuck Mix)
8. Netsky — Eyes Closed
9. Camo & Krooked — Time Is Ticking Away feat. Shaz Sparks

Over the last few days this video has been quite the bombshell among my music-prone friends.

It’s called the Multi-Touch Light Table and it was created by East Bay-based artist/fidget-house DJ Gregory Kaufman. The video is beautifully put together, highlighting the importance of presentation when documenting new ideas.

I really like some of the interaction ideas presented in the video. Others, I’m not so sure about. But that’s all right: the significance of the MTLT is that it’s the first surface-based DJ tool that systematically accounts for the needs of an expert user.

Interestingly, even though it looks futuristic and expensive to us, interfaces like this will eventually be the most accessible artistic tools. Once multi-touch surfaces are ubiquitous, the easiest way to gain some capability will be to use inexpensive or open-source software. The physical interfaces created for DJing, such as Technics 1200s, are prosthetic objects (as are musical instruments), and will remain more expensive, as mechanical contraptions always will be. Now, that isn’t to say that in the future our interfaces won’t evolve to become digital, networked, and multi-touch sensitive, or even that their physicality will be replaced with a digital haptic display. But one of the initial draws of the MTLT—the fact of its perfectly flat, clean interactive surface—seems exotic to us right now, and in the near future it will be the default.

Check out this flexible interface called impress. Flexible displays just look so organic and, well, impressive. One day these kinds of surface materials will become viable displays, and they’ll mark a milestone in touch computing.

It’s natural to stop dancing between songs. The beat changes, the sub-rhythms reorient themselves, a new hook is presented and a new statement is made. But stopping dancing between songs is undesirable. We wish to lose ourselves in as many consecutive moments as possible. The art of mixing music is to fulfill our desire to dance along to continuous excellent music, uninterrupted for many minutes (or, in the best case, many hours) at a time. (Even if we don’t explicitly move our bodies to the music, when we listen our minds are dancing; the same rules apply.)

I don’t remember what prompted me to take that note, but it was probably not that the mixing was especially smooth.



A tomato hailing from Capay, California.

LHCSound is a site where you can listen to sonified data from the Large Hadron Collider. Some thoughts:

  • That’s one untidy heap of a website. Is this how it feels to be inside the mind of a brilliant physicist?
  • The name “LHCSound” refers to “Csound”, a programming language for audio synthesis and music composition. But how many of their readers will make the connection?
  • If they are expecting their readers to know what Csound is, then their explanation of the process they used for sonification falls way short. I want to know the details of how they mapped their data to synthesis parameters. (A sketch of what such a mapping might look like follows this list.)
  • What great sampling material this will make. I wonder how long before we hear electronic music incorporating these sounds.
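For what it’s worth, a bare-bones sonification usually looks something like this sketch. It is pure conjecture about LHCSound’s pipeline: made-up “collision energy” values are mapped to the pitch and loudness of sine grains, written out as a WAV file:

    import math, struct, wave

    RATE = 44100  # samples per second

    def sonify(energies, out_path="sonified.wav", grain_s=0.2):
        # Map each data point to a short sine grain: energy -> pitch and loudness.
        e_min, e_max = min(energies), max(energies)
        frames = bytearray()
        for e in energies:
            norm = (e - e_min) / ((e_max - e_min) or 1.0)
            freq = 220.0 * 2 ** (norm * 3)  # spread values across three octaves
            amp = 0.2 + 0.6 * norm          # higher energy reads as louder
            for i in range(int(RATE * grain_s)):
                sample = amp * math.sin(2 * math.pi * freq * i / RATE)
                frames += struct.pack("<h", int(sample * 32767))
        with wave.open(out_path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(RATE)
            w.writeframes(bytes(frames))

    sonify([7.2, 13.6, 2.4, 91.0, 45.5])  # fake event energies, say in GeV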

The Immersive Pinball demo I created for Fortune’s Brainstorm:Tech conference was featured in a BBC special on haptics.

I keep watching the HTC Sense unveiling video from Mobile World Congress 2010. The content is pretty cool, but I’m more fascinated by the presentation itself. Chief marketing officer John Wang gives a simply electrifying performance. It almost feels like an Apple keynote.

The iFeel_IM haptic interface has been making rounds on the internet lately. I tried it at CHI 2010 a few weeks ago and liked it a lot. Affective (emotional haptic) interfaces are full of potential. IFeel_IM mashes together three separate innovations:

  • Touch feedback in several different places on the body: spine, tummy, waist.
  • Touch effects that are generated from emotional language.
  • Synchronization to visuals from Second Life.

All are very interesting. The spine haptics seemed a stretch to me, but the butterfly-in-the-tummy was surprisingly effective. The hug was good, but a bit sterile. Hug interfaces need nuance to bring them to the next level of realism.

The fact that the feedback is generated from the emotional language of another person seemed to be one of the major challenges—the software is built to extract emotionally-charged sentences using linguistic models. For example, if someone writes “I love you” to you, the haptic device on your tummy will react by creating a butterflies-like sensation. As an enaction devotee I would rather actuate a hug with a hug sensor. Something about the translation of words to haptics is difficult for me to accept. But it could certainly be a lot of fun in some scenarios!
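A crude sketch of the idea, with keyword matching standing in for iFeel_IM’s actual linguistic models (the effect names and body sites are my own shorthand):

    HAPTIC_EFFECTS = {
        "love": ("tummy", "butterflies"),
        "hug":  ("waist", "squeeze"),
        "miss": ("spine", "shiver"),
    }

    def react(message):
        # Map emotionally charged words in a message to a body-located effect.
        for word, (site, effect) in HAPTIC_EFFECTS.items():
            if word in message.lower():
                print(f"actuate {effect} at {site}")  # stand-in for the device driver

    react("I love you")  # -> actuate butterflies at tummy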

I’ve re-recorded my techno mix Awake with significantly higher sound quality. So if you downloaded a copy be sure to replace it with the new file!

Awake
Techno | 46:01 | October 2009

Download (mp3, 92 MB)


1. District One (a.k.a. Bart Skils & Anton Pieete) — Dubcrystal
2. Saeed Younan — Kumbalha (Sergio Fernandez Remix)
3. Pete Grove — I Don’t Buy It
4. DBN — Asteroidz featuring Madita (D-Unity Remix)
5. Wehbba & Ryo Peres — El Masnou
6. Broombeck — The Clapper
7. Luca & Paul — Dinamicro (Karotte by Gregor Tresher Remix)
8. Martin Worner — Full Tilt
9. Joris Voorn — The Deep

I recently started using Eclipse on OS X, and it was so unresponsive it was almost unusable. Switching tabs was slow; switching perspectives was hella slow. I searched the web for a solid hour for a way to make it faster and finally found the solution. Maybe someone else can use it.

My machine is running OS X 10.5, and I have 2 GB of RAM. (This is important because the solution requires messing with how Eclipse handles memory. If you have a different amount of RAM, these numbers may not work and you’ll need to fiddle with them.)

  • Save your work and quit Eclipse.
  • Open the Eclipse application package by right-clicking (or Control-clicking) on Eclipse.app and select “Show Package Contents.”
  • Navigate to Contents→MacOS→, and open “eclipse.ini” in your favorite text editor.
  • Edit the line that starts with “-XX:MaxPermSize” to say “-XX:MaxPermSize=128m”.
  • Before that line, add a line that says “-XX:PermSize=64m”.
  • Edit the line that starts with “-Xms” to say “-Xms40m”.
  • Edit the line that starts with “-Xmx” to say “-Xmx768m”.
  • Save & relaunch Eclipse.
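For reference, after those edits the memory-related lines of my eclipse.ini look like this (the surrounding lines vary by Eclipse version, so fiddle rather than copy wholesale):

    -XX:PermSize=64m
    -XX:MaxPermSize=128m
    -Xms40m
    -Xmx768m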

Worked like a charm for me.

SensorWiki.org: A collaborative resource for researchers and instrument designers

Marcelo M. Wanderley, David M. Birnbaum, Joseph Malloch,
Elliot Sinyor, Julien Boissinot
Input Devices and Music Interaction Laboratory
Schulich School of Music, McGill University
Montreal, Canada

Abstract

This paper describes an online Wiki, a collaborative Web site designed to allow users to edit and add content. It was created at the Input Devices and Music Interaction Laboratory with the aim of promoting and supporting the construction of new musical interfaces. Although many individual universities and research centres offer sources of relevant information online, this project allows for easy sharing and dissemination of information across institutional and international boundaries. In this paper, the internal framework and categorization scheme of the Wiki are profiled, and each section is introduced. The benefits of joining this effort are clearly demonstrated, and possible future directions of the project are detailed.

Keywords

sensors, Wiki, collaborative website, open content

1. Introduction

A Wiki, a term derived from the Hawaiian for “quick,” is a Web site configured to collect and distribute free information, by allowing site viewers to edit its content [18]. It is made up of two systems working together — a template layer which converts a simple markup language written by users to HTML documents, and a version control log that records the time and page on which each individual edit occurs (illustrated in the sketch following the list below). These two parallel subsystems facilitate non-destructive editing and help safeguard against vandalism [10]. By its nature, a Wiki makes possible many-to-many communication amongst contributors and users [4]. There are several Wiki software engines available, both proprietary and open source [13] [9] [12] [16] [6]. WikiMatrix.org [17], a Web forum for discussion of Wiki engines, lists 46 separate systems. The size of a Wiki is usually measured by article count, but several other options exist [7], which include:

  • Total size of Wiki in bytes
  • Total equivalent pages in A4 paper
  • Total number and frequency table of words
  • Number of articles in different byte size ranges, such as 50, 250, or 1000 bytes
  • Size of articles N/L, 2N/L, … LN/L where N is the total number of articles and L is the number of languages in which articles are written
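A minimal sketch of the two subsystems described above, in Python (a toy model for exposition, not the implementation of any actual Wiki engine):

    import re, time

    def render(markup):
        # Template layer: convert two bits of wiki markup to HTML.
        html = re.sub(r"'''(.+?)'''", r"<b>\1</b>", markup)                  # bold
        html = re.sub(r"\[\[(.+?)\]\]", r'<a href="/wiki/\1">\1</a>', html)  # links
        return html

    revision_log = []  # version-control layer: one record per edit

    def edit(page, new_markup, author):
        # Record the time and page of each edit, so nothing is destroyed.
        revision_log.append({"page": page, "markup": new_markup,
                             "author": author, "time": time.time()})

    print(render("'''SensorWiki''' links to [[Sensors]]"))
    edit("Sensors", "New draft text.", "editor1")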

The largest Wiki by all measurements is Wikipedia, an online collaborative encyclopedia project [4]. It utilizes the WikiMedia Foundation’s GPL-licensed engine, MediaWiki [6], as does SensorWiki. While some Wikis require a short registration process for editors, as of writing the SensorWiki does not. (As of this publishing there have not yet been excessive amounts of vandalism, so a registration process has not proved necessary. If spam and editing by “bots” become an issue, this point will need to be reconsidered.) The advantage of Wikis as compared to traditional websites is that information can be quickly shared amongst all the interested members of a given field or community. Structured Wikis like SensorWiki attempt to combine this open nature with the format consistency and flexibility of a database application [14].

While some of the information compiled and developed by research labs is proprietary, there is an abundance of material that would be made public if the proper forum for its release were made available. The SensorWiki project is just such a forum.

2. Existing resources

There is much published research about sensors [11] [3]. Papers (for instance [1]) and, more recently, a book [8] address musical applications of sensors. Although these resources are useful because they have undergone a rigorous editing and review process, most of what is available stops short of providing specific data, such as information on sensor purchasing (where, how much), as this information changes often. This is also true for publications in journals and conference proceedings. To find practical information, one must usually conduct online research to determine what is available for a given task, compare specifications and prices, and finally make contact with a company and place an order. But entering the word “sensor” into a search engine such as Google yields millions of superfluous results. The process is not only extremely time-consuming, it is often repeated unnecessarily because the information gathered each time is not organized and preserved.

The creation of a resource tool about sensors in music is an ideal application for a Wiki system for several reasons: it serves as a single place to gather resources and information, it allows and encourages members of different institutions to share their findings and discoveries, and it can be updated quickly and easily as new information becomes available. It is thus complementary to other sources of information, such as articles and books.

3. SensorWiki.org

SensorWiki is located at www.sensorwiki.org, and is currently organized into three sections:

  • A comprehensive list of sensors, each with their own sensor description page.
  • A database of references on interfaces and interaction.
  • A section containing detailed tutorials related to sensor interface design.

3.1 Sensor list

Sensors in the list are organized in categories according to the physical phenomenon they sense, for instance, rotary position, linear position, or force/pressure/strain.

If a sensor can be used to sense more than one phenomenon, it is included under each category for completeness. However, this is only a repetition of the link; each leads to the same sensor description page.

Each sensor description page includes:

  • An introductory paragraph where background is provided and general issues about the sensor are discussed.
  • A section describing the practical use of the sensor, including ways of constructing conditioning circuits, mounting techniques, and type of signal output.
  • A list of companies that offer this sensor, including a data sheet, price list, and link to the company’s site.
  • Media featuring the sensor, such as images, video, circuit diagrams, and CircuitMaker [2] simulation files.
  • External links and a reference list of resources used in the writing of the article.

3.2 Reference list

The list of references started as a duplicate of the online resource Interactive Systems and Instrument Design in Music Working Group (ISIDM), meant to provide a knowledge-base for researchers and workers in the field [5]. Although the information included in the original knowledge-base hosted at ICMA (International Computer Music Association) and later at McGill is invaluable, it is incomplete and difficult to update (it is a standard HTML webpage and not editable by the public). It is here proposed that SensorWiki provides a much better forum for this knowledge-base, as references are easily added and edited, and the discussion pages allow public communication on changes or direction. Like the original ISIDM webpage, the SensorWiki knowledge-base provides links and references for the following topics:

  • Evolution of interactive electronic systems
  • Interaction & performance
  • Sensors & actuators
  • Interface design
  • Mapping
  • Software tools
  • Dance technology

Each topic consists of three sections: Introductory References, which introduce the topic to beginners, clarify some of the vocabulary used, and provide references and links to published work that outlines the topic; an exhaustive Bibliography in Computer Music Journal format; and an Internet Directory linking to useful resources available online. We hope that by moving the ISIDM to the SensorWiki we will achieve the level of collaboration originally intended by the working group.

3.3 Tutorials

SensorWiki also includes tutorial pages oriented toward guiding a reader through specific projects from beginning to end. These are usually prepared by individuals and subsequently edited in minor ways. Initial examples include an overview of basic sensor interfacing techniques and a lucid, complete tutorial on integrating USB into microcontroller projects.

3.4 Applications

It is hoped that this flexible and comprehensive resource will prove useful for researchers who wish to use sensors in their projects. Since it is hosted by a music technology research lab, the Wiki’s content tends to be music-oriented; however, the information it provides is also useful for robotics, installations, interactive dance systems, and research in a host of other fields. The reasons for joining the SensorWiki project should be clear—it will allow everyone in our community to benefit from the knowledge and experience of their colleagues. In the same spirit that the NIME and ICMC conferences foster research, innovation, and collaboration, the SensorWiki will allow individuals and schools working in the field to grow and learn faster together than they could apart.

4. Future expansion

The future direction and expansion of the SensorWiki project will depend heavily on groups and individuals not associated with the IDMIL or the Music Technology Area of McGill University. Although the initial contributors guided the design and formatting of the Wiki over a two-year development process, the content provided thus far is intended merely to initiate a dialogue and sharing of information that will benefit all of us equally, and the basic design and layout may change according to the suggestions of new users.

More accounts of individual experiences with the interface design process are much needed. Backgrounds on musical instruments, commercial controllers, and experimental designs could also be included. Plans for expansion over the coming year include a comprehensive list of actuator technologies to match the sensor list, as well as overviews and tutorials on haptic feedback systems.

5. Conclusions

Already, SensorWiki is a valuable repository of sensor information; as of writing there are approximately 49 legitimate content pages, with over 33,543 page views. Other institutions have begun linking to the site, such as the University of Oslo, the University of Washington’s Center for Digital Arts and Experimental Media, Stanford University’s Center for Computer Research in Music and Acoustics, and the Department of Music at Columbia University. We invite researchers and all individuals who wish to share their expertise to participate in the development of the SensorWiki. With broad participation, it could serve as a central place for open music technology information, a summary for those new to the field, and a valuable resource for students, hobbyists, and researchers.

6. Acknowledgments

The authors would like to acknowledge the contributions of many students and colleagues since the inception of this project in 2004, including Paul Kosek, Mark Zadel, Avrum Hollinger, Stephen Sinclair, Simon de Leon, Mark Marshall, and Darryl Cameron. This project is partially funded by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant to the first author and by research funds from the Centre for Interdisciplinary Research in Media Music and Technology (CIRMMT) to the third author. Preemptive thanks are also due to all those who have yet to contribute!

7. References

[1] Bongers, B. Physical Interfaces in the Electronic Arts. Interaction Theory and Interfacing Techniques for Real-time Performance. In Wanderley, M. M. and Battier, M., eds., Trends in Gestural Control of Music. Ircam — Centre Pompidou, 2000.
Full text (pdf, 6.2 MB)

[2] CircuitMaker software by Altium (no longer sold or supported). 2006.

[3] Fraden, J. Handbook of Modern Sensors. Physics, Design and Applications. Springer Verlag, 3rd edition, 2004.

[4] Lih, A. Wikipedia as participatory journalism: Reliable sources? Metrics for evaluating collaborative media as a news resource. In Proceedings of the International Symposium on Online Journalism, 2004.
Full text (pdf, 304 KB)

[5] Wanderley, M. M. Report on the First 18 Months of the ICMA/EMF Working Group on Interactive Systems and Instrument Design in Music – ISIDM. In ICMA Array, 2002.

[6] MediaWiki. http://www.mediawiki.org/wiki/mediawiki. 2006.

[7] Meta-Wiki. Wiki stats other than the article count. http://meta.wikimedia.org/wiki/, 2006.

[8] Miranda, E. R., Wanderley, M. M. New Digital Musical Instruments: Control and Interaction beyond the Keyboard. A/R Editions, 2006.

[9] MoinMoin. http://moinmoin.wikiwikiweb.de/. 2006.

[10] O’Murchu, I., Breslin, J. G., Decker, S. Online social and business networking communities. In Proceedings of ECAI 2004 Workshop on Application of Semantic Web Technologies to Web Communities, 2004.
Full text (pdf, 413 KB)

[11] Pallas-Areny, R., Webster, J. Sensors and Signal Conditioning. Wiley Interscience, 2nd edition, 2001.

[12] PhpWiki. http://phpwiki.sourceforge.net/. 2006.

[13] TWiki. http://twiki.org/. 2006.

[14] TWiki:StructuredWiki. http://www.twiki.org/cgi-bin/view/Codev/StructuredWiki. 2006.

[15] Voß, J. Measuring Wikipedia. In Proceedings of the 10th International Conference of the International Society for Scientometrics and Infometrics, 2005.
Full text (pdf, 356 KB)

[16] CTEWiki. http://ctewiki.pbworks.com/. 2006.

[17] WikiMatrix. http://www.wikimatrix.org/. 2006.

[18] Wikipedia:Wiki. http://en.wikipedia.org/wiki/wiki. 2006.
