
Presented at CHI 2012, Touché is a capacitive system for pervasive, continuous sensing. Among other amazing capabilities, it can accurately sense gestures a user makes on his own body. “It is conceivable that one day mobile devices could have no screens or buttons, and rely exclusively on the body as the input surface.” Touché.
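
As I understand it, the trick behind Touché is to sweep the capacitive excitation signal across many frequencies instead of sensing at just one, so that different kinds of body contact produce recognizably different response profiles. A minimal sketch of that idea in Python (the function names and the nearest-neighbor matching are my own illustration, not the paper's implementation):

    import numpy as np

    def capacitive_profile(measure_response, freqs_hz):
        # Excite the electrode at each frequency in the sweep and record
        # the sensed return signal; the shape of this curve changes with
        # how, and with how much of the body, the surface is touched.
        return np.array([measure_response(f) for f in freqs_hz])

    def classify_gesture(profile, templates):
        # Nearest-neighbor match against stored per-gesture profiles.
        return min(templates, key=lambda g: np.linalg.norm(profile - templates[g]))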

Noticing that many of the same sensors, silicon, and batteries used in smartphones are being used to create smarter artificial limbs, Fast Company concludes that the smartphone market is driving technology development useful for bionics. Interesting enough, but the article stops short of the next logical and far more interesting possibility: that phones themselves are becoming parts of our bodies. To what extent are smartphones already bionic organs, and how could we tell if they were? I’m actively researching design in this area – stay tuned for more about the body-incorporated phone.

A study provides evidence that talking into a person’s right ear can affect behavior more effectively than talking into the left.

One of the best known asymmetries in humans is the right ear dominance for listening to verbal stimuli, which is believed to reflect the brain’s left hemisphere superiority for processing verbal information.

I heavily prefer my left ear for phone calls. So much so that I have trouble understanding people on the phone when I use my right ear. Should I be concerned that my brain seems to be inverted?

Read on and it becomes clear that going beyond perceptual psychology, the scientists are terrifically shrewd:

Tommasi and Marzoli’s three studies specifically observed ear preference during social interactions in noisy night club environments. In the first study, 286 clubbers were observed while they were talking, with loud music in the background. In total, 72 percent of interactions occurred on the right side of the listener. These results are consistent with the right ear preference found in both laboratory studies and questionnaires and they demonstrate that the side bias is spontaneously displayed outside the laboratory.

In the second study, the researchers approached 160 clubbers and mumbled an inaudible, meaningless utterance, then waited for the subjects to turn their head and offer either their left or their right ear. They then asked them for a cigarette. Overall, 58 percent offered their right ear for listening and 42 percent their left. Only women showed a consistent right-ear preference. In this study, there was no link between the number of cigarettes obtained and the ear receiving the request.

In the third study, the researchers intentionally addressed 176 clubbers in either their right or their left ear when asking for a cigarette. They obtained significantly more cigarettes when they spoke to the clubbers’ right ear compared with their left.

I’m picturing the scientists using their grant money to pay cover at dance clubs and try to obtain as many cigarettes as possible – carefully collecting, then smoking, their data – with the added bonus that their experiment happens to require striking up conversation with clubbers of the opposite sex who are dancing alone. One assumes that, if the test subject happened to be attractive, once the cigarette was obtained (or not) the subject was invited out onto the terrace so the scientist could explain the experiment and his interesting line of work. Well played!

Another MRI study, this time investigating how we learn parts of speech:

The test consisted of working out the meaning of a new term based on the context provided in two sentences. For example, in the phrase “The girl got a jat for Christmas” and “The best man was so nervous he forgot the jat,” the noun jat means “ring.” Similarly, with “The student is nising noodles for breakfast” and “The man nised a delicious meal for her” the hidden verb is “cook.”

“This task simulates, at an experimental level, how we acquire part of our vocabulary over the course of our lives, by discovering the meaning of new words in written contexts,” explains Rodríguez-Fornells. “This kind of vocabulary acquisition based on verbal contexts is one of the most important mechanisms for learning new words during childhood and later as adults, because we are constantly learning new terms.”

The participants had to learn 80 new nouns and 80 new verbs. As they did, brain imaging showed that new nouns primarily activated the left fusiform gyrus (the underside of the temporal lobe associated with visual and object processing), while the new verbs activated part of the left posterior medial temporal gyrus (associated with semantic and conceptual aspects) and the left inferior frontal gyrus (involved in processing grammar).

This last bit was unexpected, at first. I would have guessed that verbs would be learned in regions of the brain associated with motor action. But according to this study, verbs seem to be learned only as grammatical concepts. In other words, knowledge of what it means to run is quite different from knowing how to run. Which makes sense if verb meaning is accessed by representational memory rather than procedural memory.

Researchers at the University of Tampere in Finland found that,

Interfaces that vibrate soon after we click a virtual button (on the order of tens of milliseconds) and whose vibrations have short durations are preferred. This combination simulates a button with a “light touch” – one that depresses right after we touch it and offers little resistance.

Users also liked virtual buttons that vibrated after a longer delay and then for a longer subsequent duration. These buttons behaved like ones that require more force to depress.

This is very interesting. When we think of multimodal feedback needing to make cognitive sense, synchronization first comes to mind. But there are many more synesthesias in our experience that can only be uncovered through careful reflection. To make an interface feel real, we must first examine reality.
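
To make the mapping concrete, here is a toy parameterization of the Tampere findings: one knob for perceived button "weight" that sets both the vibration onset delay and its duration. The specific numbers are my own placeholders, not values from the study:

    def button_feedback(weight):
        # weight in [0.0, 1.0]: 0 feels like a light-touch button that
        # depresses immediately; 1 feels like a stiff, high-force button.
        delay_ms = 10 + 90 * weight       # vibration onset after touch-down
        duration_ms = 15 + 60 * weight    # length of the vibration burst
        return delay_ms, duration_ms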

Researchers at the Army Research Office developed a vibrating belt with eight mini actuators — “tactors” — that signify all the cardinal directions. The belt is hooked up to a GPS navigation system, a digital compass and an accelerometer, so the system knows which way a soldier is headed even if he’s lying on his side or on his back.

The tactors vibrate at 250 hertz, which equates to a gentle nudge around the middle. Researchers developed a sort of tactile Morse code to signify each direction, helping a soldier determine which way to go, New Scientist explains. A soldier moving in the right direction will feel the proper pattern across the front of his torso. A buzz from the front, side and back tactors means “halt,” a pulsating movement from back to front means “move out,” and so on.
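
The direction encoding could be as simple as picking the tactor nearest the target bearing after compensating for the wearer's orientation. A sketch, assuming eight tactors spaced 45 degrees apart with index 0 at the front (the actual Army Research Office scheme isn't published in this detail):

    def tactor_for_heading(target_bearing_deg, body_heading_deg):
        # Rotate the target bearing into the wearer's frame using the
        # compass/accelerometer estimate of body orientation, then snap
        # to the nearest of the eight tactors around the waist.
        relative = (target_bearing_deg - body_heading_deg) % 360
        return round(relative / 45) % 8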

A t-shirt design by Derek Eads.

Recent research reveals some fun facts about aural-tactile synesthesia:

Both hearing and touch, the scientists pointed out, rely on nerves set atwitter by vibration. A cell phone set to vibrate can be sensed by the skin of the hand, and the phone’s ring tone generates sound waves — vibrations of air — that move the eardrum…

A vibration that has a higher or lower frequency than a sound… tends to skew pitch perception up or down. Sounds can also bias whether a vibration is perceived.

The ability of skin and ears to confuse each other also extends to volume… A car radio may sound louder to a driver than his passengers because of the shaking of the steering wheel. “As you make a vibration more intense, what people hear seems louder,” says Yau. Sound, on the other hand, doesn’t seem to change how intense vibrations feel.

Max Mathews, electronic music pioneer, has died.

Though computer music is at the edge of the avant-garde today, its roots go back to 1957, when Mathews wrote the first version of “Music,” a program that allowed an IBM 704 mainframe computer to play a 17-second composition. He quickly realized, as he put it in a 1963 article in Science, “There are no theoretical limits to the performance of the computer as a source of musical sounds.”

Rest in peace, Max.

UPDATE: I haven’t updated this blog in a while, and I realized after posting this that my previous post was about the 2010 Modulations concert. Max Mathews played at Modulations too, and that was the last time I saw him.

I finally got around to recording and mastering the set I played at the CCRMA Modulations show a few months back. Though I’ve been a drum and bass fan for many years, this year’s Modulations was the first time I’d mixed it for others. Hope you like it!

Modulations 2010
Drum & Bass | 40:00 | May 2010


Download (mp3, 82.7 MB)


1. Excision — System Check
2. Randomer — Synth Geek
3. Noisia — Deception
4. Bassnectar — Teleport Massive (Bassnectar Remix)
5. Moving Fusion, Shimon, Ant Miles — Underbelly
6. Brookes Brothers — Crackdown
7. The Ian Carey Project — Get Shaky (Matrix & Futurebound’s Nip & Tuck Mix)
8. Netsky — Eyes Closed
9. Camo & Krooked — Time Is Ticking Away feat. Shaz Sparks

Over the last few days this video has been quite a bombshell for many of my music-prone friends.

It’s called the Multi-Touch Light Table and it was created by East Bay-based artist/fidget-house DJ Gregory Kaufman. The video is beautifully put together, highlighting the importance of presentation when documenting new ideas.

I really like some of the interaction ideas presented in the video. Others, I’m not so sure about. But that’s all right: the significance of the MTLT is that it’s the first surface-based DJ tool that systematically accounts for the needs of an expert user.

Interestingly, even though it looks futuristic and expensive to us, interfaces like this will eventually be the most accessible artistic tools. Once multi-touch surfaces are ubiquitous, the easiest way to gain some capability will be to use inexpensive or open-source software. The physical interfaces created for DJing, such as Technics 1200s, are prosthetic objects (as are musical instruments), and will remain more expensive because mechanical contraptions always will be. Now, that isn’t to say that in the future our interfaces won’t evolve to become digital, networked, and multi-touch sensitive, or even that their physicality won’t be replaced with a digital haptic display. But one of the initial draws of the MTLT—the fact of its perfectly flat, clean interactive surface—seems exotic to us right now, and in the near future it will be the default.

Check out this flexible interface called impress. Flexible displays just look so organic and, well, impressive. One day these kinds of surface materials will become viable displays, and they’ll mark a milestone in touch computing.

It’s natural to stop dancing between songs. The beat changes, the sub-rhythms reorient themselves, a new hook is presented and a new statement is made. But stopping dancing between songs is undesirable. We wish to lose ourselves in as many consecutive moments as possible. The art of mixing music is to fulfill our desire to dance along to continuous excellent music, uninterrupted for many minutes (or, in the best case, many hours) at a time. (Even if we don’t explicitly move our bodies to the music, when we listen our minds are dancing; the same rules apply.)

I don’t remember what prompted me to take that note, but it was probably not that the mixing was especially smooth.



A tomato hailing from Capay, California.

LHCSound is a site where you can listen to sonified data from the Large Hadron Collider. Some thoughts:

  • That’s one untidy heap of a website. Is this how it feels to be inside the mind of a brilliant physicist?
  • The name “LHCSound” refers to “Csound”, a programming language for audio synthesis and music composition. But how many of their readers will make the connection?
  • If they are expecting their readers to know what Csound is, then their explanation of the process they used for sonification falls way short. I want to know the details of how they mapped their data to synthesis parameters; a toy example of what I mean is sketched after this list.
  • What great sampling material this will make. I wonder how long before we hear electronic music incorporating these sounds.
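
For what it's worth, here's the kind of mapping I'd want documented, as a toy example: collision energy mapped logarithmically onto pitch and transverse momentum onto loudness. The field names and ranges are invented; LHCSound may do something entirely different:

    import math

    def event_to_note(energy_gev, pt_gev, e_min=1.0, e_max=1000.0):
        # Log-map energy onto five octaves of MIDI pitch starting at C2,
        # and (crudely) map transverse momentum onto MIDI velocity.
        span = math.log(e_max / e_min)
        pitch = 36 + 60 * math.log(max(energy_gev, e_min) / e_min) / span
        velocity = min(127, max(1, int(pt_gev)))
        return round(pitch), velocity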

The Immersive Pinball demo I created for Fortune’s Brainstorm:Tech conference was featured in a BBC special on haptics.

I keep watching the HTC Sense unveiling video from Mobile World Congress 2010. The content is pretty cool, but I’m more fascinated by the presentation itself. Chief marketing officer John Wang gives a simply electrifying performance. It almost feels like an Apple keynote.

The iFeel_IM haptic interface has been making rounds on the internet lately. I tried it at CHI 2010 a few weeks ago and liked it a lot. Affective (emotional haptic) interfaces are full of potential. IFeel_IM mashes together three separate innovations:

  • Touch feedback in several different places on the body: spine, tummy, waist.
  • Touch effects that are generated from emotional language.
  • Synchronization to visuals from Second Life.

All are very interesting. The spine haptics seemed a stretch to me, but the butterfly-in-the-tummy was surprisingly effective. The hug was good, but a bit sterile. Hug interfaces need nuance to bring them to the next level of realism.

The fact that the feedback is generated from the emotional language of another person seemed to be one of the major challenges—the software is built to extract emotionally-charged sentences using linguistic models. For example, if someone writes “I love you” to you, the haptic device on your tummy will react by creating a butterflies-like sensation. As an enaction devotee I would rather actuate a hug with a hug sensor. Something about the translation of words to haptics is difficult for me to accept. But it could certainly be a lot of fun in some scenarios!
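
At its simplest, that text-to-haptics pipeline could be keyword spotting, something like the sketch below. The real iFeel_IM linguistic models are surely more sophisticated; this is just to make the idea concrete, and the phrases and effect names are hypothetical:

    # Map emotionally charged phrases to (actuator, effect) pairs.
    EFFECTS = {
        "love": ("tummy", "butterflies"),
        "hug": ("waist", "squeeze"),
        "miss you": ("spine", "shiver"),
    }

    def haptic_events(message):
        # Return every effect whose trigger phrase appears in the message.
        text = message.lower()
        return [effect for phrase, effect in EFFECTS.items() if phrase in text]

    # haptic_events("I love you") -> [("tummy", "butterflies")]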

I’ve re-recorded my techno mix Awake with significantly higher sound quality. So if you downloaded a copy be sure to replace it with the new file!

Awake
Techno | 46:01 | October 2009


Download (mp3, 92 MB)


1. District One (a.k.a. Bart Skils & Anton Pieete) — Dubcrystal
2. Saeed Younan — Kumbalha (Sergio Fernandez Remix)
3. Pete Grove — I Don’t Buy It
4. DBN — Asteroidz featuring Madita (D-Unity Remix)
5. Wehbba & Ryo Peres — El Masnou
6. Broombeck — The Clapper
7. Luca & Paul — Dinamicro (Karotte by Gregor Tresher Remix)
8. Martin Worner — Full Tilt
9. Joris Voorn — The Deep

I recently started using Eclipse on OS X and it was so unresponsive, it was almost unusable. Switching tabs was slow, switching perspectives was hella slow. I searched around the web for a solid hour for how to make it faster and finally found the solution. Maybe someone can use it.

My machine is running OS X 10.5, and I have 2 GB of RAM. (This is important because the solution requires messing with how Eclipse handles memory. If you have a different amount of RAM, these numbers may not work and you’ll need to fiddle with them.)

  • Save your work and quit Eclipse.
  • Open the Eclipse application package by right-clicking (or Control-clicking) on Eclipse.app and select “Show Package Contents.”
  • Navigate to Contents→MacOS and open “eclipse.ini” in your favorite text editor.
  • Edit the line that starts with “-XX:MaxPermSize” to say “-XX:MaxPermSize=128m”.
  • Before that line, add a line that says “-XX:PermSize=64m”.
  • Edit the line that starts with “-Xms” to say “-Xms40m”.
  • Edit the line that starts with “-Xmx” to say “-Xmx768m”.
  • Save & relaunch Eclipse.

Worked like a charm for me.


Merleau-Ponty’s philosophy

On 24, Sep 2009 | 3 Comments | In books | By David Birnbaum

“Yes or no: do we have a body—that is, not a permanent object of thought, but a flesh that suffers when it is wounded, hands that touch?” — The Visible and the Invisible

Merleau-Ponty’s Philosophy by Lawrence Hass was the first full book I read on the great phenomenologist. If you’re fascinated by sensation, perception, synesthesia, metaphor, and flesh (and frankly, who isn’t?), please read it! It offers many wonderful revelations. I’ll briefly review the following topics from the book:

  • Sensation/perception is a false dichotomy.
  • Perception is “contact with otherness.”
  • Synesthesia is a constant feature of experience.
  • The concepts of “reversibility” and “flesh”



Organ-ize

On 03, Sep 2009 | One Comment | In cognition, language, music | By David Birnbaum

I was inspired to research the words “organ” and “organized” after I read a statement made by Merleau-Ponty scholar Lawrence Hass that “perceptions are organized (organ-ized) information.” He included the hyphen to emphasize a very interesting point: it may be that our ability to organize our thoughts is rooted in a concrete aspect of embodiment. We have specialized organs and neural pathways for particular ranges of wave frequencies (light for the eyes, sound for the ears, vibration for the skin). So, it’s plausible that organization of thought may have its roots in the configuration of our sense organs. Astounding!

Here’s a typical definition of organize:

  • v. arrange in an orderly way
  • v. to make into a whole with unified and coherent relationships (yourdictionary.com)

These definitions aren’t satisfying. What makes an organization orderly, unified, and coherent? The definition Hass implies is much more illuminating: to be organized is to be divided according to the sense organs of a perceiver. Now we’re getting somewhere!

But moving in a slightly different direction, what the hell are we doing playing a musical instrument called an “organ”? And what does all this mean for Edgard Varèse’s famous definition of music as “organized sound”?

organ

  • n. from the Greek organon meaning “implement”, “musical instrument”, “organ of the body”, literally, “that with which one works” (Online Etymology Dictionary)
  • n. an instrument or means, as of action or performance
    (Dictionary.com)

Substituting “organ” in Varèse’s famous definition with these, the word “music” means:

  • music is sound with which one works
  • music is sound that is a means of action or performance

For the first time I understand what Varèse meant when he said music is “organized sound.” We use the word music to mean sound that is utilized by someone to work or perform. Nothing more, nothing less.


Embodied music cognition

On 08, Oct 2008 | No Comments | In books, cognition, music, neuroscience | By David Birnbaum

This is Your Brain on Music is a great introductory book on the neuroscience of music. Although I found it weighted a bit too much toward popular science for my liking, that was its stated purpose, and there was still plenty of good information in it.

Here we have an explanation of musical timing as an analogy for a moving body:

Virtually every culture and civilization considers movement to be an integral part of music making and listening. Rhythm is what we dance to, sway our bodies to, and tap our feet to… It is no coincidence that making music requires the coordinated, rhythmic use of our bodies, and that energy be transmitted from body movements to a musical instrument. (57)

‘Tempo’ refers to the pace of a musical piece—how quickly or slowly it goes by. If you tap your foot or snap your fingers in time to a piece of music, the tempo of the piece will be directly related to how fast or slow you are tapping. If a song is a living, breathing entity, you might think of the tempo as its gait—the rate at which it walks by—or its pulse—the rate at which the heart of the song is beating. The word ‘beat’ indicates the basic unit of measurement in a musical piece; this is also called the ‘tactus’. Most often, this is the natural point at which you would tap your foot or clap your hands or snap your fingers. (59)

Levitin also delves into the possible evolutionary reasons for music, noting that music seems to always go with dance, and that the concept of the expert musical performer is very recent:

When we ask about the evolutionary basis for music, it does no good to think about Britney or Bach. We have to think about what music was like around fifty thousand years ago. The instruments recovered from archeological sites can help us understand what our ancestors used to make music, and what kinds of melodies they listened to. Cave paintings, paintings on stoneware, and other pictorial artifacts can tell us something about the role that music played in daily life. We can also study contemporary societies that have been cut off from civilization as we know it, groups of people who are living in hunter-gatherer lifestyles that have remained unchanged for thousands of years. One striking find is that in every society of which we’re aware, music and dance are inseparable.

The arguments against music as an adaptation consider music only as disembodied sound, and moreover, as performed by an expert class for an audience. But it is only in the last five hundred years that music has become a spectator activity—the thought of a musical concert in which a class of “experts” performed for an appreciative audience was virtually unknown throughout our history as a species. And it has only been in the last hundred years or so that the ties between musical sound and human movement have been minimized. The embodied nature of music, the indivisibility of movement and sound, the anthropologist John Blacking writes, characterizes music across cultures and across times. (257)

I agree. Even though we may use modern technology to exploit musical cognitive faculties for maximum effect, the idea that music/dance is a counter-evolutionary accident seems wrong to me.

You can find the website that accompanies the book at yourbrainonmusic.com.


Enactive perception

On 27, Feb 2008 | No Comments | In books | By David Birnbaum


I just finished Action In Perception by Alva Noë. It’s a very readable introduction to the enactive view of perceptual consciousness, which argues that perception neither happens in us nor to us; rather, it’s something we do with our bodies, situated in the physical world, over time. Our knowledge of the way in which sensory stimulation varies as we control our bodies is what brings experience about. Without sensorimotor skill, a stimulus cannot constitute a percept. Noë presents empirical evidence for his claim, drawing on the phenomenology of change blindness as well as tactile vision substitution systems. I highly recommend the book.

The emphasis on embodied experience leads to the use of touch as a model for perception, rather than the traditional vision-based approach. Here’s an excerpt:

Touch acquires spatial content—comes to represent spatial qualities—as a result of the ways touch is linked to movement and to our implicit understanding of the relevant tactile-motor dependencies governing our interaction with objects. [Philosopher George Berkeley] is right that touch is, in fact, a kind of movement. When a blind person explores a room by walking about in it and probing with his or her hands, he or she is perceiving by touch. Crucially, it is not only the use of the hands, but also the movement in and through the space in which the tactile activity consists. Very fine movements of the fingers and very gross wanderings across a landscape can each constitute exercises of the sense of touch. Touch, in all such cases, is movement. (At the very least, it is movement of something relative to the perceiver.) These Berkeleyan ideas form a theme, more recently, in the work of [Brian O'Shaughnessy's book "Consciousness and World"]. He writes: “touch is in a certain respect the most important and certainly the most primordial of the senses. The reason is, that it is scarcely to be distinguished from the having of a body that can act in physical space”…

But why hold that touch is the only active sense modality? As we have stressed, the visual world is not given all at once, as in a picture. The presence of detail consists not in its representation now in consciousness, but in our implicit knowledge now that we can represent it in consciousness if we want, by moving the eyes or by turning the head. Our perceptual contact with the world consists, in large part, in our access to the world thanks to our possession of sensorimotor knowledge.

Here, no less than in the case of touch, spatial properties are available due to links to movement. In the domain of vision, as in that of touch, spatial properties present themselves to us as “permanent possibilities of movement.” As you move around the rectangular object, its visible profile deforms and transforms itself. These deformations and transformations are reversible. Moreover, the rules governing the transformation are familiar, at least to someone who has learned the relevant laws of visuomotor contingency. How the item looks varies systematically as a function of your movements. Your experience of it as cubical consists in your implicit understanding of the fact that the relevant regularity is being observed.

Virtual and augmented reality interface design practices have already begun to demonstrate these concepts. Head mounted augmented reality displays sense the user’s eye and body movements to construct virtual percepts. Head related transfer functions (HRTFs) synthesize sound as it would be heard by an organism with certain physical characteristics, in a particular environment. It seems to me that an important implication for enactive interface design is that haptic sensory patterns can lead to perceptual experience in all sensory modes (vision, hearing, touch). Thus, all human-computer interaction/user experience can be viewed in a haptic context.
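
The HRTF technique itself is just convolution: filter a mono source with the left- and right-ear impulse responses measured for a given direction. A minimal sketch, assuming you have head-related impulse responses from a database such as CIPIC:

    import numpy as np

    def spatialize(mono, hrir_left, hrir_right):
        # Convolve the source with each ear's head-related impulse
        # response to render a binaural (two-channel) signal.
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)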
