
Presented at CHI 2012, Touché is a capacitive system for pervasive, continuous sensing. Among other amazing capabilities, it can accurately sense gestures a user makes on his own body. “It is conceivable that one day mobile devices could have no screens or buttons, and rely exclusively on the body as the input surface.” Touché.

Noticing that many of the same sensors, silicon, and batteries used in smartphones are being used to create smarter artificial limbs, Fast Company draws the conclusion that the smartphone market is driving technology development useful for bionics. Interesting enough, but the article doesn’t continue to the next logical and far more interesting possibility: that phones themselves are becoming parts of our bodies. To what extent are smartphones already bionic organs, and how could we tell if they were? I’m actively researching design in this area – stay tuned for more about the body-incorporated phone.

A study provides evidence that talking into a person’s right ear can affect behavior more effectively than talking into the left.

One of the best known asymmetries in humans is the right ear dominance for listening to verbal stimuli, which is believed to reflect the brain’s left hemisphere superiority for processing verbal information.

I heavily prefer my left ear for phone calls. So much so that I have trouble understanding people on the phone when I use my right ear. Should I be concerned that my brain seems to be inverted?

Read on and it becomes clear that, going beyond perceptual psychology, the scientists are terrifically shrewd:

Tommasi and Marzoli’s three studies specifically observed ear preference during social interactions in noisy night club environments. In the first study, 286 clubbers were observed while they were talking, with loud music in the background. In total, 72 percent of interactions occurred on the right side of the listener. These results are consistent with the right ear preference found in both laboratory studies and questionnaires and they demonstrate that the side bias is spontaneously displayed outside the laboratory.

In the second study, the researchers approached 160 clubbers and mumbled an inaudible, meaningless utterance and waited for the subjects to turn their head and offer either their left or their right ear. They then asked them for a cigarette. Overall, 58 percent offered their right ear for listening and 42 percent their left. Only women showed a consistent right-ear preference. In this study, there was no link between the number of cigarettes obtained and the ear receiving the request.

In the third study, the researchers intentionally addressed 176 clubbers in either their right or their left ear when asking for a cigarette. They obtained significantly more cigarettes when they spoke to the clubbers’ right ear compared with their left.

I’m picturing the scientists using their grant money to pay cover at dance clubs and try to obtain as many cigarettes as possible – carefully collecting, then smoking, their data – with the added bonus that their experiment happens to require striking up conversation with clubbers of the opposite sex who are dancing alone. One assumes that, if the test subject happened to be attractive, once the cigarette was obtained (or not) the subject was invited out onto the terrace so the scientist could explain the experiment and his interesting line of work. Well played!

Another MRI study, this time investigating how we learn parts of speech:

The test consisted of working out the meaning of a new term based on the context provided in two sentences. For example, in the phrase “The girl got a jat for Christmas” and “The best man was so nervous he forgot the jat,” the noun jat means “ring.” Similarly, with “The student is nising noodles for breakfast” and “The man nised a delicious meal for her” the hidden verb is “cook.”

“This task simulates, at an experimental level, how we acquire part of our vocabulary over the course of our lives, by discovering the meaning of new words in written contexts,” explains Rodríguez-Fornells. “This kind of vocabulary acquisition based on verbal contexts is one of the most important mechanisms for learning new words during childhood and later as adults, because we are constantly learning new terms.”

The participants had to learn 80 new nouns and 80 new verbs. As they did, brain imaging showed that new nouns primarily activated the left fusiform gyrus (the underside of the temporal lobe, associated with visual and object processing), while new verbs activated part of the left posterior medial temporal gyrus (associated with semantic and conceptual aspects) and the left inferior frontal gyrus (involved in processing grammar).

This last bit was unexpected, at first. I would have guessed that verbs would be learned in regions of the brain associated with motor action. But according to this study, verbs seem to be learned only as grammatical concepts. In other words, knowledge of what it means to run is quite different from knowing how to run. Which makes sense if verb meaning is accessed by representational memory rather than declarative memory.

Researchers at the University of Tampere in Finland found that,

Interfaces that vibrate soon after we click a virtual button (on the order of tens of milliseconds) and whose vibrations have short durations are preferred. This combination simulates a button with a “light touch” – one that depresses right after we touch it and offers little resistance.

Users also liked virtual buttons that vibrated after a longer delay and then for a longer subsequent duration. These buttons behaved like ones that require more force to depress.

This is very interesting. When we think of multimodal feedback needing to make cognitive sense, synchronization first comes to mind. But there are many more synesthesias in our experience that can only be uncovered through careful reflection. To make an interface feel real, we must first examine reality.
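To make the finding concrete, here is a toy sketch of how one might drive a vibrotactile actuator to simulate a “light” versus a “heavy” virtual button. The timing values are my own illustrative guesses, not the numbers from the Tampere study, and the vibrate() stub stands in for a real actuator driver:

    import time

    def vibrate(duration_ms: float) -> None:
        # Stand-in for a real actuator driver; prints instead of buzzing.
        print(f"buzz for {duration_ms:.0f} ms")

    def press_feedback(feel: str) -> None:
        # Simulate button "weight" with an onset delay plus a vibration duration.
        # These numbers are illustrative guesses, not the study's values.
        if feel == "light":
            delay_ms, duration_ms = 10, 20   # near-immediate, brief: a light touch
        else:
            delay_ms, duration_ms = 80, 60   # later onset, longer buzz: a stiff button
        time.sleep(delay_ms / 1000.0)
        vibrate(duration_ms)

    press_feedback("light")
    press_feedback("heavy")

Swap the stub for a real driver and the same two parameters, onset delay and duration, span the light-to-heavy continuum the study describes.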

Researchers at the Army Research Office developed a vibrating belt with eight mini actuators — “tactors” — that together cover the eight compass directions. The belt is hooked up to a GPS navigation system, a digital compass and an accelerometer, so the system knows which way a soldier is headed even if he’s lying on his side or on his back.

The tactors vibrate at 250 hertz, which equates to a gentle nudge around the middle. Researchers developed a sort of tactile Morse code to signify each direction, helping a soldier determine which way to go, New Scientist explains. A soldier moving in the right direction will feel the proper pattern across the front of his torso. A buzz from the front, side and back tactors means “halt,” a pulsating movement from back to front means “move out,” and so on.
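As a rough sketch of the direction-encoding idea (my own guess at an encoding, not the Army Research Office’s actual scheme), mapping a bearing to one of eight tactors is nearly a one-liner:

    def tactor_for_bearing(bearing_deg: float) -> int:
        # Map a relative bearing (0 = straight ahead, increasing clockwise)
        # to one of eight tactors spaced 45 degrees apart around the waist.
        return round(bearing_deg / 45.0) % 8

    assert tactor_for_bearing(0) == 0     # dead ahead -> front tactor
    assert tactor_for_bearing(90) == 2    # due right
    assert tactor_for_bearing(350) == 0   # wraps back around to the front

In practice the relative bearing would come from subtracting the torso heading (from the compass and accelerometer) from the GPS bearing to the next waypoint.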

A t-shirt design by Derek Eads.

Recent research reveals some fun facts about aural-tactile synesthesia:

Both hearing and touch, the scientists pointed out, rely on nerves set atwitter by vibration. A cell phone set to vibrate can be sensed by the skin of the hand, and the phone’s ring tone generates sound waves — vibrations of air — that move the eardrum…

A vibration that has a higher or lower frequency than a sound… tends to skew pitch perception up or down. Sounds can also bias whether a vibration is perceived.

The ability of skin and ears to confuse each other also extends to volume… A car radio may sound louder to a driver than his passengers because of the shaking of the steering wheel. “As you make a vibration more intense, what people hear seems louder,” says Yau. Sound, on the other hand, doesn’t seem to change how intense vibrations feel.

Max Mathews, electronic music pioneer, has died.

Though computer music is at the edge of the avant-garde today, its roots go back to 1957, when Mathews wrote the first version of “Music,” a program that allowed an IBM 704 mainframe computer to play a 17-second composition. He quickly realized, as he put it in a 1963 article in Science, “There are no theoretical limits to the performance of the computer as a source of musical sounds.”

Rest in peace, Max.

UPDATE: I haven’t updated this blog in a while, and I realized after posting this that my previous post was about the 2010 Modulations concert. Max Mathews played at Modulations too, and that was the last time I saw him.

I finally got around to recording and mastering the set I played at the CCRMA Modulations show a few months back. Though I’ve been a drum and bass fan for many years, this year’s Modulations was the first time I’d mixed it for others. Hope you like it!

Modulations 2010
Drum & Bass | 40:00 | May 2010


Download (mp3, 82.7 MB)


1. Excision — System Check
2. Randomer — Synth Geek
3. Noisia — Deception
4. Bassnectar — Teleport Massive (Bassnectar Remix)
5. Moving Fusion, Shimon, Ant Miles — Underbelly
6. Brookes Brothers — Crackdown
7. The Ian Carey Project — Get Shaky (Matrix & Futurebound’s Nip & Tuck Mix)
8. Netsky — Eyes Closed
9. Camo & Krooked — Time Is Ticking Away feat. Shaz Sparks

Over the last few days this video has been quite a bombshell among my music-prone friends.

It’s called the Multi-Touch Light Table and it was created by East Bay-based artist/fidget-house DJ Gregory Kaufman. The video is beautifully put together, highlighting the importance of presentation when documenting new ideas.

I really like some of the interaction ideas presented in the video. Others, I’m not so sure about. But that’s all right: the significance of the MTLT is that it’s the first surface-based DJ tool that systematically accounts for the needs of an expert user.

Interestingly, even though it looks futuristic and expensive to us, interfaces like this will eventually be the most accessible artistic tools. Once multi-touch surfaces are ubiquitous, the easiest way to gain some capability will be to use inexpensive or open-source software. The physical interfaces created for DJing, such as Technics 1200s, are prosthetic objects (as are musical instruments), and will remain more expensive, because mechanical contraptions always will be. Now, that isn’t to say that in the future our interfaces won’t evolve to become digital, networked, and multi-touch sensitive, or even that their physicality won’t be replaced with a digital haptic display. But one of the initial draws of the MTLT—the fact of its perfectly flat, clean interactive surface—seems exotic to us right now, and in the near future it will be the default.

Check out this flexible interface called impress. Flexible displays just look so organic and, well, impressive. One day these kinds of surface materials will become viable displays, and they’ll mark a milestone in touch computing.

It’s natural to stop dancing between songs. The beat changes, the sub-rhythms reorient themselves, a new hook is presented and a new statement is made. But stopping dancing between songs is undesirable. We wish to lose ourselves in as many consecutive moments as possible. The art of mixing music is to fulfill our desire to dance along to continuous excellent music, uninterrupted for many minutes (or, in the best case, many hours) at a time. (Even if we don’t explicitly move our bodies to the music, when we listen our minds are dancing; the same rules apply.)

I don’t remember what prompted me to take that note, but it was probably not that the mixing was especially smooth.



A tomato hailing from Capay, California.

LHCSound is a site where you can listen to sonified data from the Large Hadron Collider. Some thoughts:

  • That’s one untidy heap of a website. Is this how it feels to be inside the mind of a brilliant physicist?
  • The name “LHCSound” refers to “Csound”, a programming language for audio synthesis and music composition. But how many of their readers will make the connection?
  • If they are expecting their readers to know what Csound is, then their explanation of the process they used for sonification falls way short. I want to know the details of how they mapped their data to synthesis parameters. (I’ve sketched one plausible mapping after this list.)
  • What great sampling material this will make. I wonder how long before we hear electronic music incorporating these sounds.
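Since the site leaves the mapping unexplained, here is a minimal sketch of the general technique, parameter-mapping sonification, where each data value is mapped linearly onto pitch. Every choice here (the pitch range, note length, and the made-up input values) is my own assumption, not LHCSound’s:

    import math, struct, wave

    RATE = 44100

    def sonify(values, lo_hz=200.0, hi_hz=2000.0, note_s=0.25, path="sonified.wav"):
        # Map each data value linearly onto a pitch, then render one short
        # sine tone per value into a mono 16-bit WAV file.
        vmin, vmax = min(values), max(values)
        span = (vmax - vmin) or 1.0
        frames = []
        for v in values:
            hz = lo_hz + (hi_hz - lo_hz) * (v - vmin) / span
            for i in range(int(RATE * note_s)):
                s = 0.5 * math.sin(2 * math.pi * hz * i / RATE)
                frames.append(struct.pack("<h", int(s * 32767)))
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(RATE)
            w.writeframes(b"".join(frames))

    sonify([7.2, 13.5, 4.4, 21.0, 9.9])  # made-up stand-ins for event data

Real sonifications presumably map many more parameters (timbre, duration, spatialization), which is exactly the detail I wish the site documented.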

The Immersive Pinball demo I created for Fortune’s Brainstorm:Tech conference was featured in a BBC special on haptics.

I keep watching the HTC Sense unveiling video from Mobile World Congress 2010. The content is pretty cool, but I’m more fascinated by the presentation itself. Chief marketing officer John Wang gives a simply electrifying performance. It almost feels like an Apple keynote.

The iFeel_IM haptic interface has been making rounds on the internet lately. I tried it at CHI 2010 a few weeks ago and liked it a lot. Affective (emotional haptic) interfaces are full of potential. IFeel_IM mashes together three separate innovations:

  • Touch feedback in several different places on the body: spine, tummy, waist.
  • Touch effects that are generated from emotional language.
  • Synchronization to visuals from Second Life.

All are very interesting. The spine haptics seemed a stretch to me, but the butterfly-in-the-tummy was surprisingly effective. The hug was good, but a bit sterile. Hug interfaces need nuance to bring them to the next level of realism.

The fact that the feedback is generated from the emotional language of another person seemed to be one of the major challenges—the software is built to extract emotionally-charged sentences using linguistic models. For example, if someone writes “I love you” to you, the haptic device on your tummy will react by creating a butterfly-like sensation. As an enaction devotee I would rather actuate a hug with a hug sensor. Something about the translation of words to haptics is difficult for me to accept. But it could certainly be a lot of fun in some scenarios!
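For fun, here is a toy sketch of that pipeline, with naive keyword spotting standing in for the real linguistic models; the body sites and effects are invented for illustration:

    # Toy keyword spotting standing in for iFeel_IM's linguistic models.
    # Sites and effects are invented for illustration.
    EFFECTS = {
        "love": ("tummy", "butterfly flutter"),
        "hug": ("waist", "slow squeeze"),
        "angry": ("spine", "sharp pulse"),
    }

    def haptic_effect(message: str) -> str:
        for keyword, (site, effect) in EFFECTS.items():
            if keyword in message.lower():
                return f"actuate {site}: {effect}"
        return "no effect"

    print(haptic_effect("I love you"))  # -> actuate tummy: butterfly flutter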

I’ve re-recorded my techno mix Awake with significantly higher sound quality. So if you downloaded a copy be sure to replace it with the new file!


Awake
Techno | 46:01 | October 2009


Download (mp3, 92 MB)


1. District One (a.k.a. Bart Skils & Anton Pieete) — Dubcrystal
2. Saeed Younan — Kumbalha (Sergio Fernandez Remix)
3. Pete Grove — I Don’t Buy It
4. DBN — Asteroidz featuring Madita (D-Unity Remix)
5. Wehbba & Ryo Peres — El Masnou
6. Broombeck — The Clapper
7. Luca & Paul — Dinamicro (Karotte by Gregor Tresher Remix)
8. Martin Worner — Full Tilt
9. Joris Voorn — The Deep

I recently started using Eclipse on OS X and it was so unresponsive, it was almost unusable. Switching tabs was slow, switching perspectives was hella slow. I searched around the web for a solid hour for how to make it faster and finally found the solution. Maybe someone can use it.

My machine is running OS X 10.5, and I have 2 GB of RAM. (This is important because the solution requires messing with how Eclipse handles memory. If you have a different amount of RAM, these numbers may not work and you’ll need to fiddle with them.)

  • Save your work and quit Eclipse.
  • Open the Eclipse application package by right-clicking (or Control-clicking) on Eclipse.app and select “Show Package Contents.”
  • Navigate to Contents→MacOS→, and open “eclipse.ini” in your favorite text editor.
  • Edit the line that starts with “-XX:MaxPermSize” to say “-XX:MaxPermSize=128m”.
  • Before that line, add a line that says “-XX:PermSize=64m”.
  • Edit the line that starts with “-Xms” to say “-Xms40m”.
  • Edit the line that starts with “-Xmx” to say “-Xmx768m”.
  • Save & relaunch Eclipse.
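For reference, after these edits the memory-related lines of my eclipse.ini look like this (again, tuned for 2 GB of RAM):

    -XX:PermSize=64m
    -XX:MaxPermSize=128m
    -Xms40m
    -Xmx768m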

Worked like a charm for me.


Visualizing everything (part one)

On 25, Feb 2010 | In space, visualization | By Dave

Not “everything” as in one-at-a-time, but as in everything at once. Macro. Meta. Big.

This first picture is a visualization of the entire history of the universe, recently produced from WMAP space probe data. WMAP’s mission is to listen to the faint reverberation that has been bouncing around since the Big Bang. Analysis of the WMAP data gives us information not only about the current size of the universe, but also about its size over time. Here’s what it found.

Time moves along the horizontal axis, and the size of the universe is drawn as the distance around that axis. It’s interesting to note that to express the size of the universe you only need a single up-slanting line (the top edge of the cone), but in this image the line is wrapped around the horizontal axis to generate the cone structure you see. This associates the linear measurement of “size” with our idea of three-dimensional “space”. I think this visualization device works well to make the calculated size of the universe seem more tangible and real.
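To see how the device works, here is a small sketch that revolves a made-up size-versus-time curve around the time axis to produce the cone; the expansion curve is a stand-in, not real WMAP data:

    import numpy as np
    import matplotlib.pyplot as plt

    # Revolve a 1-D "size of the universe" curve around the time axis.
    t = np.linspace(0.01, 13.7, 200)           # time since the Big Bang (Gyr)
    theta = np.linspace(0, 2 * np.pi, 100)     # angle of revolution
    T, TH = np.meshgrid(t, theta)
    R = 0.2 + T ** 0.6                         # stand-in expansion curve, not WMAP data
    Y, Z = R * np.cos(TH), R * np.sin(TH)

    ax = plt.figure().add_subplot(projection="3d")
    ax.plot_surface(T, Y, Z, alpha=0.4)        # the up-slanting line, swept into a cone
    ax.set_xlabel("time")
    plt.show()

The same trick, revolving a one-dimensional quantity around an axis, is what makes the WMAP figure read as “space” rather than as a graph.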

Now we have some idea of what the universe looks like over time. But if you disregard time and ask, what does the very largest superstructure of the visible universe look like?

Our best science says it looks like a giant morsel of luminescent bread.

It also looks strangely like a neural network, as my friend Aram pointed out. The spindles you see are billions of galaxies clumped and stretched together, called “filaments.” The dark spots are empty spaces containing nothing, called “voids,” and they have diameters of many bajillions of bajillions of whatever unit of distance you like. (Why the cube? I searched for a while and couldn’t find an explanation.)

Here’s a similar image, with map labels.

If you zoom in far enough to see (incomprehensibly giant) galaxies as single pixels, this is what you get.

How did mere humans come up with these images? They took Wittgenstein’s timeless advice: “Don’t think, look!”

Now we can visualize—we can appreciate—the magnitude of the two familiar dimensions of experience, space and time. The result is profound awe; there is really no other reaction one can have to the above images. On the other hand, these images don’t speak to the phenomenology of experience. They don’t depict the thoughts and processes that comprise our mental lives. For that we are going to need visual philosophy, which I’ll post about soon.


The meaning of ‘most’

On 03, Dec 2009 | In language | By Dave

William Shakespeare, who knew a thing or two about words, advised that “An honest tale speeds best, being plainly told.” But the exact meaning of plain language isn’t always easy to find. Even simple words like “most” and “least” can vary greatly in definition and interpretation, and are difficult to put into precise numbers.

Until now.

Thrilling!

In a groundbreaking new linguistic study, Prof. Mira Ariel of Tel Aviv University’s Department of Linguistics has quantified the meaning of the common word “most.” [The study] “is quite shocking for the linguistics world,” she says.

“I’m looking at the nature of language and communication and the boundaries that exist in our conventional linguistic codes,” says Prof. Ariel. “If I say to someone, ‘I’ve told you 100 times not to do that,’ what does ’100 times’ really mean? I intend to convey ‘a lot,’ not literally ’100 times.’ Such interpretations are contextually determined and can change over time.”

I’ve noticed that I exaggerate modally—I choose a number and run with it for a while. Currently it’s 5, as in, “I’ve told you 50 times; I had to wait for five hours.” I don’t mean some specific number, I just mean to use it as a placeholder for exaggeration purposes. There must be a term for this. Linguists?

When people use the word “most,” the study found, they don’t usually mean the whole range of 51-99%. The common interpretation is much narrower, understood as a measurement of 80 to 95% of a sample — whether that sample is of people in a room, cookies in a jar, or witnesses to an accident.

So many problems are caused when we try to communicate using words whose meanings we think we agree on, when actually we don’t agree at all. But Professor Mira Ariel is helping sort it out by empirically determining what it is that we mean. Wittgenstein showed that the meaning of words cannot extend beyond how they’re used, so empirical studies like this one can help us immensely. I’m betting this kind of research will also prove useful for artificial intelligence.

“‘Most’ as a word came to mean “majority” only recently. Before democracy, we had feudal lords, kings and tribes, and the notion of “most” referred to who had the lion’s share of a given resource — 40%, 30% or even 20%,” she explains. “Today, ‘most’ clearly has come to signify a majority — any number over 50 out of a hundred. But it wasn’t always that way. A two-party democracy could have introduced the new idea that ‘most’ is something more than 50%.”

I can’t tell from this short article whether Professor Ariel has done research to support her assertion that modern democracy really is the source for the lexical definition of “most” as meaning between 51% and 100%. But if true it’s pretty interesting because it shows that the word “most” may be political—that is, an expression of power or authority—rather than geometrical or mathematical, which is what I had always assumed.

Here’s the full article.


Tactility in the Tractatus Logico-Philosophicus

On 01, Jul 2009 | In books | By David Birnbaum

I’ve written before about the later writings of Wittgenstein and the metaphor of the word as a manual tool. However, in Ludwig’s first published work, the Tractatus Logico-Philosophicus, his theory of language is that sentences represent states of affairs, the so-called picture theory of language. Although he later abandoned that viewpoint for the tool-based one, I was intrigued by this historically significant switch-up, so I read through the Tractatus with special attention to its visual and tactile metaphors. Here are a few examples.

2.013
Each thing is, as it were, in a space of possible states of affairs. This space I can imagine empty, but I cannot imagine the thing without the space.
2.0131
A spatial object must be situated in infinite space. (A spatial point is an argument-place.)
A speck in the visual field, though it need not be red, must have some colour: it is, so to speak, surrounded by colour-space. Notes must have some pitch, objects of the sense of touch some degree of hardness, and so on.

This is a prime example of a sensory metaphor used throughout the book. Objects are described in a visual way, as being seen as situated within a possibility space or belief space. They themselves have extension, but we perceive them as taking up some amount of the visual field (surrounded by context, which here is represented as other possibilities for their position or physical attributes).

2.151
Pictorial form is the possibility that things are related to one another in the same way as the elements of the picture.
2.1511
That is how a picture is attached to reality; it reaches right out to it.
2.1512
It is laid against reality like a measure.
2.15121
Only the end-points of the graduating lines actually touch the object that is to be measured.

Again we get a visual metaphor described in terms of physicality. How is a picture “attached” to reality? It reaches out to it. And while it touches reality, it only just touches it, in a tangential way. Wittgenstein starts with a visual image and then writes “attached”, “reaches out”, “laid against”, and “touch”—all haptic metaphors.

2.1514
The pictorial relationship consists of the correlations of the picture’s elements with things.
2.1515
These correlations are, as it were, the feelers of the picture’s elements, with which the picture touches reality.

Now we have moved from the picture as a “measure,” a passive geometry tool, to a picture as an agent. Not just any agent, but an agent with a capacity for haptic perception. What is a “feeler”? To me that word means a mobile extremity with sense organs which can be used to find out about the world. So, what Wittgenstein seems to be saying here is that when we generate a picture in our mind, it’s as if we are extending our hand into the world.

4.002

Language disguises thought. So much so, that from the outward form of the clothing it is impossible to infer the form of the thought beneath it, because the outward form of the clothing is not designed to reveal the form of the body, but for entirely different purposes.

In other words, thoughts are like physical objects. A word envelops a thought. We have a thought and then we toss a word-robe over it and shove it onto the stage of discourse where it can interface with other enrobed thoughts.

4.411
It immediately strikes one as probable that the introduction of elementary propositions provides the basis for understanding all other kinds of proposition. Indeed the understanding of general propositions palpably depends on the understanding of elementary propositions.

Once again, a tactile metaphor (“palpably”) is used for emphasis and to indicate comprehensive understanding.

5.557

What belongs to its application, logic cannot anticipate.
It is clear that logic must not clash with its application.
But logic has to be in contact with its application.
Therefore, logic and its application must not overlap.

I.e., [diagram: “logic” and “application” drawn side by side, touching but not overlapping]

Is “overlap” a haptic metaphor or a visual one? It could be either, or both.

6.432
How things are in the world is a matter of complete indifference for what is higher. God does not reveal himself in the world.
6.4321
The facts all contribute only to setting the problem, not to its solution.
6.44
It is not how things are in the world that is mystical, but that it exists.
6.45
To view the world sub specie aeterni is to view it as a whole—a limited whole.
Feeling the world as a limited whole—it is this that is mystical.

To feel is to know, silently, mystically.

I was pretty surprised at how easy it is to foresee Wittgenstein’s turn from the eye to the hand. He presents what he calls a picture theory of language, but it repeatedly leads to descriptions of solid objects in space, of bodies moving and feeling and contacting each other. Of course I’m reading with a very biased perspective: not only is my goal to hunt for tactile metaphors, but I also know how the story ends some 30 years later. Still, I can’t help but feel that the tactile language in the Tractatus may foreshadow the shift to come.


Philosopher deathmatch, and how words are like tools

On 07, Sep 2008 | In books, language, tactility | By David Birnbaum

I just finished reading Wittgenstein’s Poker. From the jacket:

In October 1946, philosopher Karl Popper arrived at Cambridge to lecture at a seminar hosted by his legendary colleague Ludwig Wittgenstein. It did not go well: the men began arguing, and eventually, Wittgenstein began waving a fire poker toward Popper. It lasted scarcely 10 minutes, yet the debate has turned into perhaps modern philosophy’s most contentious encounter, largely because none of the eyewitnesses could agree on what happened. Did Wittgenstein physically threaten Popper with the poker? Did Popper lie about it afterward?

The authors provide a comprehensive biographical and historical context for the incident, and use it as a springboard into the two men’s respective philosophies. It’s an enjoyable look at two self-important, short-tempered intellectuals and their rivalry.

As I mentioned in this post, I find Wittgenstein’s philosophy of language often invokes touch themes. In the following excerpt from Poker (originating from one of his lectures), Wittgenstein makes a point about a colleague’s statement, “Good is what is right to admire,” utilizing a haptic metaphor:

The definition throws no light. There are three concepts, all of them vague. Imagine three solid pieces of stone. You pick them up, fit them together and you now get a ball. What you’ve now got tells you something about the three shapes. Now consider you have three balls of soft mud or putty. Now you put the three together and mold out of them a ball. Ewing makes a soft ball out of three pieces of mud. (68)

Another example stems from Wittgenstein’s midlife change in philosophical outlook. In his first publication, the Tractatus Logico-Philosophicus, he was preoccupied with the “picture theory of language”—the idea that sentences describe “states of affairs” that can be likened to the contents of a picture. Later, he developed a theory of language based on words as tools for conveying meaning. In my reading, he shifted from a vision-based to a haptic-based (in fact, a distinctly physical-interaction-based) understanding of how language works.

The metaphor of language as a picture is replaced by the metaphor of language as a tool. If we want to know the meaning of a term, we should not ask what it stands for: we should instead examine how it is actually used. If we do so, we will soon recognize that there is no underlying single structure. Some words, which at first glance look as if they perform similar functions, actually operate according to distinct sets of rules. (229)

Here’s the relevant passage directly from Philosophical Investigations:

It is like looking into the cabin of a locomotive. We see handles all looking more or less alike. (Naturally, since they are all supposed to be handled.) But one is the handle of a crank which can be moved continuously (it regulates the opening of a valve); another is the handle of a switch, which has only two effective positions, it is either off or on; a third is the handle of a brake-lever, the harder one pulls on it, the harder it brakes; a fourth, the handle of the pump: it has an effect only so long as it is moved to and fro. (PI, I, par. 12)

Words as the physical interface to meaning. Love it!


Wittgenstein

On 28, May 2008 | In books, tactility | By David Birnbaum

The philosophy of Ludwig Wittgenstein seems to come up often, so I decided to read up on what all the hoopla is about. This semi-biographical introduction to his major works is fascinating and easy to read. It seemed pretty comprehensive as well, though I’m no expert.

Wittgenstein was primarily concerned with logic and language, but I found his emphasis on know-how as opposed to know-that, and his view that skill supersedes knowledge of rules, to have a certain ‘embodiment’ quality. Excerpts:

So, language can indeed be said to be governed by rules; but those rules are for the most part only implicit in native speakers’ common usage. They can be derived from common usage by anyone who pays attention to it, but they are rarely operative in it: we do not normally use rules to work out what is correct. Rather, we have a fairly reliable ‘feeling’ for what sounds right in a given case. Rules can be formulated to codify our usage, but our usage is not ultimately based on such rules. (191)

Our traditional concept of pain, however, is not in competition with concepts that classify the same phenomena in terms of their underlying conditions. No physiologist could convince me that what in my own case I call ‘pain’ may not in fact be pain, or that my pain was in truth not where I felt it but in the brain. Why is this so?

Concepts are an expression of our interests. We group things together and call them by a common name according to those resemblances we find striking or important. And in different contexts we may be interested in different aspects of things. To classify phenomena scientifically, by their underlying structures or causes, is not always what we want. For instance, when taking an aesthetic attitude towards things, we are concerned entirely with their appearances. Invisible micro-structures become wholly irrelevant. And another area in which the scientific urge to leave behind the surface for underlying causes is often out of place is the realm of feeling: where our primary interest is in people’s conscious experience. Their suffering and well-being is important to us in its own right, and not merely as an indication of some underlying physiological conditions. Therefore physiological concepts like ‘lesion of tissue’ — whatever their importance for diagnosis and therapy — can never be in competition with, or act as substitutes for, our traditional concepts of feelings and emotions that are taught and understood through their links with natural expressive behaviour and characterized by the special authority we have in their first-person use. (254)
