Tuesday, 5 November 2013

Arduino for psychophysics 

Chrisantha Fernando 

I've been thinking about spending this week on a quite different project: some psychophysics experiments with an Arduino, an array of LEDs and a button. Nothing too complicated, but I want the system to log the data and send it over serial to the computer for collection. 

The idea is to replicate and extend the following experiment, which looks at the reaction time of subjects required to identify whether an LED on one side OR the other of a fixation point is active. This can be contrasted with RT in AND and XOR tasks, with two stimuli left and right of the fixation point. As the discrimination becomes more complex, how does the RT scale, and can that scaling be explained by parallel or serial processing? 
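To make the logical structure of the three tasks concrete, here is a minimal Python sketch of the trial logic and expected responses. This is a host-side illustration rather than Arduino code, and the function names are my own:

```python
import random

def expected_response(left_on, right_on, task):
    """Return True if the subject should respond 'target present' for this stimulus."""
    if task == "OR":
        return left_on or right_on
    if task == "AND":
        return left_on and right_on
    if task == "XOR":
        return left_on != right_on
    raise ValueError(f"unknown task: {task}")

def generate_trials(task, n_trials, seed=None):
    """Build a randomised trial list of (left_on, right_on, go) tuples,
    where 'go' is the correct response under the given task."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        left, right = rng.random() < 0.5, rng.random() < 0.5
        trials.append((left, right, expected_response(left, right, task)))
    return trials
```

On the Arduino side the same truth-table logic would decide whether a given LED pattern is a target, with `millis()` timestamps at stimulus onset and button press giving the RT to log over serial.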


Arduino has already been used for this purpose here... 



Basically I need a variant of the Posner test.


Modifying Sensory Dynamics and Morphologies - pt.1


This project has arisen from my interests around sensory experience. The few years I spent working with people with disability before I came to 'ac(k)ademia' gave me a certain perspective on the multitude of ways in which humans interface with the world on a sensorimotor level.

My recently completed MSc on 'Evolving Behaviour through Sensorimotor Contingencies' gave me a chance to explore my fascination with this topic from a robotics perspective, as framed by Dr Fernando's and Prof Szathmáry's theory of Darwinian Neurodynamics.

Furthermore, an invitation to the eSMCs summer school on embodiment and morphological computation gave me the opportunity to learn about the role of the physical body in interactions with the world.

So, taking a line of enquiry that seeks to explore how sensorimotor contingencies and morphology affect behaviour - particularly what one finds 'interesting' during autonomous explorations in the world - I shall be modifying people's sensory dynamics and morphologies before releasing them into the wilds of Snowdonia.

Sensorimotor contingencies and active sensing/perception.

A sensorimotor contingency is a lawful relation between the sensor and motor functions of a system; specifically, the manner in which a sense is dependent upon the motor actions or routines affecting it.

An early definition comes from J. Kevin O'Regan and Alva Noë:
"[...] the structure of the rules governing the sensory changes produced by various motor actions, that is, what we call the SMCs..." (2001, p. 941).
Sensorimotor contingencies are specific to each sense, according to how we use that sense. Vision depends on the rapid eye movements one uses to scan the visual field, as well as the head, neck and body positioning that together determine a motor routine specific to the act of seeing. Smell depends on the subsumption of normal automatic breathing in favour of a regulated breathing pattern that allows one to explore the field of odours present.

The structure of the relations between sensor and motor functions then describes the nature of particular sensorimotor contingencies. Furthermore, the role of motor actions in a continuous exploration of the senses suggests an intricate link between action and perception ...an active sensing of the world.

It can also be observed that we are still able to sense when we are neither acting nor performing motor routines, e.g. sensing the wind against one's skin, or hearing. This touches on a discussion of the afferent nervous system and attentional processes that I do not wish to jump into at this point; I simply bring the idea into play to highlight the perspective from which my research here is conducted. That is, the process of active sensing is perhaps most salient in the fact of motor exploration in the world.

Of course, audition is the sense which, above all others, may be said to be performed better by not acting, but rather paying attention. Though the movements of the head and pinnae and the motor action of the outer hair cells - allowing active control of auditory gain (Roberts & Rutherford, 2008; Thomas & Vater, 2004) - are certainly constitutive of the sensorimotor contingencies around audition, the sense of audition is experienced as "remarkably divorced from the subject's ongoing motor activity" (Schroeder et al., 2010).

It is this dichotomy between audition and the other senses in respect of active perception that I wish to remedy! 

Here's how...

Modifying the dynamics of audition.

If there is a tendency to be more still, to move less, to take a stationary position in order to listen then how would the reverse be? That is, what if to listen is to move, to scan the soundfield with successive motor movements, perhaps even something of a routine like that of the saccadic eye movements that constitute active vision and which allow one to form a more complete image of the world by rapidly shifting focus around the visual field?

I am going to modify the dynamics of audition according to motion in a manner which makes this reverse scenario a possibility.

System and signal path.

Kit needed:
  • binaural microphones/earphones: Roland CS-10EM
  • binaural microphone preamp & recording device: Zoom H4N
  • duplex stereo audio interface: Behringer UCA202
  • audio processing: Raspberry Pi (Rpi) running Pure Data
  • motion detection: accelerometer/gyroscope/magnetometer somewhere on the subject's body


The microphones in the earpieces of the Rolands capture the soundfield from the natural perspective of the human head. This signal will be preamplified and recorded for later analysis by the Zoom before being sent to the Behringer interface and into the Rpi to be processed in Pure Data. The processed signal is then sent from Pure Data out of the Rpi, through the Behringer, and back to the earphones of the Rolands.

System dynamics.

If no processing is performed on the signal passing through Pure Data then audition may be experienced as 'normal', allowing for any filtering, colouration or latency introduced by the audio signal path. This mode may serve as a ground truth for subjects' experience with the system. 

In order that the subjects can experience audition as a predominantly active sense there are a number of ways that the audio may be processed according to motion (as measured from the subject's head or torso).

A few modified audition-motion contingencies to start:
  • a general non-directional measure of motion, with a lower value of motion resulting in a higher degree of masking of the stereo audio signal by white/pink/brown noise.
  • a 2-dimensional measure of motion (left-right, forward-backward), with motion toward the left or right reducing the noise masking on that side of the stereo image, starting from the centre and working toward the periphery as a greater angle is covered by the motion. The degree of masking is inversely proportional to the velocity of the motion, so that continued forward motion produces an auditory fovea of sorts.
  • a multi-dimensional measure of motion (up to 9D possible in theory), with similar motion-masking relations as in the above scenario but the audio processed with a spatial audio algorithm.
The practical feasibility of the higher dimensional scenarios rests largely on the processing power of the Rpi. This will be tested in due course.
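As a first sketch of how the masking gains in the first two contingencies might be computed, here is a Python illustration. The function names, the sign convention (positive angle = rightward motion) and the normalisation are my own assumptions, not the final Pure Data implementation:

```python
import math

def mask_level(motion, max_motion=1.0):
    """Non-directional contingency: less motion -> more noise masking.
    Returns a masking gain in [0, 1], where 1 means fully masked by noise."""
    m = min(max(motion, 0.0), max_motion) / max_motion
    return 1.0 - m

def stereo_mask(angle_rad, velocity, spread=math.pi / 2):
    """2-D contingency sketch: motion toward one side unmasks that side of
    the stereo image from the centre outward, while masking overall falls
    with velocity. Returns (left_mask, right_mask)."""
    cleared = min(abs(angle_rad) / spread, 1.0)  # fraction of hemifield swept
    base = max(0.0, 1.0 - velocity)              # faster motion -> less masking
    side = base * (1.0 - cleared)                # the swept side opens up
    other = base                                 # the opposite side stays masked
    if angle_rad >= 0:                           # assumed: positive = rightward
        return other, side
    return side, other
```

In Pure Data these gains would simply scale a noise generator mixed with each channel of the live signal.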

Progress and things to work on...

After spending the best part of a week getting a stable Raspberry Pi setup running full-duplex stereo audio with Pure Data processing the audio, I am now at the very exciting stage of working on the audio algorithms that will govern the dynamics of the modified audition-motion contingencies. There are plenty of good spatial algorithms out there for Pure Data to look at and base my work on.

As well as exploring some of the examples above I might also look at processing the signal with some sort of temporal shifting to give a sense of sonic memory. I imagine this could be similar to the idea of vision as an external memory being built up with saccadic eye movements around the field, as espoused by O'Regan and Noë (2001). 
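A simple way to prototype that temporal shifting is a feedback delay line. The sketch below (sample-by-sample Python, class and parameter names hypothetical) mixes the live signal with an attenuated echo of its own past output:

```python
from collections import deque

class SonicMemory:
    """Ring-buffer feedback delay line for the 'sonic memory' idea:
    the output is the live sample plus an attenuated echo from
    delay_samples steps ago, which itself decays over repeats."""

    def __init__(self, delay_samples, feedback=0.5):
        self.buffer = deque([0.0] * delay_samples, maxlen=delay_samples)
        self.feedback = feedback

    def process(self, sample):
        delayed = self.buffer[0]           # oldest sample in the buffer
        out = sample + self.feedback * delayed
        self.buffer.append(out)            # write output back so echoes decay
        return out
```

In Pure Data the equivalent would be a `delwrite~`/`delread~` pair with a feedback gain, possibly with the delay time itself driven by the motion sensor.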

At this stage I imagine the experimental procedure will be prepared for by encouraging subjects to take a silent walk through the woods while engaging in a process of open listening: simply allowing their attention to be drawn to whatever sounds are most salient, whatever appears most interesting, before any thoughts about what those sounds might mean arise. This could be done as a group, or perhaps in pairs. The field of acoustic ecology (see: Westerkamp, Truax, Schafer) offers rich methodologies for this kind of activity.

Analysis of the modified listening session could be performed against a ground truth of the subjects using the modified audition system with no processing performed on the signal. Much more thought is needed on how to then analyse the data. Live commentary performed by the subjects, or commentary post-session?

Next post... more of this + modified morphologies!


O'Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and brain sciences, 24(5), 939-972.

Thomas, J. A., Moss, C. F., & Vater, M. (Eds.). (2004). Echolocation in bats and dolphins. University of Chicago Press.

Roberts, W. M., & Rutherford, M. A. (2008). Linear and nonlinear processing in hair cells. Journal of Experimental Biology, 211(11), 1775-1780.

Schroeder, C. E., Wilson, D. A., Radman, T., Scharfman, H., & Lakatos, P. (2010). Dynamics of active sensing and perceptual selection. Current Opinion in Neurobiology, 20(2), 172-176.

Monday, 4 November 2013

Voxel Project (Part 2)

Control System

In the original VoxCAD simulator there are four materials: soft, hard, and two which expand by 20% of their original size but out of phase with each other. For the single-cube project we want to be able to control each cube edge independently so that we can explore what dynamics a cube can accomplish. This does, however, somewhat complicate the control system: we now need to be able to control 12 edges via a PC and to set the extension value (or rate of expansion) for each edge.
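For the PC link, one option is a small framed serial protocol carrying all 12 edge extensions per message. The sketch below is a hypothetical encoding of my own (start byte, 12 data bytes, checksum), not a finished design:

```python
def encode_edge_command(extensions):
    """Pack 12 per-edge extension values (0.0-1.0) into a framed byte
    message for the PC -> controller serial link: 0xFF start byte,
    12 data bytes quantised to 0-254, then a modulo-256 checksum."""
    if len(extensions) != 12:
        raise ValueError("a cube has 12 edges")
    data = bytes(min(max(int(e * 254), 0), 254) for e in extensions)
    checksum = sum(data) % 256
    return bytes([0xFF]) + data + bytes([checksum])

def decode_edge_command(msg):
    """Inverse of encode_edge_command; validates framing and checksum
    and returns the 12 extension values."""
    if len(msg) != 14 or msg[0] != 0xFF:
        raise ValueError("bad frame")
    data, checksum = msg[1:13], msg[13]
    if sum(data) % 256 != checksum:
        raise ValueError("checksum mismatch")
    return [b / 254 for b in data]
```

Reserving 0xFF for framing keeps resynchronisation simple if a byte is dropped on the wire; the microcontroller end would decode each frame and map the 12 values to its motor outputs.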

The cube itself is designed around hydraulic syringes, so we need some mechanism to apply and reduce pressure in the system. The obvious choice is again to use syringes connected to some form of motor control. There are many commercial systems available, mostly used in the medical profession, but these are large and expensive machines designed for delivering exact millilitre doses and are not practical for our requirement.

On the hobby scale there are a few examples of syringe pumps; these mostly use a threaded bar and guide-rail system to pull the syringe plunger back and forth. YouTube syringe pump example

The quick prototype I have below is based on an oil-rig style system of rotating wheel and pulley bar. It takes a lot of torque to move the syringe forward, and the original Lego Mindstorms motor does not have the power required (although this is not what it was designed for). We have some high-torque servos which should hopefully have the power to move the mechanism. The design has the advantage that there are few components, and these are joined by simple dowel joints. The required parts could easily be manufactured in bulk using the 3D printer.

More designs to come :)