Tuesday, 21 January 2014

Voxel Project (Part 4)

This week starts another 'wacky robot week', so I thought I would keep the blog updated as I go this time rather than posting a single update months down the line.

The core aim of the week is to get a cube capable of locomotion. That said, I have been thinking about the cube for the last few months and believe that not only could this be a very useful device for researching models within Darwinian Neurodynamics (and others), but it could also make an excellent educational 'toy'. For both applications we need a robot that is robust to continual use (unlike the previous version, with bits falling off) and easy to store, move and assemble. To this end I spent time sourcing new parts that would stand up to the wear and tear but require minimal tools to construct (i.e. 'build your own robot cube at home').

The image below shows the robot cube construction kit. Injection-moulded corners provide a smaller, firmer three-way corner; these are designed to fit 19mm flexible PVC pipe, which provides the movement and flex required and, being transparent, lets you see what the syringes are doing.



One advantage of the new pieces was a radical reduction in weight; because of this, a single syringe could be used along each cube edge without upsetting the dynamics. This saved the fiddly job of connecting syringes together and made construction much easier. Conveniently, the syringes and piston ends fit perfectly into the PVC pipe, letting you set exactly how far the syringes can extend and compress and ensuring each edge is identical (note: PVC pipe cutters would give much more exact cuts than the hobby knife). The process is now simply a matter of cutting identical pipe pieces for each of the 12 edges.


Each edge is made of two pieces: a small connector for the plunger end and a larger piece that covers the whole syringe, leaving a gap at the end for the hose to escape and allowing some flexibility. Each edge then connects to a reciprocal syringe on the control cube.

The edges are built into squares and finally into the cube. All the parts are a simple push fit, strong enough to hold the cube together during use but loose enough to allow some flex when needed and to be taken apart just as easily.



The weight loss of the new version allowed the robot to be run on just air rather than the hydraulic water system used for SpongeBot. This makes construction much easier and less messy. The dynamics are, however, slower and less accurate than with water, but I don't think this should be a major problem for the Darwinian Neurodynamics project: we are looking at temporal predictors, which will need to learn and adapt to this slow response anyway. It may also help the servos: previously, if a single servo stuck or drew too much current, the system would behave erratically and try to pull itself apart; with air there is enough give that the servos should not stall (in theory).

Putting the new cube through its paces, I am currently controlling it manually rather than through the control cube. It takes a while to juggle the syringes, and I can only control one edge at a time, but I did manage to record a full cube step: 3 minutes to cover 4.5cm. Not bad for a first baby step. The dynamics of this little guy have earned him the name StretchBot (son of SpongeBot).




The next stage is to get the control cube rigged up (I will try to update the control cube as well this week). Will keep you posted, and hopefully by the next video we'll be flying along.

A few pics:




Monday, 20 January 2014

Voxel Project (Part 3)

During the Snowdonia hackademia retreat in November 2013 the voxel project made a massive evolutionary jump. Virtually the entire week was spent on the manufacturing and construction of a mega control cube. This cube would power 12 servo syringe modules, each of which would be connected to a voxel cube edge.

Each servo syringe module followed the same oil-rig design first tested in the previous post 'Voxel Project (Part 2)'; however, the arm was extended to provide better power transfer and the Lego motors were replaced with high-torque servo motors. To allow for easy wiring, all twelve modules were attached together to form a 3 x 4 mega control cube. All of the servos were connected to an Adafruit servo controller board and from there to an Arduino Mega control board. A huge thanks goes to Boris and Rollo for all their help with the sawing, screwing and fiddly nut fastening that went into making each module.


The control cube now allowed twelve hydraulic syringes to be powered simultaneously, and with adjustment of the servo timing and rotation rate a smooth motion of all syringes could be maintained. One issue that did arise was that if a servo got stuck or drew too much current, all the other servos would behave erratically until the jam freed itself. The system also continually found its weakest point, with screws, nuts and Lego parts often working loose.
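For the curious, here is a minimal sketch of the kind of Arduino code that could sweep the twelve modules through the Adafruit board. The channel mapping, pulse ranges and sweep rate are my assumptions for illustration, not the actual retreat scripts.

```cpp
// Hypothetical control cube sketch: 12 servo syringe modules on an
// Adafruit 16-channel PWM servo driver, swept smoothly with a phase
// offset per edge. Channels 0-11 and the pulse range are assumptions.
#include <Wire.h>
#include <Adafruit_PWMServoDriver.h>

Adafruit_PWMServoDriver pwm = Adafruit_PWMServoDriver();  // default I2C address 0x40

const int NUM_EDGES = 12;
const int SERVO_MIN = 150;  // PWM counts at 50 Hz; calibrate per servo
const int SERVO_MAX = 450;

void setup() {
  pwm.begin();
  pwm.setPWMFreq(50);  // standard analogue-servo frequency
}

void loop() {
  float t = millis() / 1000.0;
  for (int edge = 0; edge < NUM_EDGES; edge++) {
    // Stagger each edge's phase so the syringes sweep in a smooth wave
    float phase = (2.0 * PI * edge) / NUM_EDGES;
    float s = 0.5 * (1.0 + sin(2.0 * PI * 0.1 * t + phase));  // 0..1 at 0.1 Hz
    int pulse = SERVO_MIN + (int)(s * (SERVO_MAX - SERVO_MIN));
    pwm.setPWM(edge, 0, pulse);
  }
  delay(20);
}
```

Staggering the phases also spreads out the total current draw, which may help with the stall-and-chaos issue described above.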

Unfortunately most of the week was spent building the control cube, and very little time was available to work on the actual robot cube itself. I did manage to put together codename: SpongeBot. This robot was constructed from the same dual-syringe setup used in codename Squeaky (see Voxel Project Part 1), but the corners were made from plumbing parts to give a more stable corner, and the edges were made from car-wash sponges to allow them to bend and flex as required. There was no time for actual experiments, but we were able to create Arduino scripts (many thanks to Chrisantha) to put both the control cube and SpongeBot through their paces. The video below shows SpongeBot on the left, the control cube in the centre and the laptop controlling the action on the right (the video was kindly made by Rollo and is time-lapsed, as the actual speed was a tad sedate).


Overall I am extremely happy with the progress made over the retreat and the SpongeBot robot. There was a huge amount learnt during the construction and testing that will be applied to the next generation. 

-- update --
Main things learnt that will need addressing:
a) Construction and transport - Both the robot and control cube look amazing and, in the case of the control cube, scary! However, transport has been an issue; an effort should be made to either shrink the control cube or at least make it disassemble into smaller parts.

b) Storage - On the same note, the cubes are currently connected together by hydraulic tubes; this means the whole apparatus has to stay rigged up in one piece and takes up considerable space.

c) Hydraulics - For the SpongeBot cube we needed to use hydraulics, as air was simply too compressible to get nice fluid movement (if any), but this meant that the syringes and pipes had to be filled with water and then connected up to the control cube syringes. Inevitably air entered the system, which in turn reduced the effectiveness of the syringes. It would be nice to have some form of reservoir system to remove trapped air. During storage and movement there was also a chance of water leaking out, so an emptying and top-up process would be nice.

d) SpongeBot components - The plumbing parts used to make SpongeBot made a considerable difference compared with the original squash-ball setup, providing much-needed stability. However, these were also easily damaged during transport and storage. The other issue was that once a piston had been removed from its syringe, it never sealed as well as before and often leaked; it would be nice to have a setup where the syringes could be left intact during construction. Finally, the sponges used for the edges provided a nice flexible edge but, as can be seen in the video, hamper any form of locomotion: they are deeper than the corners, so the movement simply centres around them.

To do:
a) Create a new cube capable of locomotion
b) Enhancements to the control cube to rectify some of the problems above

A few pics:





Tuesday, 5 November 2013

Arduino for psychophysics 

Chrisantha Fernando 

I've been thinking about spending this week on a quite different project: some psychophysics experiments with an Arduino, an array of LEDs and a button. Nothing too complicated, but I want the system to log the data and send it over serial to the computer for collection.

The idea is to replicate and extend the experiment linked below (their Experiment 2), which looks at the reaction time (RT) of subjects required to identify whether an LED on one OR the other side of a fixation point is active. This can be contrasted with RT in AND and XOR tasks, with two stimuli to the left and right of a fixation point. As the discrimination becomes more complex, how does the RT scale, and can this be explained by parallel or serial processing?

http://www.indiana.edu/~psymodel/papers/townoz95.pdf
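As a starting point, here is a hypothetical sketch of the simplest OR condition: flash one of two LEDs after a variable foreperiod, time the button press with millis(), and stream one CSV row per trial over serial. The pin numbers and timing parameters are placeholders of mine, not taken from the paper.

```cpp
// Hypothetical OR-task reaction-time sketch: one LED either side of
// fixation, a single response button, serial logging to the PC.
const int LED_LEFT = 2;
const int LED_RIGHT = 3;
const int BUTTON = 4;  // wired to ground, using the internal pull-up

void setup() {
  pinMode(LED_LEFT, OUTPUT);
  pinMode(LED_RIGHT, OUTPUT);
  pinMode(BUTTON, INPUT_PULLUP);
  Serial.begin(9600);
  Serial.println("trial,side,rt_ms");
  randomSeed(analogRead(A0));  // unconnected pin as a noise source
}

void loop() {
  static long trial = 0;
  delay(random(1000, 3000));              // variable foreperiod
  int side = random(2);                   // 0 = left, 1 = right
  int pin = side ? LED_RIGHT : LED_LEFT;

  digitalWrite(pin, HIGH);
  unsigned long onset = millis();
  while (digitalRead(BUTTON) == HIGH) {}  // wait for the button press
  unsigned long rt = millis() - onset;
  digitalWrite(pin, LOW);

  // One CSV row per trial, collected on the computer over serial
  Serial.print(trial++); Serial.print(",");
  Serial.print(side ? "R" : "L"); Serial.print(",");
  Serial.println(rt);
}
```

The AND and XOR conditions would then just change which LEDs light and which response counts as correct, with the same logging format.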

Arduino has already been used for this purpose here... 


http://code.google.com/p/arduino-v-neusci/wiki/ControlPWM

http://code.google.com/p/arduino-v-neusci/wiki/Documentation

Basically I need a variant of the Posner test.

http://en.wikipedia.org/wiki/Posner_cueing_task
http://openeuroscience.wordpress.com/tutorials/human-psychophysics-using-arduino/



Modifying Sensory Dynamics and Morphologies - pt.1

Background.

This project has arisen from my interests around sensory experience. The few years I spent working with people with disability before I came to ac(k)ademia gave me a certain perspective on the multitude of ways in which humans interface with the world on a sensorimotor level.

My recently completed MSc on 'Evolving Behaviour through Sensorimotor Contingencies' gave me a chance to explore my fascination with this topic from a robotics perspective, as framed by Dr Fernando's and Prof Szathmáry's theory of Darwinian Neurodynamics.


Furthermore, an invitation to the eSMCs summer school on embodiment and morphological computation gave me the opportunity to learn about the role of the physical body in interactions with the world.


So, taking a line of enquiry that seeks to explore how sensorimotor contingencies and morphology affect behaviour - particularly what one finds 'interesting' during autonomous explorations in the world - I shall be modifying people's sensory dynamics and morphologies before releasing them into the wilds of Snowdonia.



Sensorimotor contingencies and active sensing/perception.

A sensorimotor contingency is a lawful relation between the sensor and motor functions of a system, specifically the manner in which a sense is dependent upon the motor actions or routines affecting it.

An early definition comes from J. Kevin O'Regan and Alva Noë:
"[...] the structure of the rules governing the sensory changes produced by various motor actions, that is, what we call the SMCs..." (2001, p. 941).
Sensorimotor contingencies are specific to each sense, according to how we use that sense. Vision is exercised through the rapid eye movements one uses to scan the visual field, as well as the head, neck and body positioning that together determine a motor routine specific to the act of seeing. Smell is exercised through the subsumption of normal automatic breathing in favour of a regulated breathing pattern that allows one to explore the field of odours present.

The structure of the relations between sensor and motor functions then describes the nature of particular sensorimotor contingencies. Furthermore, the role of motor actions in a continuous exploration of the senses suggests an intricate link between action and perception ...an active sensing of the world.


It can also be observed that we are still able to sense when we are neither acting nor performing motor routines, e.g. sensing the wind against one's skin, or hearing. This touches on the afferent nervous system and attentional processes, a discussion I do not wish to jump into at this point; I simply bring the idea into play to highlight the perspective from which my research here is approached. That is, the process of active sensing is perhaps most salient in the fact of motor exploration in the world.


Of course, audition is the sense which, above all others, may be said to be performed better by not acting but rather by paying attention. Though the movements of the head and pinnae, and the motor action of the outer hair cells - allowing active control of auditory gain (Roberts & Rutherford, 2008; Thomas & Vater, 2004) - are certainly constitutive of the sensorimotor contingencies around audition, the sense of audition is experienced as "remarkably divorced from the subject's ongoing motor activity" (Schroeder et al., 2010).


It is this dichotomy between audition and the other senses in respect of active perception that I wish to remedy! 


Here's how...



Modifying the dynamics of audition.

If there is a tendency to be more still, to move less, to take a stationary position in order to listen then how would the reverse be? That is, what if to listen is to move, to scan the soundfield with successive motor movements, perhaps even something of a routine like that of the saccadic eye movements that constitute active vision and which allow one to form a more complete image of the world by rapidly shifting focus around the visual field?

I am going to modify the dynamics of audition according to motion in a manner which makes this reverse scenario a possibility.

System and signal path.

Kit needed:
  • binaural microphones/earphones: Roland CS-10EM
  • binaural microphone preamp & recording device: Zoom H4N
  • duplex stereo audio interface: Behringer UCA202
  • audio processing: Raspberry Pi (Rpi) running Pure Data
  • motion detection: accelerometer/gyroscope/magnetometer somewhere on the subject's body


The microphones in the earpieces of the Rolands capture the soundfield from the natural perspective of the human head. This signal is preamplified and recorded for later analysis by the Zoom before being sent through the Behringer interface into the Rpi to be processed in Pure Data. The signal is then sent from Pure Data out of the Rpi, back through the Behringer, and to the earphones of the Rolands.

System dynamics.

If no processing is performed on the signal passing through Pure Data then audition may be experienced as 'normal', while accounting for any filtering, colouration or latency observed in the audio signal path. This mode may be used as a ground truth for subjects' experience with the system. 

In order that the subjects can experience audition as a predominantly active sense there are a number of ways that the audio may be processed according to motion (as measured from the subject's head or torso).

A few modified audition-motion contingencies to start:
  • a general non-directional measure of motion can be taken, with a lower value of motion resulting in a higher degree of masking of the stereo audio signal by white/pink/brown noise.
  • a 2-dimensional measure of motion (left-right, forward-backward): motion toward the left or right reduces the noise masking on that side of the stereo image, starting from the centre and working toward the periphery as a greater angle is covered by the motion. The degree of masking is inversely proportional to the velocity of the motion, so that continued forward motion produces an auditory fovea of sorts.
  • a multi-dimensional measure of motion (up to 9D possible in theory), with similar motion-masking relations as in the above scenario but the audio processed with a spatial audio algorithm.
The practical feasibility of the higher dimensional scenarios rests largely on the processing power of the Rpi. This will be tested in due course.
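As a concrete illustration of the first contingency above, here is a minimal C++ sketch of the intended gain law; the rest/full motion thresholds are assumptions of mine to be tuned per subject, and the real-time version would live inside the Pure Data patch rather than in standalone code.

```cpp
// Minimal sketch of the non-directional contingency: a motion magnitude
// mapped inversely to a noise-masking gain, so stillness buries the
// microphone signal in noise and movement uncovers it. Threshold values
// are made-up placeholders.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Motion magnitude from a 3-axis accelerometer, gravity already removed.
double motionMagnitude(double ax, double ay, double az) {
  return std::sqrt(ax * ax + ay * ay + az * az);
}

// Map motion in [rest, full] to a noise gain in [1, 0].
double noiseGain(double motion, double rest = 0.05, double full = 1.5) {
  double x = (motion - rest) / (full - rest);
  return 1.0 - std::clamp(x, 0.0, 1.0);
}

int main() {
  for (double m : {0.0, 0.2, 0.8, 1.5, 3.0}) {
    // Per-sample mix would be: out = (1 - g) * signal + g * noise
    std::printf("motion %.2f -> noise gain %.2f\n", m, noiseGain(m));
  }
}
```

In the Pure Data patch the same curve would simply drive a crossfade between the binaural microphone signal and the noise source.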

Progress and things to work on...

After spending the best part of a week getting a stable Raspberry Pi setup running full-duplex stereo audio, with Pure Data processing the audio, I am now at the very exciting stage of working on the audio algorithms which will govern the dynamics of the modified audition-motion contingencies. There are plenty of good spatial algorithms out there for Pure Data to look at and base my work on.

As well as exploring some of the examples above I might also look at processing the signal with some sort of temporal shifting to give a sense of sonic memory. I imagine this could be similar to the idea of vision as an external memory being built up with saccadic eye movements around the field, as espoused by O'Regan and Noë (2001). 


At this stage I imagine the experimental procedure will be prepared for by encouraging subjects to take a silent walk through the woods while voluntarily engaging in a process of open listening: simply allowing their attention to be drawn to whatever sounds are most salient, whatever appears most interesting, before any thoughts about what those sounds might mean arise. This could be done as a group or in pairs perhaps. The field of acoustic ecology (see: Westerkamp, Truax, Schafer) offers rich methodologies for this kind of activity.


Analysis of the actual modified listening session could be performed against a ground truth of the subjects using the modified audition system with no processing performed on the signal. Much more thought is needed on how to analyse the data: live commentary performed by the subjects, or commentary post-session?


Next post... more of this + modified morphologies!



Bibliography.


O'Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and brain sciences, 24(5), 939-972.

Thomas, J. A., Moss, C. F., & Vater, M. (Eds.). (2004). Echolocation in bats and dolphins. University of Chicago Press.


Roberts, W. M., & Rutherford, M. A. (2008). Linear and nonlinear processing in hair cells. Journal of Experimental Biology, 211(11), 1775-1780.



Schroeder, C. E., et al. (2010). Dynamics of active sensing and perceptual selection. Current Opinion in Neurobiology, 20(2), 172-176.

Monday, 4 November 2013

Voxel Project (Part 2)

Control System

In the original VoxCAD simulator there are four materials: soft, hard, and two which inflate by 20% of their original volume but out of phase with each other. For the single-cube project we want to be able to control each cube edge independently so that we can explore what dynamics a cube can accomplish. This does, however, somewhat complicate the control system: we now need to be able to control 12 edges via a PC and to set the extension value (or rate of expansion) for each edge.

The cube itself is designed with hydraulic syringes, so we need some mechanism to apply and reduce pressure in the system. The obvious choice is again to use syringes connected to some form of motor control. There are many commercial systems available, mostly used in the medical profession, but these are large and expensive machines designed for delivering exact millilitre doses and not practical for our requirement.

On the hobby scale there are a few examples of syringe pumps; these mostly use a threaded bar and guide-rail system to pull the syringe plunger back and forth (see this YouTube syringe pump example).

The quick prototype I have below is based on an oil-rig system of rotating wheel and pulley bar. It does take a lot of torque to move the syringe forward, and the original Lego Mindstorms motor does not have the power required (although this is not what it was designed for). We have some high-torque servos which should hopefully have the power to move the mechanism. The design does have the advantage that there are few components, joined by simple dowel joints; the required parts could easily be manufactured in bulk on the 3D printer.
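Once the servos arrive, a test sketch along these lines should let the extension value for one module be set from the PC, as the control system above requires. The pin number and angle limits are assumptions to be calibrated against the actual syringe travel.

```cpp
// Hypothetical single-module test: the PC sends an angle 0-180 over
// serial and a high-torque servo drives the oil-rig arm to match.
#include <Servo.h>

Servo syringeServo;
const int SERVO_PIN = 9;

void setup() {
  syringeServo.attach(SERVO_PIN);
  Serial.begin(9600);
  syringeServo.write(90);  // start at mid travel
}

void loop() {
  while (Serial.available() > 0) {
    long v = Serial.parseInt();
    if (Serial.read() == '\n') {           // consume the terminator
      int angle = constrain((int)v, 0, 180);
      syringeServo.write(angle);           // rotate the crank wheel
    }
  }
}
```

Sending e.g. "45" from the serial monitor would then set the arm a quarter of the way through its sweep.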






More designs to come :)

Tuesday, 29 October 2013

Voxel Project

Introduction to Voxels

Traditional robots are typically composed of hard, fixed components linked together by hinges and joints and powered by motors and gears. These have had success in many areas but lack the adaptability and precision that we see in the natural world; the simplest illustration is to try listing the robots capable of lifting an egg and placing it in a saucepan. In recent years work has progressed on a new range of 'soft body' robots that use new composite materials and manufacturing methods to create more biologically inspired robots. The most notable of these is the robot tentacle [1], which has shown workable examples of potential use in bomb disposal, surgical tools and cleanup tasks in hazardous environments. As impressive as this design is, it is still crafted by the human hand and can only ever be as good as its human inventor.

A parallel line of research is into evolving soft-body robots within virtual worlds. These are able to utilise a number of simulated materials and produce designs optimally evolved for a given fitness function within their environment. Karl Sims is probably the father of this field, producing evolved creatures back in 1994 [2]; hardware and software have moved on since then, and newer techniques commonly use the VoxCAD voxel simulator as a testbed. A voxel is a simulated 3D cube that can be given particular material properties (stiffness, stretch, periodic inflation); these voxels are stacked together, and due to the periodically inflating nature of some of them, the whole structure can move.


Hackademia Project

Voxels within a simulated environment can be evolved to perform a range of motion tasks and can be optimised via a fitness function to generate designs optimised for speed, energy use or specific environmental factors. However, can an evolved solution within the simulated environment perform equally well when that design is copied over to the real world?

To test this hypothesis we are first going to need some voxels. Within the simulator there are typically four materials used: hard and soft (both passive), a material with a periodic volume increase of 20% over 500 time steps, and a second with the same 20% periodic volume increase but out of phase.

Hard and soft are relatively easy to manufacture; a periodic volume increase is much more of an issue. The biggest problem is to create a cube that will increase in volume yet still retain its cuboid shape and not simply turn into a balloon. The second issue is how to get all faces of the cube to expand equally while still allowing the certain amount of deformation required for motion to happen. The final issue is how to get a consistent, simultaneous inflation/deflation across all of the in-phase and out-of-phase cubes.

Below are some of the prototypes that I have been working on:

1.0 : Voxels to cuboids

After much experimenting with different ideas, the only way to retain the cube shape was to inflate the design along the cube's edges. The first of many simplifications was to assume that the overall structure would only be one voxel deep, in effect a 2D collection of voxels. This meant I only had to worry about inflating the front face, but to retain stability I would also inflate the rear face, leaving the internal edges fixed; so rather than a cube we more technically have a cuboid design.

The first step is to be able to push out the required edges, achieved after a little trip to B&Q and a raid on the Lego pneumatics tray.


Syringes were cut down to reduce the size of the cuboid, which in turn reduces the amount the cuboid needs to be inflated. The next step is to link the edges together; this time, a trip to JD Sport.

With squash balls (blue spot for maximum hang time), elastic bands and dowels to complete the design we can build a full cuboid.



[ After discussions with Chris Jack on how to control a collection of 25 of these cuboids (the minimum required for a locomotion design) and a good discussion with Chrisantha on the project direction, the decision was made to concentrate on controlling a single cube. This allows me to fully understand the dynamics of the problem and how to effectively scale up and link the cubes in future. It also allows us to explore the rather interesting property of controlling each edge of the cube independently and seeing what dynamics we can achieve and control with 12 degrees of freedom. ]

2.0  Codename: Squeaky
The first task was to move from cuboid to full cube. For this I created a quick mockup allowing expansion in all three dimensions so that I could check for issues and work on the control system for independently controlling each of the 12 edges. [ Mental note: dowels, syringes and zip ties make an awful squeaky racket! ]

Awaiting a few connectors so that I can complete this project, but a few issues have arisen: with the extra flexibility introduced by the third dimension, the design has a habit of rotating on the horizontal plane and collapsing upon itself. This may simply be due to unequal pressure from the elastic bands or too much flexibility in the squash balls. The design will work for testing a control system as long as the cube does not go over its collapse threshold, but a better connection method is top of the to-do list.


To Do:
1) New connection method (codename: ******** - well that would give it away!)
2) Control system prototype (control 12 edges independently)



[1] http://www.seas.harvard.edu/suo/papers/279.pdf
[2] http://creativemachines.cornell.edu/sites/default/files/ALIFE10_Hiller.pdf
[3] http://creativemachines.cornell.edu/sites/default/files/GECCO09_Hiller.pdf

Monday, 26 August 2013

Just published: From Blickets to Synapses. How brains do causal inference.

http://onlinelibrary.wiley.com/doi/10.1111/cogs.12073/abstract?deniedAccessCustomisedMessage=&userIsAuthenticated=false

From Blickets to Synapses: Inferring Temporal Causal Networks by Observation


Keywords:

  • Causal inference;
  • Rational process model;
  • Neuronal replicator hypothesis;
  • Polychronous groups;
  • Backwards blocking;
  • Screening-off

Abstract

How do human infants learn the causal dependencies between events? Evidence suggests that this remarkable feat can be achieved by observation of only a handful of examples. Many computational models have been produced to explain how infants perform causal inference without explicit teaching about statistics or the scientific method. Here, we propose a spiking neuronal network implementation that can be entrained to form a dynamical model of the temporal and causal relationships between events that it observes. The network uses spike-time dependent plasticity, long-term depression, and heterosynaptic competition rules to implement Rescorla–Wagner-like learning. Transmission delays between neurons allow the network to learn a forward model of the temporal relationships between events. Within this framework, biologically realistic synaptic plasticity rules account for well-known behavioral data regarding cognitive causal assumptions such as backwards blocking and screening-off. These models can then be run as emulators for state inference. Furthermore, this mechanism is capable of copying synaptic connectivity patterns between neuronal networks by observing the spontaneous spike activity from the neuronal circuit that is to be copied, and it thereby provides a powerful method for transmission of circuit functionality between brain regions.
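For readers unfamiliar with the behavioural-level rule the abstract alludes to, here is a toy C++ sketch of Rescorla-Wagner learning demonstrating forward blocking. The paper's spiking-network implementation (STDP, transmission delays, heterosynaptic competition) is of course far richer; all parameter values here are made up for illustration.

```cpp
// Toy Rescorla-Wagner demonstration of forward blocking: after cue A
// alone predicts the outcome, the compound AB adds little strength to B.
#include <cstdio>

int main() {
  double V[2] = {0.0, 0.0};   // associative strengths for cues A and B
  const double alpha = 0.3;   // learning rate
  const double lambda = 1.0;  // outcome magnitude when the reward occurs

  auto trial = [&](bool a, bool b) {
    double prediction = (a ? V[0] : 0.0) + (b ? V[1] : 0.0);
    double error = lambda - prediction;  // prediction error
    if (a) V[0] += alpha * error;        // only present cues update
    if (b) V[1] += alpha * error;
  };

  for (int i = 0; i < 30; i++) trial(true, false);  // phase 1: A -> outcome
  for (int i = 0; i < 30; i++) trial(true, true);   // phase 2: AB -> outcome
  std::printf("V_A = %.2f, V_B = %.2f (B is blocked)\n", V[0], V[1]);
}
```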
Read it, or don't. 
Chrisantha