Monday, March 26, 2012

www.braincomputerinterfaces.org

I've just started http://www.braincomputerinterfaces.org to help bridge the gap between brain research and the actual deployment of BCI for real customers. The purpose is:

- To allow researchers to share their work and see what else is being done
- To allow industry to see what research is being done, and whether anything can be applied
- To allow the BCI industry to share their work with integrators (like IBM)
- To allow integrators to get a flavour of what tech is out there, and what is coming

I can only find one similar community, a group on LinkedIn (http://www.linkedin.com/groups?gid=1103077), but it is a private group not searchable via Google, and some people do not have LinkedIn profiles, so I think it's worth providing another space for debate.

Please do contribute to the conversation!

Monday, December 19, 2011

"I'm a bus" Part 1 - The idea

I'm working on an exciting little project at the moment, with some colleagues of mine from Hursley, which could have an impact on how people travel to work...

More than 2,500 people work at IBM Hursley, so there is a minibus which picks people up from nearby Chandler's Ford three times each morning and returns them at night. A while ago we instrumented the bus with a GPS tracking device and a button, which together let people see its whereabouts on a map on their smartphone, along with how many free seats were on board.

This greatly helps reduce the number of people driving to work and back in their own cars every day. But this is just one route - it would be impossible to provide a minibus from all the many locations that people come from to reach Hursley! But what if the cars coming in and out of Hursley were also able to act as minibuses, and pick up other IBMers along the way? Then we would have mini-minibuses coming from Winchester, Portsmouth, Alton and many other places!

So the idea of the "I'm a Bus" scheme is that car drivers with a smartphone can record their journey to and from work and upload it as a bus route on the "I'm a Bus" website. Every day, when they leave home, their route becomes active, showing the car's current location. Passengers who need a lift in to work can stand at a safe place on the route and request a lift via their own phone. The driver's smartphone will announce, using voice technology, that "Joe Bloggs has requested a lift in 1000 metres on Hursley Road", and will display a picture of them from our internal employee directory so they can be recognised. The shared journey will be recorded, and the website will show the environmental savings (in terms of miles saved and carbon dioxide) that you, and everyone else combined, have made.
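
To make the pickup alert concrete, here's a rough Python sketch of how the driver's phone might decide when to announce a waiting passenger. Everything here is illustrative - the function names, the 1000-metre alert radius and the coordinates are my own assumptions, not the real system:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6371000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_pickups(car_pos, requests, alert_radius_m=1000):
    """Build announcements for any passenger within the alert radius."""
    announcements = []
    for name, (lat, lon), road in requests:
        d = haversine_m(car_pos[0], car_pos[1], lat, lon)
        if d <= alert_radius_m:
            announcements.append(
                f"{name} has requested a lift in {int(d)} metres on {road}")
    return announcements

# Example: a car on Hursley Road approaching a waiting passenger
car = (51.0270, -1.3980)
requests = [("Joe Bloggs", (51.0285, -1.3890), "Hursley Road")]
for msg in check_pickups(car, requests):
    print(msg)  # in the real app this string would go to text-to-speech
```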

This means people can measure their contribution to lowering greenhouse gas emissions and develop a wider network as they pick up new faces en route to work. As the driver, you can also specify a preferred contribution, which the passenger will see prior to requesting a lift. This might be a "cup of coffee", or a suggested contribution per mile.
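
The savings calculation itself is simple arithmetic. A toy sketch, assuming a rough average emission figure for a petrol car (the real system would use proper per-vehicle figures):

```python
# Assumed figure: roughly what an average petrol car emits per mile.
KG_CO2_PER_MILE = 0.27

def savings(shared_journey_miles):
    """Total passenger-miles saved and the CO2 that avoids, in kg."""
    miles = sum(shared_journey_miles)
    return miles, miles * KG_CO2_PER_MILE

miles, co2 = savings([8.5, 8.5, 12.0])  # three shared journeys
print(f"{miles} miles saved, about {co2:.1f} kg of CO2")
```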

Of course, once this is running, there are lots of enhancements which could be made, including sharing traffic and weather information from web feeds or from other drivers' journeys, sharing planned routes as well as impromptu ones, and providing a full list of possible options for getting around (e.g. ties with bus, train and taxi services).

So... this is the *idea*. Turning it into a working system is something we've been working on for a while. It will rely on good, intuitive technology that doesn't require lots of learning, and it will need enough people to use it to make it effective. In my next post, I will reveal how we're getting on!

Friday, April 22, 2011

Kinect Libraries

Today I've been looking around at the libraries available for Kinect development. There's lots of hacking going on and so I wanted to create a list of libraries so as not to get confused. I've made it public in case it's of use to others (and feel free to correct me if I have made an error).

OpenKinect http://openkinect.org
This is the main page for the OpenKinect libraries for Linux, OS X and Windows. There is a high-level API and wrappers for Python, C, ActionScript, C++, C#, Java, JavaScript and Lisp.
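
For example, grabbing a depth frame through the Python wrapper is only a few lines. This assumes libfreenect and its Python bindings are installed, and a Kinect is plugged in; the synchronous helper API shown here is the simplest way in:

```python
import freenect
import numpy as np

# Grab a single depth frame via the synchronous helper API
depth, timestamp = freenect.sync_get_depth()

# depth is a 480x640 numpy array of raw 11-bit readings
# (lower = closer; 2047 means "no reading")
print(depth.shape, depth.dtype)

# e.g. locate the nearest valid point in view
masked = np.where(depth < 2047, depth, 2047)
y, x = np.unravel_index(np.argmin(masked), masked.shape)
print(f"Nearest point at pixel ({x}, {y}), raw depth {depth[y, x]}")
```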

DepthJS http://depthjs.media.mit.edu/
An extension for Chrome/Safari from the MIT Media Lab that allows JavaScript access to the Kinect. Unfortunately only the source is available and it's not a simple install at the moment, but it looks promising.

SensorKinect http://www.primesense.com/ https://github.com/avin2/SensorKinect
PrimeSense are the company behind the PrimeSensor, the product on which the Kinect was based. They have drivers (called PrimeSense/Sensor) which convert the sensor's raw images into depth maps, and they have also created a driver for the Kinect, called SensorKinect.

OpenNI - http://openni.org
OpenNI supplies one set of APIs to be implemented by the sensor devices, and another set to be implemented by the middleware components. The OpenNI User Guide (PDF) is a very interesting read.

NITE - http://www.primesense.com/?p=515
This is "middleware" made by PrimeSense, which plugs into the OpenNI framework

FAAST - http://projects.ict.usc.edu/mxr/faast/
The Flexible Action and Articulated Skeleton Toolkit sits on top of NITE and provides tracking of whole skeletons (Windows only).

openNI Kinect http://www.ros.org/wiki/openni_kinect
This focuses on the integration of the Kinect with ROS (Robot Operating System).

I will keep updating as I discover more!

Monday, October 18, 2010

Emotiv Headset and Locked In Syndrome

In March, a salesman working for IBM had a stroke which left him completely paralysed - unable to use his muscles or to speak. His brain, however, is working fine: a condition called Locked-In Syndrome. His means of communicating is with his eyes - looking up for yes, and down for no. He has to wait for someone to ask him if he'd like to speak before being able to do so. Then, using a letter chart, that someone must point at letters one by one until a confirmation is received - giving the first letter of the sentence. The process must be repeated for every further letter until the full sentence is spelt out.

The salesman in question is called Shah, and Sarah (my wife) is his Occupational Therapist (OT). She saw the mind-reading headset from Emotiv that I was using in IBM and thought about the potential it could have for Shah. The headset was designed for the gaming industry; it measures facial expressions and excitement/boredom levels, and can be trained to listen for particular thoughts, which can then perform an action on the computer.

Sarah asked me to demonstrate it to the consultant, speech therapist, psychologist and hospital ward manager - who agreed it might have some potential, and took it to the ethics committee. Shah was informed about the headset and what it can do - but also that we weren't aware of anyone with the condition who had tried it before. Shah, being a bit of a techy, was up for pioneering it - so I met him last week for his first go.

Just like the many other people I have demonstrated the headset with, Shah was instantly able to train the system so that one action - the "push" action - would push the floating "Emotiv cube" into the distance. As we trained, his ability to push (and stop pushing) at will improved.

Adding a second action adds a complication. Now the unit has to distinguish not just whether you are thinking something or nothing, but which thought you are having - much harder! And, as for anyone, it will take a bit of time to practise getting good enough at this for reliable control. However, Shah is up for the challenge, and has a fantastically supportive family who are willing to help him train.

Last week, after one week of use, Shah had managed to train the headset very well - though when tired the Emotiv skill can go down as well as up. Working with the speech therapist, we have connected the output of the headset to the input of "The Grid 2" - a piece of software by Sensory Software which allows a user, normally via eye tracking, keyboard, mouse or switches, to control their environment, write emails and surf the internet. We have initially set up three menu items; thinking "push" starts highlighting these options in turn at 10-second intervals, and thinking "push" again selects the highlighted option. Sounds easy perhaps - but if I tell you *not* to think of a rabbit, you can't help but think of a rabbit! It's hard to think about when to start and stop thinking, and then to switch to the thought you are supposed to be thinking! I think this will take some practice to achieve, but once achieved it could be widened to many more options with very little extra practice.

Often people ask about using the headset for people with different conditions, and in different places. With Shah being a fellow IBMer who likes gadgets, his OT being my wife, and his particular medical condition, conditions were ideal to try out the headset. It will take time for us all to get it right - but for now it is looking good. I will try to keep you updated!

Thursday, May 27, 2010

Wearable computing devices

Time to do a bit of blogging again. I've recently been asked about various wearable computing devices... here's a starter list, but I'd be interested in any other links you may have.

AR Glasses: These plug into a phone and can overlay data such as directions and emails whilst travelling.

Gesture Gloves: Very accurate hand movement recognition.

Gesture Watch: Accelerometer, temperature and pressure sensors; it can also act as a wireless hub for other sensors.

Sixth Sense (wearable video/projector): Gesture recognition, ability to detect objects and augment real objects with data by projecting back on to them. Very cool.

Emotiv Headset: Brain reading device built for gamers which reads facial expression, excitement/engagement, trained actions and head movement.

NeuroSky Mindset: Brain reading device which measures excitement/engagement. The chip has the ability to support additional sensors.

Mobile Phone Jewellery: Article from 10 years ago about IBM going into jewellery that can be used as a mobile phone.


We have used some of these sensors in the ETS Lab... more in a follow up post!

Saturday, February 14, 2009

X200 Laptop from IBM On Demand Community


IBM set up the On Demand Community programme some years ago. It provides money or IBM products to eligible community organisations and schools where IBM employees and retirees are actively volunteering (measured by recording hours in an online system), in support of specific projects.

Four years ago, the 1st Chandler's Ford BB DofE Group applied for some funding for training and received over £500. Training does not come cheap: some Walking Group Leader courses run at around £400 per person, with the assessment at a similar cost. Our first aid courses are run by St John Ambulance in Eastleigh, who help us by asking only a relatively small donation for a two-day course. Leaders generally have to pay for this themselves, so it was good that, thanks to the contribution from IBM, the group could subsidise some of this training.

At the end of 2008 we applied for an IBM technology grant, and were very happy to be awarded a new X200 laptop. With it, we will be able to maintain records of participants, leaders, training materials and other resources needed to run the group, as well as use it for route planning with MemoryMap and for helping young people access the DofE website when they need advice on completing the Skills, Volunteering and Physical sections of the Award.

We run the Award by charging participants as little as possible, to prevent young people from missing out on opportunities due to the cost. This means, however, that we are completely reliant on donations to "enhance" the group in the way this donation from IBM has, and will. The BB owns one laptop which we can borrow for route planning, but with six groups attempting to plan routes on the same computer, it makes a big difference to have our own! So a public thanks again to IBM for this donation.

Tuesday, January 13, 2009

Using Brain Computer Interfaces for something useful

There is now a whole collection of Brain Computer Interfaces around... and this latest one, from the toy company Mattel, uses the power of concentration to make a ping-pong ball hover. The technology is supplied by NeuroSky, who have also provided the technology for a Star Wars game - $50-$100 for the fun of using "brain waves to allow players to manipulate a sphere within a clear 10-inch-tall training tower..." - something I could imagine myself doing just the once. Nevertheless, it's quite good to see the technology coming down in price.

What fascinates me however is how we might be able to use brain waves for real personal benefit and integrate it more with our every day computer interaction experience. People who have severe motor difficulties could use it as a rehabilitative aid. Performing product research, it can be used for detecting when people are excited or not when they experience a particular product or website. When we are browsing on-line, we could combine it with eye tracking to provide feedback on what really interests us, and then tailor our online experience accordingly. When we go to the shops... the cinema... driving in the car... feedback can be provided to enterprise systems in all sorts of situations, which in turn can affect the environment around us. I'm really looking forward to talking with a number of customers about how they can integrate this and other interface technology in their own innovative projects in 2009!