This video shows the low-power SCAMP3 vision chip system tracking and counting multiple objects. The APRON environment was used to develop the tracking algorithm and to simulate it running on the device.
Apologies for the boring bits in this video; the bugs just won’t do what I tell them to. Anyway, the task was to use a SCAMP3 device to count bugs entering the camera’s field of view. I only own one bug, but the algorithm will count multiple bugs. You can see the count in the top right.
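The on-chip algorithm itself isn’t shown, but the core counting step — threshold the frame, label connected bright regions, count the labels — can be sketched off-chip in a few lines. This is purely an illustrative sketch (the `count_blobs` helper and its threshold are my own invention), not the SCAMP3 implementation:

```python
import numpy as np
from collections import deque

def count_blobs(frame, threshold=128):
    """Count connected bright regions (4-connectivity) in a grayscale frame."""
    mask = frame > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                count += 1                      # found an unvisited blob
                q = deque([(y, x)])
                seen[y, x] = True
                while q:                        # flood-fill the whole blob
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# Two separate bright blobs in an otherwise dark frame
frame = np.zeros((8, 8), dtype=np.uint8)
frame[1:3, 1:3] = 255
frame[5:7, 5:7] = 255
print(count_blobs(frame))  # 2
```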
APRON was involved with several projects at Telluride 2010. In particular it was used for a sensory fusion project, where a robot had to learn the relationships between visual, audio and motor spaces. This short clip shows the robot learning the relationship between its arm and its neck, using vision as the stimulus.
This video shows an attendee interacting with the robot head to gain an intuitive understanding of how various aspects of the neural model were developing.
A quick video showing some of the PlayStation Eye capabilities being used in APRON. The screen-capture software is a little slow (and consumes a great deal of CPU), but frame rates at QVGA can easily reach 180 FPS, even while non-linear transformations and Sobel edge detection are being calculated.
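For reference, a Sobel pass of the kind mentioned above can be sketched with plain NumPy. This is a generic illustration (the `sobel_magnitude` name is mine), not APRON’s plug-in code:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                   # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for dy in range(3):                         # accumulate shifted, weighted copies
        for dx in range(3):
            patch = img[dy:dy + h - 2, dx:dx + w - 2]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy)

# Vertical step edge: magnitude is zero in flat areas, large at the edge
img = np.zeros((5, 5))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```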
I’ve added support for the PlayStation Eye camera. Why? Good question! This cheap little camera packs a powerful punch and bears comparison with expensive industrial vision sensors. It can capture 320×240 images at around 180 FPS (!), it has a very low-noise sensor, and all of its parameters are fully controllable. That sure beats a webcam at 28 FPS. It is also capable of some image transforms itself, such as translation, rotation, and scaling, plus some cool ones like lens correction. Features such as white balance, gain, and exposure can all be set in real time.
The APRON plug-in is actually a wrapper for the “free for commercial use” driver by CodeLaboratories, and I must say they have done a superb job.
Most of the APRON algorithms included in the demos will use this camera from now on (just check the filenames). This feature has really increased the power of APRON as image capture was traditionally the bottleneck. Not any more!
APRON is capable of performing spiking neural network simulations. Here we see six interconnected layers of Izhikevich neurons, with various projections between the layers. The input stimulus comes from a webcam. Although I’ve no idea what the model is actually doing, it looks nice nonetheless.
I should point out that the screen-capture software used here is not recording at the full rate, which makes the video look choppy. The performance of this algorithm is limited by the frame rate of the camera.
In benchmarks it takes approximately 2 ns to update a neuron (2.0 GHz Intel Core 2 Duo).
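For readers curious what a single neuron update involves, here is a minimal sketch of the standard Izhikevich model (v′ = 0.04v² + 5v + 140 − u + I, u′ = a(bv − u), with a reset to c and u += d when v reaches 30 mV). The parameters are the published regular-spiking defaults, the Euler step and drive current are my choices, and this is not APRON’s implementation:

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step of the Izhikevich model (regular-spiking parameters)."""
    fired = v >= 30.0               # spike threshold
    v = np.where(fired, c, v)       # reset membrane potential
    u = np.where(fired, u + d, u)   # bump recovery variable
    dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
    du = a * (b * v - u)
    return v + dt * dv, u + dt * du, fired

# Drive a single neuron with constant current and count spikes over 1 s
v, u = np.full(1, -65.0), np.full(1, -13.0)
spikes = 0
for _ in range(1000):
    v, u, fired = izhikevich_step(v, u, I=np.full(1, 10.0))
    spikes += int(fired[0])
```

The same update is trivially vectorised over whole layers, which is how per-neuron costs in the nanosecond range become plausible on a desktop CPU.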
APRON can also handle colour images. Those two extra dimensions of data can be really handy for visualisation and segmentation. Check out the video below. Also shown are some more features of the APRON environment.
Here is a video of APRON executing a self-organising map. It’s trivialised to highlight certain features of the APRON simulation environment. The main feature here is that APRON can explode developing receptive fields implemented with LinkMaps, so you can see them adapt as stimuli are presented. Unlike a normal SOM, this approach limits the inhibitory radius, creating a patchy response layer.
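As a rough idea of what a SOM update step looks like, here is a generic sketch with a shrinking Gaussian neighbourhood. It deliberately does not reproduce the limited inhibitory radius described above, and all names, grid sizes, and learning-rate schedules are illustrative, not APRON’s:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), dim=3, epochs=200, lr0=0.5, sigma0=3.0):
    """Train a small self-organising map with a shrinking Gaussian neighbourhood."""
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    coords = np.stack([gy.ravel(), gx.ravel()], axis=1).astype(float)
    w = rng.random((grid[0] * grid[1], dim))   # random initial weights
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)                  # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5      # shrinking neighbourhood width
        x = data[rng.integers(len(data))]      # random training sample
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))        # best-matching unit
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)  # grid distance to BMU
        h = np.exp(-dist2 / (2 * sigma ** 2))              # neighbourhood kernel
        w += lr * h[:, None] * (x - w)         # pull neighbourhood toward sample
    return w

data = rng.random((500, 3))  # random RGB-like samples in [0, 1]
w = train_som(data)
```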
Check out this video of real-time optic flow being calculated in the APRON environment. The approach is a basic block-matching algorithm, but APRON lets you interactively analyse and debug the running algorithm.
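A basic block-matching step of this kind can be sketched as an exhaustive sum-of-absolute-differences search over a small window. This is a generic illustration — the block and search sizes are arbitrary, and it is not the algorithm running in the video:

```python
import numpy as np

def block_match_flow(prev, curr, block=8, search=4):
    """Estimate per-block motion by exhaustive SAD search in a small window."""
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block].astype(int)
            best, best_dv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ny, nx = y + dy, x + dx
                    if ny < 0 or nx < 0 or ny + block > h or nx + block > w:
                        continue                # candidate falls outside the frame
                    cand = curr[ny:ny + block, nx:nx + block].astype(int)
                    sad = np.abs(ref - cand).sum()  # sum of absolute differences
                    if best is None or sad < best:
                        best, best_dv = sad, (dy, dx)
            flow[by, bx] = best_dv
    return flow

# A bright square shifted 2 px right between frames
prev = np.zeros((16, 16), dtype=np.uint8)
curr = np.zeros((16, 16), dtype=np.uint8)
prev[4:8, 4:8] = 255
curr[4:8, 6:10] = 255
flow = block_match_flow(prev, curr)
```

The exhaustive search is O(block² · search²) per block, which is why real-time versions lean on small search windows or hardware parallelism.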