Thursday, August 30, 2012

Robots with vision piece together damaged coral

The deep waters west of Scotland are characterized by large reef-forming corals that provide homes to thousands of animals. But these Scottish corals are threatened by bottom fishing, which damages and kills large areas of reef.

At present, the only solution to the problem is to employ scuba divers to reassemble the coral fragments on the reef framework. However, the method has had only limited success because divers cannot spend long periods underwater, nor can they reach the depths of over 200 meters at which some of the deep-sea coral grows.

Now, however, researchers at Heriot-Watt University (Edinburgh, Scotland) are embarking on a project that will see teams of scuba divers replaced by a swarm of intelligent robots.

The so-called "Coralbots" project is a collaborative effort led by Dr. Lea-Anne Henry from the School of Life Sciences in partnership with Professor David Corne from the School of Mathematical and Computer Sciences and Dr. Neil Robertson and Professor David Lane from the School of Engineering and Physical Sciences.

Their idea is to use small autonomous robots to seek out coral fragments and re-cement them to the reef. To help the robots do just that, their on-board computers will distinguish the fragments from other objects in the sea using object recognition software that is currently under development.
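
The article gives no details of how that software will work, but a minimal sketch of one classical approach -- color-based blob detection with OpenCV -- gives a flavor of the kind of processing involved. The HSV thresholds, area limits and file names below are illustrative guesses on my part, not values from the Coralbots project:

```python
# Minimal sketch of fragment detection by color and size, assuming
# coral fragments are distinguishable from the seabed by hue.
import cv2
import numpy as np

def find_coral_fragments(bgr_image,
                         hsv_lo=(0, 80, 80), hsv_hi=(15, 255, 255),
                         min_area=200, max_area=20000):
    """Return bounding boxes of blobs whose color and size
    roughly match a coral fragment."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Clean up speckle noise before looking for connected blobs
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if min_area < cv2.contourArea(c) < max_area]

frame = cv2.imread("seabed.png")  # hypothetical input frame
for x, y, w, h in find_coral_fragments(frame):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("candidates.png", frame)
```

A fielded system would, of course, need far more robust features than raw color, given how light attenuates underwater, but the detect-then-localize structure would be similar.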

If the researchers can realize their goals, swarms of such robots could be deployed immediately after a hurricane, or in a deep area known to be impacted by trawling, and rebuild a reef in days or weeks.

While it might seem pretty ambitious, the folks at Heriot-Watt have got plenty of experience in underwater robotics and signal and image processing. At the university's Ocean Systems Lab, they have previously developed obstacle avoidance and automatic video analysis algorithms, as well as autonomous docking and pipeline inspection systems.

The team of researchers working on the new project is supported by Heriot-Watt Crucible Funding which is specifically designed to kick-start ambitious interdisciplinary projects.

Reference: "Underwater robots to 'repair' Scotland's coral reefs," BBC Technology News.

Wednesday, August 29, 2012

Competition time

Since its launch in 1990, the Hubble Space Telescope has beamed hundreds of thousands of images back to Earth, shedding light on many of the great mysteries of astronomy.

But of all the images produced by the instruments on board the telescope, only a small proportion are visually attractive, and an even smaller number are ever seen by anyone outside the small groups of scientists who publish them.

To rectify that, the folks at the European Space Agency (ESA) decided to hold a contest challenging members of the general public to take never-before-published images from Hubble's archives and make them more visually captivating through the use of image processing techniques.

This month, after sifting through more than 1,000 submissions, ESA decided on the winner of its Hubble's Hidden Treasures competition -- a chap by the name of Josh Lake from the USA, who submitted a stunning image of NGC 1763, part of the N11 star-forming region in the Large Magellanic Cloud.

Lake produced a two-color image of NGC 1763 that contrasts the light from glowing hydrogen and nitrogen. The image is not in natural colors: hydrogen and nitrogen emit almost indistinguishable shades of red light, so Lake processed the images to pull the two apart into blue and red, dramatically highlighting the structure of the region.
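
For those curious about the technique, here is a minimal sketch of that kind of two-color compositing: two narrowband exposures of the same field are mapped to opposite ends of the spectrum, so that nearly identical shades of red become separable. The file names and the contrast stretch are assumptions for illustration, and which line ends up red and which blue is likewise a guess:

```python
import cv2
import numpy as np

def stretch(img):
    """Simple percentile stretch to [0, 1] for display."""
    lo, hi = np.percentile(img, (1, 99))
    return np.clip((img.astype(np.float32) - lo) / (hi - lo + 1e-9), 0, 1)

# Hypothetical narrowband exposures of the same field
h_alpha = stretch(cv2.imread("ngc1763_halpha.png", cv2.IMREAD_GRAYSCALE))
n_ii    = stretch(cv2.imread("ngc1763_nii.png", cv2.IMREAD_GRAYSCALE))

# Hydrogen drives one end of the spectrum and nitrogen the other;
# green is a blend, so overlapping emission reads as neutral.
# Note that OpenCV stores channels in B, G, R order.
bgr = np.dstack([n_ii, 0.5 * (h_alpha + n_ii), h_alpha])
cv2.imwrite("ngc1763_twocolor.png", (255 * bgr).astype(np.uint8))
```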

Through the publicity gained from the exercise, the organizers of the competition have undoubtedly attracted numerous people to the Hubble web site to see the many other spectacular images produced by the other folks who entered the contest.

Here at Vision Systems Design, I'd like to emulate the success of the Hubble's Hidden Treasures competition by inviting systems integrators to email me any astonishing images that they may have taken of their very own vision systems in action.

My "Vision Systems in Action" competition may not come with any prizes, but I can promise that the best images that we receive will be published in an upcoming blog, providing the winners with lots of publicity and, potentially, a few sales leads as well.

If you do decide to enter, of course, please take the time to accompany any image you submit with a brief description of the vision system and what it is inspecting. Otherwise, you will be immediately disqualified!

The "Vision Systems in Action" competition will close on September 15, 2012. You can email your entries to me at andyw@Pennwell.com.

Tuesday, August 21, 2012

Hiding from the enemy

Camouflage is widely used by the military to conceal personnel and vehicles, enabling them to blend in with their background environment or to resemble anything other than what they really are.

In modern warfare, however, a growing number of sensors can 'see' in parts of the spectrum where people cannot. Designing camouflage that works across a wide variety of terrains, and that remains effective across the visible, ultraviolet, infrared and radar bands of the electromagnetic spectrum, is therefore crucial.

One way to do this is to examine how the natural camouflage of animals enables them to hide from predators by blending in with their environment, and then mimicking those very same techniques using man-made materials.

Thinking along such lines, a team of researchers from Harvard University (Cambridge, MA, USA) announced this month that they have developed a rather interesting system that allows soft robots inspired by creatures like starfish and squid to camouflage themselves against a background.

To create the camouflage, the researchers use 3-D printers to create fine micro-channels in sheets of silicone, which they then use to dress the robots. Once the robots are covered with the sheets, the researchers can pump colored liquids into the channels, causing the robots to mimic the colors and patterns of their environment.
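
The paper describes the pumping hardware rather than any vision pipeline, but it's fun to sketch what the vision side of a self-camouflaging robot might look like: sample the dominant background color from a camera frame, then pick the closest match from a small palette of available dyes. The palette, the file name and the k-means approach are all my own inventions for illustration:

```python
import cv2
import numpy as np

# Available dye colors as BGR values -- illustrative only
DYES = {"sand": (180, 200, 210), "kelp": (40, 90, 60),
        "rock": (90, 90, 95)}

def dominant_color(bgr_image, k=3):
    """Cluster pixels with k-means and return the largest cluster's mean."""
    pixels = bgr_image.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    counts = np.bincount(labels.ravel(), minlength=k)
    return centers[counts.argmax()]

def nearest_dye(color):
    """Pick the dye whose BGR value is closest to the target color."""
    return min(DYES, key=lambda d: np.linalg.norm(np.array(DYES[d]) - color))

frame = cv2.imread("background.png")       # hypothetical camera frame
print(nearest_dye(dominant_color(frame)))  # e.g. "sand"
```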

The system's camouflage capabilities aren't limited to visible colors, however. By pumping heated or cooled liquids into the channels, the robots can also be thermally camouflaged. What's more, by pumping fluorescent liquids through the micro-channels, the silicone sheets wrapped around the robots can also be made to glow in the dark.

According to Stephen Morin, a postdoctoral fellow in the Department of Chemistry and Chemical Biology at Harvard University, there is an enormous amount of spectral control that can be exerted with the system. In the future, he envisages designing color layers with multiple channels that can be activated independently.

Dr. Morin believes that the camouflage system that the Harvard researchers have developed will provide a test bed that will help researchers to answer some fundamental questions about how living organisms most efficiently disguise themselves.

For my money, however, it might be more lucrative to see if the camouflage could be deployed to help the military hide its personnel in the field more effectively.

Reference: "Camouflage and Display for Soft Machines," Science, 17 August 2012, Vol. 337, No. 6096, pp. 828-832.

Friday, August 17, 2012

The tender trap

One of the great advantages of being the head editorial honcho here at Vision Systems Design magazine is that I'm able to spend a great deal of my time visiting systems builders who develop image processing systems that are deployed to inspect products in industrial environments.

During the course of my conversations with the engineers at these companies, I'm always intrigued to discover -- and later convey to the readers of our magazine -- how they integrate a variety of hardware components and develop software using commercially available image processing software packages to achieve their goals.

Although it's always intellectually stimulating to hear how engineers have built such systems, what has always interested me more are the reasons why engineers choose to use the hardware or software that they do.

Primarily, of course, such decisions are driven by cost. If one piece of hardware, for example, is less expensive than another and will perform adequately in any given application, then it's more likely than not to be chosen for the job.

The choice of software, on the other hand, isn't always down to just the price of the software package itself. If a small company has invested time and money training its engineers to create programs using one particular software development environment, it's highly likely that that same software will be chosen time after time for the development of any new systems. The cost involved in retraining engineers to learn a new package might simply be too high, even though the package might offer some technical advantages.

To ensure that they do not get trapped with outmoded software, however, engineering managers at systems builders need to meet with a number of image processing software vendors each year -- including the one that they currently use -- and ask them to provide an overview of the strategic direction that they plan to take in forthcoming years.

If it becomes clear during such a meeting that there is a distinct lack of such direction on the software vendor's part, then those engineering managers should consider training at least one of their engineers to use a new package that might more effectively meet the demands of their own customers in the future.

Certainly, having attended more than a few trade shows this year, it's become fairly obvious to me which software vendors are investing their own money in the future and which are simply paying lip service to the task. And if you don't know who I'm talking about, maybe you should get out more.

Monday, August 13, 2012

My big head

It's been known for quite some time that the overall size of an individual's brain correlates with how intelligent he or she is. More specifically, it's been discovered that brain size accounts for about 6.7 percent of individual variation in intelligence.

More recent research has pinpointed the brain's lateral prefrontal cortex, a region just behind the temple, as a critical hub for high-level mental processing, with activity levels there predicting another 5 percent of variation in individual intelligence.

Now, new research from Washington University in St. Louis suggests that another 10 percent of individual differences in intelligence can be explained by the strength of the neural pathways connecting the left lateral prefrontal cortex to the rest of the brain.

Washington University's Dr. Michael W. Cole -- a postdoctoral research fellow in cognitive neuroscience -- conducted the research that provides compelling evidence that those neural connections make a unique contribution to the cognitive processing underlying human intelligence.

The discovery was made after the Washington University researchers analyzed functional magnetic resonance brain images captured as study participants rested passively and also when they were engaged in a series of mentally challenging tasks, such as indicating whether a currently displayed image was the same as one displayed three images ago.
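
That task, incidentally, is the classic "n-back" test (here with n = 3): the subject responds whenever the current stimulus matches the one shown n steps earlier. A minimal sketch of the scoring logic:

```python
def n_back_matches(stimuli, n=3):
    """Return the indices at which stimulus i equals stimulus i - n."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

# With the sequence A B C A B D C, the subject should respond at
# index 3 ('A' repeats) and index 4 ('B' repeats).
print(n_back_matches(list("ABCABDC")))  # -> [3, 4]
```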

One possible explanation of the findings is that the lateral prefrontal region is a "flexible hub" that uses its extensive brain-wide connectivity to monitor and influence other brain regions. While other regions of the brain make their own special contribution to cognitive processing, it is the lateral prefrontal cortex that helps co-ordinate these processes and maintain focus on tasks at hand, in much the same way that the conductor of a symphony monitors and tweaks the real-time performance of an orchestra.

Now this discovery, of course, could have some important implications. Imagine, for example, a future where employers insisted that all their prospective employees underwent such a scan as part of the interviewing process so that they could ensure that they always hired folks with lots of gray matter.

That thought might worry you, but not me. You see, my old man was always telling me that I had a big head. Then again, maybe he never meant his remarks to be taken as a compliment.

Interested in reading more about the uses of magnetic resonance imaging in medical applications? Here's a compendium of five top news stories on the subject that Vision Systems Design has published over the past year.

1. MRI maps the development of the brain

Working in collaboration with colleagues in South Korea, scientists at Nottingham University (Nottingham, UK) aim to create a detailed picture of how the Asian brain develops, taking into account the differences and variations which occur from person to person.

2. Ultraviolet camera images the brain

Researchers at Cedars-Sinai Medical Center (Los Angeles, CA, USA) and the Maxine Dunitz Neurosurgical Institute are investigating whether an ultraviolet camera on loan from NASA's Jet Propulsion Laboratory could help surgeons perform brain surgery more effectively.

3. Imaging technique detects brain cancer

University of Oxford (Oxford, UK) researchers have developed a contrast agent that recognizes and sticks to a molecule called VCAM-1 that is present in large amounts on blood vessels associated with cancer that has spread to the brain from other parts of the body.

4. Imaging the brain predicts the pain

Researchers from the Stanford University School of Medicine (Stanford, CA, USA) have developed a computer-based system that can interpret functional magnetic resonance imaging (fMRI) scans of the brain to predict thermal pain.

5. Camera takes a closer look at the workings of the brain

Optical imaging of blood flow or oxygenation changes is useful for monitoring cortical activity in healthy subjects, in individuals with epilepsy, and in those who have suffered a stroke.