Wednesday, May 30, 2012

Your image could save a stream

Vision systems have traditionally been employed in industrial applications to inspect products to ensure that they meet the requirements set down by manufacturers.

But with the advent of the camera-equipped mobile phone, new applications are coming on stream that allow members of the public to take part in so-called "citizen science" projects, in which they capture images to help monitor the state of the environment.

In one such project called Creek Watch, folks across the world can monitor watersheds and report their conditions using an iPhone application developed by IBM Research. Every update provides data that local water authorities can then use to track pollution, manage water resources and plan environmental programs.

The free Creek Watch app is claimed to be easy to use. All individuals have to do is stop by any waterway and, with the phone's GPS enabled, take a photo and submit three crucial observations: the water level, the flow rate, and the amount of trash found.
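To make the workflow above concrete, here is a minimal sketch of the kind of report such an app might assemble before uploading. The field names and category values are assumptions for illustration only; IBM has not published the app's actual data format.

```python
# Hypothetical Creek Watch-style report builder. The category values and
# field names below are illustrative assumptions, not IBM's actual API.

WATER_LEVELS = {"dry", "some water", "full"}
FLOW_RATES = {"still", "slow", "fast"}
TRASH_LEVELS = {"none", "some", "a lot"}

def build_report(lat, lon, water_level, flow_rate, trash, photo_name):
    """Validate the three observations and bundle them with the GPS fix."""
    if water_level not in WATER_LEVELS:
        raise ValueError(f"unknown water level: {water_level!r}")
    if flow_rate not in FLOW_RATES:
        raise ValueError(f"unknown flow rate: {flow_rate!r}")
    if trash not in TRASH_LEVELS:
        raise ValueError(f"unknown trash level: {trash!r}")
    return {
        "latitude": lat,
        "longitude": lon,
        "water_level": water_level,
        "flow_rate": flow_rate,
        "trash": trash,
        "photo": photo_name,
    }

report = build_report(37.33, -121.89, "some water", "slow", "none", "creek.jpg")
```

The point is simply that three categorical observations plus a GPS fix make for a very small payload, which is part of why the app is so easy to use.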

"That’s all it takes to play your part in helping conserve and protect your local water resources," said Christine Robson, an IBM computer scientist who helped develop Creek Watch. "No expertise or training is required. This is an exercise in crowd sourcing, where every individual is encouraged to become a citizen scientist and get engaged with their environment."

A new update to the app makes it easy for users to share their photos and findings on Facebook and Twitter, if they want to. The IBM researchers expect that such postings will encourage more people to use the app and so help them collect more data.

IBM Research aggregates the Creek Watch reports and makes them available online, where water control boards and other interested parties can filter the data and view it as an interactive map or download it as a spreadsheet. The California State Water Control Board is the first entity to partner with IBM and use Creek Watch to monitor the thousands of miles of creeks and streams across its jurisdiction.

With the app in use in 25 countries so far, IBM researchers hope that Creek Watch adoption will continue to grow across the globe. "The iPhone's GPS system automatically ties each Creek Watch submission to a precise location, allowing water experts anywhere in the world to find local data to use for critical water management decisions," said Jeff Pierce, who leads the mobile computing research team at IBM's Almaden facility and helped develop Creek Watch.

I can't help but think that this Creek Watch app is a rather good idea. Let's hope the idea will encourage other folks in the computer business to develop similar apps that will empower the general public to use their cameras for the benefit of mankind.

Friday, May 25, 2012

An infra-red comeback for the beetles

I've always been fascinated by the field of biomimetics. It's always amazed me how researchers involved in that field can rip apart biological systems found in animals and plants and then create new man-made technology that effectively performs the same function.

So you can imagine how interested I was when I read this week that a team of researchers from the University of Bonn (Bonn, Germany) has concluded that the sensors of black fire beetles might be even more sensitive than uncooled infrared sensors designed by man!

Apparently, the critters in question use their sensors to detect forest fires, even from great distances, since their wood-eating larvae can only develop in freshly burned trees. Naturally enough, given how long the species has been at it, you might expect the beetles to perform the function rather well.

Now, with the help of other researchers at the Forschungszentrum caesar (Bonn, Germany) and the Technische Universität Dresden (Dresden, Germany), the researchers at Bonn have figured out how the beetle's infrared sensor actually works, and they have started to work on building their very own biomimetic copy.

The researchers discovered that each beetle is kitted out with tiny cuticular spheres, smaller in diameter than a fine hair, that are filled with water, which absorbs infrared radiation very well. When these spheres heat up, the water expands suddenly, and the resulting change in pressure is immediately detected by sensory cells.

Once they had figured that out, the researchers had to determine just how sensitive the sensors actually were. Naturally, that led them to think that if they could only fit mini transmitters to the beetles, they would be able to determine how far the insects flew to a burnt area, and from that calculate the minimum radiated heat from a fire that the beetles were attracted to. But at a length of about 1 cm, the poor beetles were too small to carry a transmitter over long distances.

So the researchers relied on data from an event that happened in August 1925 when a large oil depot in Coalinga, California went up in flames. Reports from that era mentioned that the huge blaze attracted masses of charcoal beetles. Since the fire was in the forestless Central Valley of California, the researchers deduced that the beetles must have flown in from large forests on the western foothills of the Sierra Nevada about 130 kilometers away.

The results of their calculations from that data indicate that the beetles' infrared sensors may be able to detect infrared radiation better than any man-made uncooled infrared sensor currently available on the market.
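The shape of that calculation is easy to sketch: given the distance the beetles flew and an assumed radiant power for the fire, the inverse-square law gives the irradiance their sensors must have detected. The 10 MW figure below is a placeholder of my own, not the researchers' published estimate.

```python
import math

# Back-of-envelope version of the researchers' estimate: irradiance at a
# given distance from a fire, treating it as an isotropic point source.
# The 10 MW radiant power is a purely illustrative assumption.

def irradiance(radiant_power_w, distance_m):
    """Irradiance (W/m^2) at distance_m from an isotropic source."""
    return radiant_power_w / (4 * math.pi * distance_m ** 2)

# Hypothetical 10 MW fire seen from the 130 km the beetles flew.
E = irradiance(10e6, 130e3)   # on the order of tens of microwatts per m^2
```

However the real numbers shake out, the striking part is that a detectable signal survives a 130-kilometer journey at all.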

Rather cool stuff, I thought. But better yet, if the researchers really can build a biomimetic replica of the beetles' sensors, the infra-red detector based on the beetles' biology might change the way we detect forest fires, or detect leaks in petrochemical plants, forever.

Wednesday, May 23, 2012

Secret chamber exposed on CCTV camera

When my brother's sewer pipe blocked up last year, he called out the helpful chaps from Dyno-Rod who took a closer look with their CCTV equipment.

From the CCTV footage, they were able to determine that the cause of the problem was nothing more than a bunch of roots that had grown into the pipe work from a tree that had been planted close to the house. After that, it was simply a case of hauling the tree out of the backyard, getting the roots out of the pipe and relining it.

Now one might think that analyzing footage from such CCTV cameras might be a little repetitive, not to mention dull. But for some involved in the industry, it can actually be quite exciting!

That’s right. Take the case of another UK-based outfit called Lanes for Drains (Leeds, UK), for example, which earlier this year used its own sewer surveillance technology to reveal the 200-year-old hidden secret of one of the UK's largest man-made reservoirs.

It all started when the company was called in to work on a £5.5m project to repair a dam at the 108-hectare Chasewater reservoir, which is situated near Lichfield in Staffordshire. The dam was built at the same time as the reservoir was created, way back in the halcyon days of 1796, making it one of the oldest reservoir dams in the UK.

More specifically, the folks at Lanes for Drains were asked to carry out a CCTV survey of a 100-meter-long brick-lined drawdown culvert designed to control the release of water from the reservoir. When they did, they found that the culvert, which is 1 meter high and 0.9 meters wide, was 70 per cent blocked with silt and bricks.
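The article's figures give a feel for the scale of the clearance job. Treating the blockage as a uniform 70 per cent of the culvert's cross-section along its full length, a rough upper bound on the debris volume works out as follows:

```python
# Rough estimate of the debris volume implied by the article's figures,
# assuming (for illustration) the blockage was uniform along the culvert.
length_m, height_m, width_m = 100.0, 1.0, 0.9
blocked_fraction = 0.70

debris_volume_m3 = length_m * height_m * width_m * blocked_fraction
# -> 63.0 cubic meters of silt and bricks, give or take
```

That is dozens of tanker loads of muck, which rather explains why a recycling jet-vac tanker was brought in.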

So, taking things in hand, they used a Kaiser-Whale recycling jet vacuum tanker to clear the debris while continuously monitoring progress with HD-quality video footage from a Rovver (Remote Operated Video Vehicle Enhanced Receiver) crawler camera manufactured by Ipek (Hirschegg, Austria).

During the process, the team discovered a large chamber not identified on the plans. The hidden chamber, measuring 1.5m by 2m, was discovered 25 meters into the culvert -- 3.5 meters under the floor of the reservoir!

Lanes for Drains' lead engineer Dave Faris said that it was quite special to be able to work on a structure that had not been seen for over 200 years. And now, we can all take a look at the hidden chamber too, since the company has released an image of it captured from the camera aboard the Rovver.

I know my brother will be interested to hear the news. But if his sewers ever block up again, I'm certain that the Dyno-Rod team he calls out will fail to find anything quite as interesting as the Lanes for Drains chaps did.

Friday, May 18, 2012

How to get more business

The president of a small to medium-sized machine builder had always made a reasonable living from developing custom vision systems to inspect one particular type of widget.

And although the market for such widget inspection systems wasn't all that large, his system, or variants of it, had been purchased and widely used by most of the widget makers in the industry.

Recognizing that the Internet might provide his company with more exposure, the president decided to hire a developer to create a web site that would explain the company's capabilities to potential new customers.

And that's exactly what he did. Prior to developing the web site, however, a member of the development team visited the outfit to find out more about the system. Once there, he was treated to an hour-long dissertation by the company's marketing manager, who explained exactly why there was a need to inspect such widgets but gave only a very brief description of the system the company had developed.

After the meeting, the web developer went back to his office and created a stunning web site for the company. Not only did the web site provide a background of the company and its university origins, it also detailed the inspection problems faced by the widget manufacturers. Unfortunately, however, there was only the briefest description of the system itself, the functions it performed and its performance – all listed in bullet points alongside a rather sorry-looking photograph.

When the web site was launched, the president was confident that it would offer the world an insight into the capabilities of his company and generate a number of new leads, not just from companies involved in widget manufacturing, but also from other outfits faced with similar inspection problems.

Sadly, of course, that didn't happen. Having already won orders from most of the widget makers, the company found that the web site attracted no new customers whatsoever. The president and the marketing manager were disappointed -- not least because they had spent a considerable amount of money developing it. And they were both at a loss to understand why the reaction had been so poor.

Some months later, a journalist from a magazine that covered the field of vision systems design came to call upon the company. Unlike the previous interview, however, the journalist grilled the president to discover exactly how the system had been designed. He went away confident that his description of the hardware chosen for the system and the software that had been written for it by the engineering team would prove a hit with his readers.

And it was. When the article was published, it became immediately apparent to many of the engineering readers how the company could tweak the widget-inspection system to help them inspect their own products too.

The president is now a happy camper. Having received several enquiries from potential new customers, he is looking forward to expanding his business into new markets. More importantly, however, he has at last recognized the importance of publicizing the capabilities of his engineering team rather than just promoting a single product line.

Wednesday, May 16, 2012

Vision system helps students find a place in the library

A team of researchers from the Stevens Institute of Technology (Hoboken NJ, USA) have developed a vision-based system that will enable students to find a seat in the university’s library during the busiest periods of the semester.

“We noticed that we and many of our fellow students spent precious study time looking for seating in the S.C. Williams library,” says team leader and computer engineer Richard Sanchez.

But rather than grumble and head back to their dormitories, Sanchez's team decided to develop a system to solve the problem once and for all. Called Seatfinder, it's an innovative means of detecting which seats are available in the library, using image processing to identify the presence of an individual.

With the assistance of Bruce McNair, Distinguished Service Professor of Electrical & Computer Engineering, and the Stevens IT department, the team deployed a networked IP camera to capture a live feed from the library.

After capturing images of the seating, the IP camera transmits a live video feed over a wired network to a remote computer running an open-source application called iSpy. A motion detection algorithm then triggers a snapshot of the live camera feed any time an individual leaves or arrives at a table in the library.

The team's code then processes the captured image to determine which areas are free or occupied, after which it updates a website with an image that represents occupied and empty chairs around the table. By checking the web site, other students can then find a space to sit and study a lot more quickly.
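The team hasn't published its code, but one common way to implement that last step is to compare each snapshot against a reference image of the empty table and call a seat occupied when its region of the frame differs enough. The seat regions and threshold below are illustrative assumptions, not Seatfinder's actual values.

```python
import numpy as np

# Illustrative sketch of per-seat occupancy detection: difference the
# current frame against a reference image of the empty table and threshold
# the mean change in each seat's region. Not the Seatfinder team's code.

def seat_status(reference, frame, seat_rois, threshold=25.0):
    """Return {seat_name: 'occupied' | 'free'} for each region of interest.

    reference, frame -- 2-D grayscale images as NumPy arrays
    seat_rois        -- {name: (row0, row1, col0, col1)} rectangles
    """
    status = {}
    for name, (r0, r1, c0, c1) in seat_rois.items():
        diff = np.abs(frame[r0:r1, c0:c1].astype(float) -
                      reference[r0:r1, c0:c1].astype(float))
        status[name] = "occupied" if diff.mean() > threshold else "free"
    return status

# Synthetic demo: a person "appears" in seat A's region of the frame.
ref = np.zeros((100, 100), dtype=np.uint8)
cur = ref.copy()
cur[10:40, 10:40] = 200                      # bright blob over seat A
rois = {"A": (10, 40, 10, 40), "B": (60, 90, 60, 90)}
result = seat_status(ref, cur, rois)
```

The resulting dictionary is exactly the sort of per-seat summary that could drive the occupied/empty chair graphic on the team's web site.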

Having successfully proven their system, the Seatfinder team now hopes to add more cameras and monitor more tables to increase the sophistication of the system. Eventually they envision deploying it at other venues where space can become scarce, such as restaurants, movie theatres or parking lots.

But I can see even more opportunities for the system, one of which is on public transportation. If you have ever traveled on any form of rail transportation in any large metropolitan area during the rush hour period, for example, you will know what a problem it is to find anywhere to sit.

While the Seatfinder system obviously can't produce more seating on a full train, if some of our rail companies could deploy such a system in each of their carriages, it would reduce the time that hapless commuters spend walking up and down the carriages in the desperate hope of finding a spare seat -- those commuters lucky enough to be able to access a web site from the train, that is.

Friday, May 11, 2012

A day at the races

Many years ago, whilst working on an electronics journal in the United Kingdom, I was invited to a press conference to witness the unveiling of what was claimed to be the next big thing in electronic systems.

To ensure that as many folks turned up as possible, the organizers of the event decided to hold the conference at one of England's famous race tracks, where they invited the press to remain after the company presentations to enjoy the rest of the day betting on the horses.

As it transpired, the press conference itself turned out to be a crashing bore -- the system had already been launched months earlier in the US, and most of the press knew about it already. Sadly, the view of the racetrack wasn't much better. You see, although the organizers had erected a tent as close to the racecourse as possible, the view from it was somewhat restricted. The result was that the horses could only be seen for mere seconds as they raced by.

Now, I'm pleased to say, a solution to this problem is at hand, thanks to a couple of savvy students from the University of Arizona (Tucson, Arizona, USA) who have come up with a solution based, of course, on a vision system. That’s right. David Matt and Kenleigh Hobby’s new 'jockey cam' is a smart camera-based helmet that can stream real-time video from a jockey's head, putting viewers right in the saddle rather than stuck by the side of the track.

The two entrepreneurs have even launched a new company called EquiSight to market the system, and have captured the attention of ESPN, the Ireland Tourism Board, racetracks around the world and venture capital investors.

Since starting the company, it's been a whirlwind ride for the pair. In December 2011 they presented their system to more than 600 racing and gaming executives at the 38th Annual Symposium on Racing and Gaming in Tucson. In February 2012, they filmed 30 jockey-cam videos at prestigious race tracks and training centers on the East Coast. And in March 2012, Wasabi Ventures (San Mateo, CA, USA) selected EquiSight to receive venture capital support.

EquiSight, which now holds three provisional patents on its technology, also recently inked an agreement with an engineering design firm to explore the potential application of helmet-cam technology for the military and law enforcement.

As a member of the press, of course, I'm now looking forward to the day when I'm invited to take a look at the system at a press conference here in the good old US of A. Needless to say, if it's held at a race track, it's bound to be more enjoyable than the last press conference I attended at such an event all those years ago.

Thursday, May 10, 2012

Playing in a virtual sandbox

When I was a little lad, there was nothing I enjoyed more than playing in the sandbox in the back yard of my parents' house during the summer. That old sandbox became a place where I could construct my own imaginary worlds filled with castles, mountains, rivers and oceans. It was, as I recall, jolly good fun.

But in this age when computers dominate almost every aspect of our lives, it should come as no surprise that even the humble sandbox has now been transformed into a digital experience.

That’s right. As part of an NSF-funded project on freshwater lake and watershed science, the good folks at UC Davis (Davis, CA, USA) have created a sandbox that allows users to create topographic models by shaping real sand which is augmented in real time by an elevation color map, topographic contour lines, and simulated water!

While it sounds like great fun for kids of all ages, the sandbox hardware built by project specialist Peter Gold of the UC Davis Department of Geology has actually been developed to teach geographic, geologic, and hydrologic concepts such as how to read a topography map, the meaning of contour lines, watersheds, catchment areas, and levees.

To do just that, the system makes use of a Microsoft Kinect camera, which continuously captures images of the sandbox as the user interacts with the sand. The depth images, captured at 30 frames/s, are fed into a statistical evaluation filter that removes moving objects such as hands or tools, reduces the noise inherent in the Kinect's depth data stream, and fills in missing data.
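The gist of such a filter can be sketched in a few lines: keep a per-pixel running estimate of the depth, reject samples that jump too far from it (a hand passing over the sand), and ignore missing readings, which the Kinect reports as zero. This is a simplification of the project's actual statistical evaluation filter, with made-up parameter values.

```python
import numpy as np

# Simplified per-pixel depth filter in the spirit of the sandbox's
# statistical evaluation filter. The smoothing factor and jump threshold
# are illustrative assumptions.

class DepthFilter:
    def __init__(self, shape, alpha=0.1, max_jump=50.0):
        self.estimate = np.zeros(shape, dtype=float)
        self.initialized = False
        self.alpha = alpha          # smoothing factor for stable pixels
        self.max_jump = max_jump    # mm; larger jumps look like hands/tools

    def update(self, frame):
        frame = frame.astype(float)
        valid = frame > 0                      # zero means "no reading"
        if not self.initialized:
            self.estimate = np.where(valid, frame, 0.0)
            self.initialized = True
            return self.estimate
        # Accept only valid pixels that stayed near the running estimate;
        # everything else (hands, tools, dropouts) leaves it untouched.
        stable = valid & (np.abs(frame - self.estimate) < self.max_jump)
        self.estimate[stable] += self.alpha * (frame[stable] -
                                               self.estimate[stable])
        return self.estimate

filt = DepthFilter((2, 2))
filt.update(np.full((2, 2), 800.0))            # first frame seeds the filter
# Next frame: small sand change, a "hand" at 300 mm, and a dropout (0).
est = filt.update(np.array([[810.0, 300.0], [0.0, 805.0]]))
```

Notice how the hand and the dropout leave the surface estimate untouched while genuine, small changes in the sand are smoothly tracked.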

After further software processing, the resulting topographic surface is rendered by a projector suspended above the sandbox, so that the projected topography exactly matches the topography of the real sand. The software uses a combination of several OpenGL shaders to color the surface by elevation, using customizable color maps, and to add the real-time topographic contour lines.
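The per-pixel logic those shaders implement can be illustrated in NumPy: interpolate each height into a color map, then darken pixels where the height crosses a multiple of the contour interval between neighbors. The two-stop color map and 10-unit interval below are arbitrary choices of mine, not the project's.

```python
import numpy as np

# NumPy illustration of the elevation-shader logic: color by height,
# then overlay contour lines where the banded height level changes.
# Color stops and contour interval are illustrative assumptions.

def colorize(height, low, high, interval=10.0):
    """Map a 2-D height field to RGB with contour lines overlaid."""
    t = np.clip((height - low) / (high - low), 0.0, 1.0)
    # Simple two-stop color map: blue (low) fading to green (high).
    rgb = np.stack([0.1 * np.ones_like(t), t, 1.0 - t], axis=-1)
    # A pixel lies on a contour when its banded level differs from a
    # neighbor's, i.e. a contour level passes between the two pixels.
    band = np.floor(height / interval)
    on_contour = np.zeros_like(height, dtype=bool)
    on_contour[:, 1:] |= band[:, 1:] != band[:, :-1]
    on_contour[1:, :] |= band[1:, :] != band[:-1, :]
    rgb[on_contour] *= 0.3                  # darken contour pixels
    return rgb

# Demo on a simple ramp: a 64x64 surface rising from 0 to 50 units.
demo = colorize(np.tile(np.linspace(0.0, 50.0, 64), (64, 1)), 0.0, 50.0)
```

On the real system the same arithmetic runs per fragment on the GPU, which is what lets the projection keep up with hands reshaping the sand.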

At the same time, a water flow simulation is run in the background using another set of GLSL shaders. The simulation is run such that the water flows exactly at real speed assuming a 1:100 scale factor, unless turbulence in the flow forces too many integration steps for the driving graphics card to handle.

The researchers say that the software they developed to create the virtual sandbox is based on the Vrui VR development toolkit and the Kinect 3D video processing framework. For those wishing to build their own sandbox, the software will soon be available for download under the GNU General Public License.

That's surely good news for all those of us who would love to take a trip back to our childhood by sampling the new virtual delights of the sandbox in our own living rooms.

More information on the augmented reality sandbox can be found here.

Friday, May 4, 2012

A rotting vision of the future

There's no doubt that autonomous robots fitted out with vision systems can perform some pretty useful tasks. For the military, such robots can help prevent injuries on the battlefield by providing soldiers with remote insight into the nefarious misdoings of the enemy. On civvy street, they can be used for the equally important purposes of improving the environment or keeping the borders of countries safe.

But these conventional robots are predominantly made of rigid resilient materials, many of which are non-biodegradable and have a negative impact on the natural ecology.

That means that any robot deployed in the environment must be continually tracked and, once it has reached the end of its useable life, must be recovered, dismantled, and made safe. But there is also the risk that the robot will be irrecoverable with consequent damage to the eco-system.

Now one might think that there's not a lot that can be done about this. After all, the computers, power sources and imagers used in such robotic devices are all man-made, and many of them are composed of some pretty toxic substances. And there doesn’t appear to be any alternative to using them.

But apparently, the academic folks at the University of Bristol (Bristol, UK) think differently. They believe that it might be possible to build robots that decompose once they have reached the end of their mission. While it all might sound a bit far-fetched, the idea has won Dr. Jonathan Rossiter, Senior Lecturer in the University of Bristol’s Department of Engineering Mathematics, a two-year grant of over £200,000 from the Leverhulme Trust (London, UK) to work on developing robots that rot.

That's right. Dr. Rossiter, together with Dr. Ioannis Ieropoulos, Senior Research Fellow at the Department of Engineering, Design and Mathematics at the University of the West of England (UWE, Bristol, UK), aims to show that autonomous soft robotic artificial organisms can exhibit an important characteristic of biological organisms -- graceful decomposition after death.

Since there would no longer be any need to track and then recover such robots, deploying large numbers of biodegradable robots in the environment would become inherently safe. Hundreds or thousands of robots could therefore be deployed, safe in the knowledge that there would be no environmental impact.

Now I'm sure that there are some among you who believe that Dr. Rossiter's plans are all a bit pie in the sky. But I'm not one of them. This year, for example, we have already seen micro lens arrays produced by mineral precipitation by researchers at the Max Planck Institute of Colloids and Interfaces (Potsdam, Germany).

Nevertheless, I'm going to be very interested to see if and how those UK researchers can build an entire robot that is totally biodegradable. I guess we'll just have to wait a couple more years to find out.

For more information on the work of the researchers at the Bristol Robotics Laboratory -- including a robot powered on a diet of flies -- click here.

Wednesday, May 2, 2012

Optical elements fogged up no more

Developing a vision-based system for the transportation sector is a far cry from building a system that performs inspection tasks in a factory setting. In transportation systems, many environmental issues, such as extremes of temperature and humidity, must be taken into account. What's more, engineers must also contend with the dirt and dust found in the natural environment, which could affect the performance of their systems.

Now, thanks to researchers at the Massachusetts Institute of Technology (MIT, Cambridge, MA, USA), those environmental issues commonly addressed by systems' developers in the world of transportation may finally become a thing of the past.

That's right. You see, what the MIT researchers have done is to develop a new type of glass with a nano-textured array of conical features on its surface that not only resists fogging and glare but is self-cleaning too.

The surface pattern on the glass itself consists of an array of nanoscale cones that are five times as tall as their base width of 200 nanometers. It is created using coating and etching techniques adapted from the semiconductor industry. Fabrication begins by coating a glass surface with several thin layers, including a photoresist layer, which is then illuminated with a grid pattern and etched away; successive etchings produce the conical shapes.
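Working through that geometry gives a sense of just how fine the texture is. The packing assumption below (cones sitting edge to edge) is mine, for illustration; the article only gives the base width and aspect ratio.

```python
# The article's figures: cones five times as tall as their 200 nm base
# width. Assuming edge-to-edge packing (an illustrative assumption), a
# rough count of cones per square millimeter follows directly.
base_nm = 200.0
height_nm = 5.0 * base_nm            # -> 1000 nm, i.e. cones a micron tall
cones_per_side = 1e6 / base_nm       # 1 mm = 1,000,000 nm
cones_per_mm2 = cones_per_side ** 2  # on the order of 25 million per mm^2
```

At roughly 25 million cones per square millimeter, it is easy to see why semiconductor-style lithography and etching, rather than any mechanical process, is needed to make the pattern.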

Ultimately, the researchers hope that the surface pattern can be made using an inexpensive manufacturing process that could be applied to a plethora of optical devices, the screens of smart phones and televisions, solar panels, car windshields and even windows in buildings.

According to mechanical engineering graduate student Kyoo-Chul Park, a photovoltaic panel can lose as much as 40 percent of its efficiency within six months as dust and dirt accumulate on its surface. But a solar panel protected by the new self-cleaning glass would be more efficient because more light would be transmitted through its surface. What's more, while conventional glass might reflect more than 50 percent of the light, the anti-reflection surface would reduce this reflection to a negligible level.

The researchers say they drew their inspiration from nature, where textured surfaces ranging from lotus leaves to desert-beetle carapaces and moth eyes have developed in ways that often fulfill multiple purposes at once. Although the arrays of pointed nano-cones on the surface appear fragile, the researchers say they should be resistant to a wide range of forces, ranging from impact by raindrops in a strong downpour or wind-driven pollen and grit to direct poking with a finger. Further testing will be needed to demonstrate how well the nano-textured surfaces hold up over time in practical applications.

In a nod to the vision industry, the researchers say that -- aside from solar panels -- the new glass could be used in optical devices such as microscopes and cameras that are used in humid environments, where both the anti-reflective and anti-fogging capabilities could be useful. In touch-screen devices, the glass would not only eliminate reflections, but would also resist contamination by sweat.

The news will surely pique the interest of those engineers currently using more elaborate ways to contend with such issues in the transportation sector. For them, the commercialization of products based on such technology cannot come soon enough.