Thursday, December 29, 2011

Image processing is all in the mind?

If you are anything like me, you probably gave out one or two video games as presents to some of your younger relatives over the holiday season. If you did, however, you ought to be aware of the danger involved, and the potential repercussions of your actions.

Apparently, according to research carried out by academics in the UK and Sweden, some video game players are becoming so immersed in their virtual gaming environments that -- when they stop playing -- they transfer some of their virtual experiences to the real world.

That's right. Researchers led by Angelica Ortiz de Gortari and Professor Mark Griffiths from Nottingham Trent University's International Gaming Research Unit, and Professor Karin Aronsson from Stockholm University, have revealed that some gamers experience what they call "Game Transfer Phenomena" (GTP), which results in them doing things in the real world as if they were still in the game!

Extreme examples of GTP have included gamers continuing to think in the same way as when they were gaming -- reaching for a search button when looking for someone in a crowd, for example, or seeing energy boxes appear above people's heads.

Aside from the game players, though, I wonder if this research might also have some implications for software developers working in the vision systems business, many of whom also work long hours staring at computer screens, often taking their work home with them.

How many of these individuals, I wonder, also imagine that they are performing image-processing tasks when going about their daily routine? Have you, for example, ever believed that you were performing a hyperspectral analysis when considering whether or not to purchase apples in the supermarket, optical character recognition to check the sell-by date on the fruit, or even a histogram equalization on the face of the attractive young lady at the checkout line?

While Professor Mark Griffiths, director of the International Gaming Research Unit at Nottingham Trent University, has found that intensive gaming may lead to negative psychological, emotional, or behavioral consequences, the same might hold true for those of us who spend too much time at work developing image-processing software.

Thank goodness, then, that we will soon be able to look forward to a few more days' respite from our toils to celebrate the New Year.

Happy New Year.

Thursday, December 22, 2011

Christmas time is here again

It's that time of year again. That time when many of us will be erecting a fir in the corner, decking the halls with boughs of holly, and sitting back to enjoy a glass of mulled wine as we roast chestnuts over an open fire.

That's right. It's Christmas, the festive season in which we put our work to one side for a while to enjoy a few well-deserved days off to spend with our friends and family.



But before the festivities can begin, there are numerous chores that must be performed. And one of these, of course, is to send Christmas greetings to all our friends and colleagues.

Traditionally, such messages of comfort and joy have been sent via the postal service. After purchasing a box of Christmas cards, many of us spend hours writing individual messages inside them, after which the cards are duly inserted into envelopes, addressed, and taken down to the Post Office where they are mailed.

In these days of automation, however, some of us no longer leave the comfort of our armchairs to perform the task, preferring to use e-mail greeting cards instead. While such e-mail messages may never have quite the same personal appeal as a real piece of card with a Christmas scene printed upon it, they certainly are a cost-effective alternative to sending out the real thing.

With such electronic wizardry automating our traditional time-intensive Christmas labors, it's interesting to consider by what means we might be delivering our Christmas greetings to our friends and colleagues in the future.

Well, I think the folks at Edmund Optics might have found the answer. To distribute a Christmas message to their audience in the vision systems design industry, the innovative Edmund Optics team has produced a rather amusing video that they have uploaded onto YouTube where it can be viewed by all and sundry.

But this isn't just any video greeting. Oh no. The entertaining video features a number of Edmund Optics' employees playing a familiar Christmas tune on the company's own range of telecentric lenses. That's right. Watch carefully and you will see the so-called "Telecentric Bell Choir" [click for YouTube video] ringing the lenses to play that Christmas favorite "Carol of the Bells."

From my perspective, this form of sending holiday greetings to friends and family is clearly the wave of the future. What's more -- for Edmund Optics at least -- it might be a way to generate a whole new market for its acoustically-enabled telecentric product line.

Happy holidays!

Wednesday, December 21, 2011

Recycling light bulbs with vision

The United Nations is urging countries across the globe to phase out old-style incandescent light bulbs and switch to low-energy compact fluorescent light (CFL) bulbs to save billions of dollars in energy costs as well as help combat climate change.

One issue with such bulbs, however, is that they contain minute traces of mercury, and hence they should be recycled to prevent the release of mercury into the environment rather than simply being tossed into a dumpster.

This, of course, has created an enormous opportunity to automate the collection of old CFL bulbs -- an opportunity that one machine maker in the UK has clearly identified.

That's right. Partly thanks to the stealthy deployment of a machine vision system, London, UK-based Revend Recycling has now developed a machine to collect light bulbs in exchange for discount vouchers or other consumer rewards.



When using the so-called "Light Bulb Recycling Reverse Vending" machine, an individual is guided through the recycling process by a touch-screen menu. After the unwanted bulbs are placed into the machine, they are then identified by the vision system, after which the machine softly drops the bulbs into a storage container.

The machine then automatically dispenses a reward incentive voucher, which can be chosen from a large selection of different rewards on the touch-screen.

To enable recovery and recycling statistics to be collated, the recycling data captured from every light bulb received are transmitted to a secure central database. An embedded computer system in the machine also monitors the fill level of the light bulb storage container and automatically sends a text or email when it nears capacity, so that the container can be emptied.
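
For the technically curious, the housekeeping involved is simple enough to sketch. The following few lines are entirely my own invention -- not Revend's software -- but they illustrate the sort of bookkeeping such a machine performs: log each accepted bulb to the central database and raise an alert as the container nears capacity.

```python
# Hypothetical sketch of the recycling machine's bookkeeping -- not Revend's actual software.
from datetime import datetime, timezone

CONTAINER_CAPACITY = 500          # assumed number of bulbs the container can hold
ALERT_THRESHOLD = 0.9             # notify the operator at 90% full

accepted_bulbs = []               # stands in for the secure central database

def send_alert(message: str) -> None:
    # In the real machine this would be a text message or e-mail; here we just print.
    print("ALERT:", message)

def record_bulb(bulb_id: str) -> None:
    """Log one accepted bulb and check whether the container needs emptying."""
    accepted_bulbs.append({"id": bulb_id, "time": datetime.now(timezone.utc).isoformat()})
    fill_level = len(accepted_bulbs) / CONTAINER_CAPACITY
    if fill_level >= ALERT_THRESHOLD:
        send_alert(f"Container {fill_level:.0%} full -- please schedule collection")

record_bulb("CFL-0001")           # a single deposit; no alert until the container nears capacity
```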

So far, the vision based recycling machine has proved to be a bit of a hit. The Scandinavian modern-style furniture and accessories store Ikea, for example, recently inked an agreement with Revend Recycling, and soon the store's customers in the UK, Germany, and Denmark will have the option to recycle used light bulbs with the machines. As an added feature, the recycling machines can also be purchased with an add-on system for collecting domestic batteries.

Friday, December 16, 2011

New standard aims to accelerate image processing

It goes without saying that computer vision has become an essential ingredient of many modern systems, where it has been used for numerous purposes including gesture tracking, smart video surveillance, automatic driver assistance, visual inspection, and robotics.

Many modern consumer computer-based devices -- from smart phones to desktop computers -- are capable of running vision applications, but to do so, they often require hardware-accelerated vision algorithms to enable them to work in real time.

Consequently, many hardware vendors have developed accelerated computer vision libraries for their products: CoreImage by Apple, IPP by Intel, NPP by Nvidia, IMGLIB and VLIB by TI, and the recently announced FastCV by Qualcomm.

As each of these companies develops its own API, however, the market fragments, creating a need for an open standard that will simplify the development of efficient cross-platform computer vision applications.

Now, the Khronos Group's (Beaverton, OR, USA) vision working group aims to do just that, by developing an open, royalty-free, cross-platform API standard that will be able to accelerate high-level libraries, such as the popular OpenCV open-source vision library, or be used by applications directly.
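
To give a feel for the kind of workload such a standard would accelerate, here is a small OpenCV snippet -- my own illustration, and nothing to do with any Khronos specification -- showing a typical pipeline whose filtering and feature-extraction stages are exactly the sort of operations that today's proprietary vendor libraries speed up.

```python
# Illustrative OpenCV pipeline (not Khronos code): the blur, edge-detection, and contour
# stages below are typical of the kernels a cross-platform acceleration API would target.
import cv2
import numpy as np

# Synthetic test image so the example runs without an input file.
image = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(image, (320, 240), 100, 255, -1)

blurred = cv2.GaussianBlur(image, (5, 5), sigmaX=1.5)   # noise suppression
edges = cv2.Canny(blurred, 50, 150)                      # edge detection
# OpenCV 4.x return signature assumed here.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

print(f"Found {len(contours)} contour(s)")
```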

The folks at the Khronos Group say that any interested company is welcome to join the group to make contributions, influence the direction of the specification, and gain early access to draft specifications before a first public release within 12 months.

The vision working group will commence work during January 2012. More details on joining Khronos can be found at http://www.khronos.org/members/ or by e-mailing info@khronos.org.

Wednesday, December 14, 2011

Embedded vision is just a game

In the late 1970s, game maker Atari launched what was to become one of the most popular video games of the era -- Asteroids.

Those of our readers old enough to remember might recall how thrilling it was back then to navigate a spaceship through an asteroid field which was periodically traversed by flying saucers, shooting and destroying both while being careful not to collide with either.

Today, of course, with the advent of new home consoles such as the Sony PlayStation, Microsoft Xbox, and Nintendo Wii -- and the availability of a plethora of more graphically pleasing and complex games -- one might be forgiven for thinking that games like Asteroids are ancient history.

Well, apparently not. Because thanks in part to some rather innovative Swedish image-processing technology, it looks as if old games might be about to make a comeback.

That’s right. This month, eye tracking and control systems developer Tobii Technology (Danderyd, Sweden) took the wraps off “EyeAsteroids,” a game it claimed was the world’s first arcade game totally run by eye control.




In the company's EyeAsteroids game, players have the chance to save the world (yet again) from an impending asteroid collision. As a slew of asteroids move closer to Earth, the gamer looks at them in order to fire a laser that destroys the rocks and saves the world from destruction.
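
The underlying game logic is presumably no more complicated than checking whether the player's current gaze point falls on an asteroid and firing if it does. Here's a toy sketch of that hit test -- my own guess at the mechanics, certainly not Tobii's code:

```python
# Toy gaze-to-target hit test -- a guess at the game mechanics, not Tobii's implementation.
import math

def gaze_hit(gaze_xy, asteroids, radius=40.0):
    """Return the first asteroid whose center lies within `radius` pixels of the gaze point."""
    gx, gy = gaze_xy
    for asteroid in asteroids:
        if math.hypot(asteroid["x"] - gx, asteroid["y"] - gy) <= radius:
            return asteroid
    return None

asteroids = [{"id": 1, "x": 300, "y": 200}, {"id": 2, "x": 640, "y": 410}]
target = gaze_hit((310, 195), asteroids)
if target:
    print(f"Firing laser at asteroid {target['id']}")
```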

Henrik Eskilsson, the chief executive officer of Tobii Technology, believes the addition of eye control to computer games is the most significant development in the gaming industry since the introduction of motion control systems such as the Nintendo Wii. And if he’s right, that’s a big market opportunity for all sorts of folks that are associated with the vision systems business.

But perhaps more importantly, the game might make interested parties take a look at another of the company’s product offerings: an image recognition system that can find and track the eyes of drivers in order to inform an automotive safety system of the driver’s state, regardless of changes in environmental conditions.

Because while saving Earth from asteroids using the eyes might be good fun and games, saving lives on the highway through tracking the eyes of motorists is a much more distinguished achievement, and one that -- in the long run -- might also prove to be a more lucrative business opportunity.

Nevertheless, those folks more captivated by the former application of the technology will be only too pleased to know that the EyeAsteroids game is available for purchase by companies and individuals. Tobii Technology plans a limited production run of 50 units that will be available for $15,000 each.

Friday, December 9, 2011

It's a bug's life (revisited)

Remotely operated unmanned aerial vehicles (UAVs) equipped with wireless video and still cameras can be used in a variety of applications, from assisting the military and law enforcement agencies in surveillance duties, to inspecting large remote structures such as pipelines and electric transmission lines.

Typically, however, such vehicles are quite sizable, bulky beasts due to the fact that they must carry a power source as well as large cameras. And that can limit the sorts of applications that they can effectively handle.

Now however, it would appear that researchers at the University of Michigan College of Engineering have come up with an interesting idea that might one day see vehicles as small as insects carrying out such duties in confined spaces.

And that’s because the vehicles that they are proposing to "build" are, in fact, real insects that could be fitted out with the necessary technology to turn them into mobile reconnaissance devices.




The work is at an early stage of development at the moment of course. To date, professor Khalil Najafi, the chair of electrical and computer engineering, and doctoral student Erkan Aktakka are figuring out ways to "harvest" energy from either the body heat or movement of the insects as a means to power the cameras, microphones, and other sensors and communications equipment that they might carry.

As interesting as it all sounds, there are obviously bigger engineering challenges ahead than just conquering the energy harvesting issue. One obvious problem is how the researchers will eventually control the insects once they have been fitted out with their energy harvesting devices and appropriate vision systems.

Then again, they may not need to. If our armed forces dropped a plague of such insects in Biblical proportions upon a rogue state for clandestine monitoring purposes, the chances that at least one of them would reveal some useful information would be pretty high.

The religious and political consequences of letting loose high-tech pestilent biotechnology on such countries, however, might be so profound that the little fliers never get off the ground.

Editor's note: The research work at the university was funded by the Hybrid Insect Micro Electromechanical Systems program of the Defense Advanced Research Projects Agency under grant No. N66001-07-1-2006.

Wednesday, December 7, 2011

I can see clearly now

While I've always been short-sighted, until not long ago it was always pretty easy for me to read books or magazines while wearing the same set of glasses that helped me see at great distances.

But over the past couple of years, it became apparent that I not only needed glasses to correct for myopia but also to assist with looking at things closer to hand.

To solve my dual myopic-hyperopic headache, I turned to my local optician, who suggested that a pair of bifocals or varifocal lenses might do the trick. And they did. Thanks to her recommendation, I now sport a rather expensive pair of glasses with varifocals that enable me to focus my eyes on objects both far and near.

As great as these varifocals are, however, I accept that they aren't for everyone. In fact, some people dislike them as they find it difficult to get used to which areas of the lens they have to look through!

One optics company -- Roanoke, Virginia-based PixelOptics -- has come up with a unique solution to the problem: an electronic set of glasses called emPower that has a layer of liquid crystals in each lens that can instantly create a near-focus zone, either when the user touches a small panel on the side of the frames or in response to up and down movements of the head.

Under development for 12 years, the new system, which is protected by nearly 300 patents and patent applications pending around the world, looks to be yet another interesting option for those folks with optical issues like mine.

I'd like to think that there might be some use for this technology in the industrial marketplace, too, but I haven't quite figured out where that might be yet.

I can't, for example, envisage any system integrator actually manually swiping cameras fitted with such lenses to change their focal length while they might be inspecting parts at high speed in an industrial setting. Nor could I imagine that many engineers would build a system to move such a camera up and down to do the same -- an autofocus system would surely be a lot more effective!

Nevertheless, I'm keeping an open mind about the whole affair, because the imaging business is replete with individuals that can take ideas from one marketplace and put them to use in others.

Monday, December 5, 2011

In your eye

While industrial vision systems might seem pretty sophisticated beasts, none has really come close to matching the astonishing characteristics of the human eye.

Despite that fact, even the human eye is often less than perfect, as those suffering from short or long-sightedness will testify. Those folks inevitably end up seeking to correct such problems either by wearing spectacles or contact lenses.

Not content with developing a contact lens to correct vision anomalies, however, a team of engineers at the University of Washington and Aalto University, Finland, has now developed a prototype contact lens that takes the concept of streaming real-time information into the eye a step closer to reality.

The contact lens itself has a built-in antenna to harvest power sent out by an external source, as well as an IC to store the energy and transfer it to a single blue LED that shines light into the eye.

One major problem the researchers had to overcome was the fact that the human eye -- with its minimum focal distance of several centimeters -- cannot resolve objects on a contact lens. Any information projected on to the lens would probably appear blurry. To resolve the issue, they incorporated a set of Fresnel lenses into the device to focus light from the LED onto the retina.

After demonstrating the operation and safety of the contact lens on a rabbit, they found that significant improvements are still necessary before fully functional, remotely powered displays become a reality. While the device, for example, could be wirelessly powered in free space from approximately 1 m away, this range was reduced to about 2 cm when the lens was placed on the rabbit's eye.

Another issue facing the researchers is to create a future version of the contact lens with a display that can produce more than one color at a higher resolution. While the existing prototype lens only has one single controllable pixel, they believe that in the future such devices might incorporate hundreds of pixels that would allow users to read e-mails or text messages.

I don’t know about you, but I can’t wait for the time in which I might be able to have such e-mails, text messages, or, heaven forbid, Twitter feeds, projected directly into my eye, especially when I am on vacation. No, I wouldn’t even put my pet rabbit through such an ordeal.



Shown: Conceptual model of a multipixel contact lens display. 1: LED. 2: Power harvesting circuitry. 3: Antenna. 4: Interconnects. 5: Transparent polymer. 6: Virtual image.

Wednesday, November 30, 2011

Complementary technologies compete

Over the past few years, advances in imaging technology have led to the development of some astonishing products in the medical field. Perhaps none has proved more useful at diagnosing brain activity than functional magnetic resonance imaging, or fMRI.

But the fMRI technology does have its drawbacks. While it has a good spatial resolution of a few millimeters, it suffers from a poor temporal resolution of a few seconds.

In contrast, electroencephalography (EEG) -- a complementary technique that records the electrical signals from the coordinated activity of large numbers of nerve cells through electrodes attached to the scalp -- has the opposite problem.

While it has the advantage of being able to detect rapid changes in neural activity with millisecond temporal resolution, it suffers from a poor ability to pinpoint the location of brain activity. In other words, it has poor spatial resolution.

Hence the usefulness of EEG is limited, not just because its spatial resolution is comparatively poor, but also because it can be insensitive, since many signals from the brain are mixed together. It does, however, have the advantage of being portable and comparatively cheap, and it is therefore appropriate for a clinical setting, unlike an MRI scanner, which is large and comparatively expensive.

Fortunately, in research labs at Cardiff University Brain Research Imaging Centre (CUBRIC), it is now possible to perform EEG and fMRI simultaneously, and this fact may lead to the birth of a new diagnostic system thanks to the marriage of both the technologies.

That's right. At Cardiff University, a team led by professor Richard Wise proposes to improve the spatial resolution of EEG by acquiring EEG and fMRI measurements simultaneously from healthy volunteers, discovering correlations between the EEG and fMRI data, and producing a statistical model from those correlations.

Subtle features of the EEG signal, which are not normally easily identified but which are associated with the spatial location of the source of neural activity, will be highlighted by their association with the fMRI data, which is good at pinpointing locations in space.

Once the relationship between the EEG and fMRI data has been established in mathematical terms, EEG data alone will be used to simulate fMRI scans. These simulated fMRI scans might then one day be used by clinicians as a new means to diagnose brain activity -- minimizing the requirement for an fMRI scan to be carried out on a patient.
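
In machine-learning terms, the proposal amounts to fitting a regression from EEG features to fMRI voxel activity during the simultaneous recordings, and then applying that model to EEG recorded on its own. The following bare-bones sketch, using a simple ridge regression on made-up data, is my own illustration of the idea rather than the Cardiff team's actual model:

```python
# Bare-bones sketch of the EEG-to-fMRI mapping idea using ridge regression.
# The data are random placeholders; the Cardiff model will certainly be more sophisticated.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_eeg_features, n_voxels = 200, 64, 500

eeg_train = rng.standard_normal((n_samples, n_eeg_features))   # simultaneous EEG features
fmri_train = rng.standard_normal((n_samples, n_voxels))        # simultaneous fMRI voxel activity

model = Ridge(alpha=1.0)
model.fit(eeg_train, fmri_train)          # learn the EEG -> fMRI relationship

eeg_only = rng.standard_normal((10, n_eeg_features))           # new EEG recorded without the scanner
simulated_fmri = model.predict(eeg_only)  # "simulated fMRI scans" driven by EEG alone
print(simulated_fmri.shape)               # (10, 500)
```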

It's an interesting idea, for sure, and one that holds the possibility of seeing an advanced medical imaging technology partly doing itself out of a job!

Wednesday, November 23, 2011

Optical 3-D dental scanner wins VISION 2011 prize

Anyone who has been to the dentist can testify to the fact that undergoing root canal and crown therapy or being measured up for dentures isn't the most pleasant of experiences.

So I was particularly pleased to see that a dental scanner that promises to take the misery out of such a process has won this year's EUR5000 top prize at the VISION 2011 trade fair in Stuttgart.





Today, creating a model of the mouth is a fairly primitive procedure. Teeth are cast using an impression compound that is placed in the mouth of a patient and left to set. A resulting plaster model of the teeth is then prepared from the impression, after which the model is digitized using a stationary scanner. In a final step, dentures can be produced from the model with the aid of a CAD/CAM system.

But all of that is set to change thanks to the new system developed by the prize-winning engineers at the Austrian Institute of Technology (AIT), which will obviate the need for dentists to make dental impressions of the mouth, making the entire process less unpleasant and time-consuming.

The AIT system itself is based on a small 3-D scanner that is placed inside the mouth. The scanner illuminates the mouth with light after which two cameras capture images in real time. A data file -- which previously had to be created in the numerous stages described earlier -- is then created and transmitted to a PC over a USB port where the 3-D model can be visualized (see video).


According to the AIT researchers, a complete jaw arch can be measured in 3 to 5 minutes, and the accuracy of the completed model is to within 20 microns.
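
The stereo principle at the heart of such a scanner is classical triangulation: the same tooth feature is located in both camera images, and its depth follows from the disparity between the two views. Here's a rough sketch of that relationship using generic rectified-stereo geometry -- not, I should stress, AIT's patented method:

```python
# Generic rectified-stereo depth calculation -- textbook triangulation, not AIT's patented method.
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Depth (mm) of a point seen in both cameras of a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("Point must be visible in both views with positive disparity")
    return focal_px * baseline_mm / disparity_px

# Made-up numbers purely for illustration: 800 px focal length, 10 mm camera baseline.
print(depth_from_disparity(focal_px=800, baseline_mm=10, disparity_px=160))  # -> 50.0 mm
```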

The stereo method for measuring the location of the teeth and the design of the scanner have been patented jointly by Klagenfurt am Wörthersee-based startup a.tron3d and AIT. But those outside the dental industry can license the stereo software on an individual basis -- as PC software, as a program library for Windows and Linux, or as firmware for embedded devices such as smart cameras.

For its part, a.tron3d -- which holds the exclusive rights for the dental industry -- plans to release the scanner, called the Bluescan-I, by March 2012.

Sadly, that'll not be of too much use to me, since I have already had much dental treatment using the older, more primitive measurement method. But the good news is that it will certainly help new patients, who will no longer have to experience almost choking when their mouths are full of that rather horrid-tasting impression compound.

Friday, November 18, 2011

Smart cards and 3-D imaging

Traveling to Europe can be an exhilarating experience. The chance to make contact with the Old World and its customs can be both delightful and enchanting. But it can also be frustrating, especially for visitors from the United States.

My visit to VISION 2011 in Stuttgart was no exception. Stopping off to catch up with my brother in the UK after the show, I discovered that many of the country's petrol (gas) stations were unable to accept my credit cards at the pump because the cards had not been enabled with a so-called "chip and PIN."

That's right. In the UK, at least, it's common for credit and debit cards to come equipped with an embedded microprocessor which is interrogated by any number of automated terminals to provide goods and services once the user has entered a Personal Identification Number (PIN) that is uniquely associated with the card.

As frustrated as I was by the inability of the gas pumps to accept my chip-less card, my brother Dave saw the beasts as just a small step toward a completely automated future -- one in which vision systems could play an important role.

You see, having spent the past three days trawling around the VISION 2011 show, he had come across many companies that were developing 3-D vision systems. And while some of these were to be used in rather specific bin-picking applications or in capturing images of traffic on German highways, others could be used to capture images of the human body.

Capturing such images, Dave said, could create an enormous market far bigger than the field of machine vision -- especially if such 3-D images of the body could then be made small enough that they could be downloaded onto the memory of a credit-card-sized device.

Imagine, he suggested, if a complete image of an individual's body were to be encapsulated in such a way. Gone would be the need to wander around a store to search for an item of clothing that fits. Upon entering the store, a computer system would simply interrogate a user's card to identify an individual by his size and highlight where appropriate clothes could be found.

Medical professionals could benefit too. Upon entering a doctor's surgery, the current image of an individual's body could be immediately compared to a past image stored on the individual's credit card, providing doctors with an instant indication of any dramatic changes in body size that might indicate a medical problem.

Dave believes that there's enormous potential for such technology. But as much as he believes that such devices might make our lives so much easier in the future, I only wish I had one of those existing European chip and PIN cards today so that I might have been able to top up the tank at the gas station.

Wednesday, November 16, 2011

A booming business?

If you watch the daily news on television, you might be forgiven for thinking that Europe is in a complete financial and economic mess. But you wouldn't think so if you attended last week's VISION Show 2011. For there, a record number of companies and attendees filled the halls of the Messe Stuttgart, proving that despite the problems that might face the folks in the Eurozone, the vision industry still appears to be booming.

That's right. There's no doubt about it. This year's VISION 2011 show in Stuttgart, Germany was an unparalleled success. More than 350 exhibitors attended the show, an increase of 8.4% on the number that were there last year. And from the number of interested parties that were walking the aisles of the show, I'd say that interest in the industry is as high as it has ever been.

But what is really going on in the market? Is it booming or stagnant? To find out, I attended the annual networking reception held in the halls of the show, where Gregory Hollows from the Automated Imaging Association (AIA, United States) was joined by Sung-ho Huh from the Korean Machine Vision Association (KMVA) and Isabel Yang from the China Machine Vision Union (CMVU) to present the state of the market in their various countries.

Emerging from the crisis of 2009, which saw the vision systems market down 20%, Gregory Hollows -- the vice-chair of the AIA board of directors -- said that the vision systems market in the US had rallied, experiencing 4% growth this year. Not bad news on the US front, then.

Korea's Sung-ho Huh had pretty much the same to say. According to him, the Korean market is in pretty good shape, too, and had experienced growth of 5% this year. Of course, as one might expect, the picture from China was even rosier, with Isabel Yang telling us that the vision system market in China had experienced a growth rate of around 10%.

After the reception, of course, came the analysis. A few folks that I spoke to were somewhat worried about prospects for the market next year. While they had a reasonable 2011, they weren't expecting things to stay as positive in 2012. Others were interested to know how they might enter the more lucrative Asian marketplace, which they saw as a prime opportunity worth exploiting. And there were a few, I must admit, that didn't believe that the Chinese economy was quite as rosy as it was painted, citing a number of enormous factory openings there that had been put on hold due to weak demand in the West.

Interpreting market figures from any of the above organizations, no matter how carefully they are researched, might never provide a true indication of how vibrant the vision systems marketplace is. Perhaps the only true market indicator could be found by counting the number of companies and attendees at VISION 2011 itself. If that's anything to go by, I'd say that the vision business is still in pretty good shape!

Friday, November 4, 2011

Mistaken identity

Next week marks the start of one of the biggest events in Vision Systems Design's calendar -- the VISION 2011 show in Stuttgart, Germany.

Accompanying me to the show this year will be Susan Smith, our publisher; Judy Leger, our national sales manager; and the latest addition to our editorial team, Dave Wilson.

As many of you may know, Dave joined the magazine just last month to increase our presence in the important European market. But what some of you may not know is that Dave is also my identical twin brother, a fact I thought I'd make perfectly clear before the show begins, in order to diminish the confusion that will inevitably arise from cases of mistaken identity on the show floor.

You see, although numerous vision system algorithms have been developed over the years to differentiate between products of a similar nature, I'm sorry to say that most human beings' visual systems -- even those in the machine vision industry -- seem to be incapable of differentiating between the two of us, despite the fact that I clearly inherited all the brains and good looks.

For that reason, I have programmed my brother's central processing unit to respond to the greeting "Hi, Andy" whenever he hears it, after which he will instigate a verbal subroutine, which will explain that he is simply a poor imitation of the real thing.

However, if you do run into the man instead of me, you will find that he is equally as willing to learn what new technologies are being discussed at the show.

He would be especially delighted to discuss any applications of machine vision related to the use of smart cameras and hyperspectral imaging. Please be sure to bend the man's ear if you see him!

Tuesday, November 1, 2011

Win $50,000 courtesy of the US Government!

Today's troops often confiscate remnants of destroyed documents from war zones, but reconstructing entire documents from them is a daunting task.

To discover if they can unearth a more effective means to do just that, the folks at DARPA have come up with a challenge that they hope will encourage individuals to develop a more automated solution.

That's right. The defense organization is hoping that by offering a whopping $50,000 in prize money, entrants to its so-called "Shredder Challenge" will generate some ideas that it might be able to make use of.

The Challenge itself consists of solving five individual puzzles embedded in the content of documents that have all been shredded by different means. To take part, entrants must download the images of the shredded documents from the challenge web site, reconstruct the documents, solve the puzzles, and submit the correct answers before Dec. 4, 2011.

Points will be awarded to those who provide correct answers to the mandatory questions associated with each puzzle, with $1,000 awarded for each point scored, up to $50,000 for a perfect score. DARPA will award one cash prize of up to $50,000 to the participant who scores the highest total number of points by the deadline.

Registration is open to all eligible parties at www.shredderchallenge.com, which provides detailed rules and images of the shredded documents for the five problems.





Clearly, this is an application that would benefit from the expert knowledge of those in the image processing field who might be able to develop -- or deploy -- a set of vision-based algorithms to reconstruct the documents and hence solve the puzzles.
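
One obvious starting point for such an algorithm is to treat each shred's left and right edges as pixel profiles and look for the pairs that match best. Here's a crude sketch of that idea -- my own toy approach, not any entrant's solution:

```python
# Toy shred-matching idea: score candidate neighbors by how well their touching edge
# pixel columns agree. This is my own crude illustration, not any DARPA entrant's solution.
import numpy as np

def edge_similarity(left_shred, right_shred):
    """Higher is better: negative mean absolute difference between the touching edge columns."""
    left_edge = left_shred[:, -1].astype(float)    # rightmost column of the left-hand shred
    right_edge = right_shred[:, 0].astype(float)   # leftmost column of the right-hand shred
    return -float(np.mean(np.abs(left_edge - right_edge)))

def best_right_neighbor(shred, candidates):
    """Index of the candidate shred that fits best to the right of `shred`."""
    scores = [edge_similarity(shred, c) for c in candidates]
    return int(np.argmax(scores))

# Random stand-in shreds (100 pixels tall, 20 wide) purely to exercise the functions.
shreds = [np.random.randint(0, 255, (100, 20), dtype=np.uint8) for _ in range(5)]
print(best_right_neighbor(shreds[0], shreds[1:]))
```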

Interestingly enough, of course, several individuals contributing to the discussion forums on the Shredder Challenge web site are taking exactly that approach...

Wednesday, October 26, 2011

Camera runs at 40,000 frames per second

A camera invented at the Langley Field Laboratory has captured images at an astonishing 40,000 frames/s, providing researchers with a great deal of insight concerning the phenomenon of knock in spark-ignition engines over a six-year period.

The high-speed motion picture camera operates on a principle that its inventors call optical compensation. The photosensitive film used in the camera is kept continuously in motion and the photographic images are moved with the film such that each image remains stationary relative to the film during the time of its exposure.

That's right. This isn't a digital camera at all, but a film camera. But perhaps even more remarkable is that it was invented in February 1936! The first working version of the camera was constructed in the Norfolk Navy Yard during 1938, and the camera operated successfully the first time it was run, on December 16, 1938, at Langley Field.
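
Some back-of-the-envelope arithmetic shows just how demanding the design was. The frame size below is my own assumption rather than a figure from the original report, but it gives a feel for the numbers involved:

```python
# Back-of-the-envelope numbers for a 40,000 frame/s film camera.
# The frame height is my own assumption; the original design details are in Miller's article.
fps = 40_000
frame_height_mm = 2.5                                  # assumed height of one frame on the film

frame_time_us = 1e6 / fps                              # 25 microseconds per frame
film_speed_m_per_s = fps * frame_height_mm / 1000.0    # the film must move at ~100 m/s

print(f"Each frame lasts {frame_time_us:.0f} us; the film travels at {film_speed_m_per_s:.0f} m/s")
```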

Now, thanks to an article written by Cearcy Miller, interested readers can not only discover exactly how the camera was designed but also view some high-speed motion pictures of familiar objects that illustrate the quality of the photographs taken by the camera at the time.

If you thought that high-speed imaging was a relatively new idea, why not check out how the engineers solved the problem all those years ago!

Friday, October 21, 2011

From dissertation to product release

In 2006, Ren Ng's PhD research on lightfield photography won Stanford University's prize for best thesis in computer science as well as the internationally recognized ACM Dissertation award.

Since leaving Stanford, Dr. Ng has been busy starting up his own company called Lytro (Mountain View, CA, USA), to commercialize a camera based on the principles of lightfield technology while making it practical enough for everyday use [see the Vision Insider blog entry "Lightfield camera headed for the consumer market"].



This week saw the result of his efforts, as Lytro took the wraps off a set of three cameras that can all capture the color, intensity, and direction of all the light in a scene, enabling users to focus the images they take after the fact.

The cameras themselves aren't all that different -- except for the paint job on the outside. The first two, in Electric Blue and Graphite, cost $399 and are capable of storing 350 pictures. A Red Hot version -- at the somewhat higher price of $499 -- is capable of storing 750.

With no unnecessary modes or dials, the cameras feature just two buttons (power and shutter) and have a glass touch-screen that allows pictures to be viewed and refocused directly before they are downloaded to a computer.

To illustrate the capabilities of the new cameras, a number of Lytro employees and select testers have taken some snaps and uploaded the results to the company's so-called Living Pictures Gallery, where they can be viewed and refocused on the web.

As savvy a marketing idea as that is, I can't say the same about the company's choice of computer platform for the free desktop application that imports pictures from camera to computer. Rather than produce software for the enormously popular Windows PC, the company chose to support Mac OS X in its initial release.

Despite this minor upset, the company does have more exciting projects in the works. Next year, for example, it plans to launch a set of software tools that will allow the lightfield pictures to be viewed on any 3-D display and to enable viewers to shift the perspective of the scene.

Wednesday, October 19, 2011

Sent to Coventry

This week, I dispatched our European editor Dave Wilson off to the Photonex trade show in Coventry in the UK to discover what novel machine-vision systems might be under development in Europe.

Starting off early to beat the traffic jams on the motorway, he arrived at the Ricoh show grounds at the ungodly hour of eight in the morning. But that gave him a good two hours to plan the day ahead before the doors of the show opened -- which is exactly what he did.

Whilst browsing through the technical seminar program organized by the UK Industrial Vision Association (UKIVA) over a breakfast of Mexican food, one presentation in particular caught his eye.

Entitled “3D imaging in action,” it promised to reveal how a Sony smart camera and a GigE camera could be used together to create a 3-D image-processing system that could analyze the characteristics of parts on a rotating table.

The demonstration by Paul Wilson, managing director of Scorpion Vision (Lymington, UK; www.scorpionvision.co.uk), would illustrate the very techniques that had been used by a system integrator who had developed a robotic vision system that could first identify -- and then manipulate -- car wheels of different sizes and heights.

And indeed it did. During the short presentation, Wilson explained how the Scorpion Vision software developed by Tordivel (Oslo, Norway; www.tordivel.no) had been used to create the application which was first capturing three-dimensional images of the parts and then making measurements on them. The entire application ran under an embedded version of Windows XP on the Sony smart camera.

Interestingly enough, software such as Tordivel’s allows applications such as this to be developed by a user with few, if any, programming skills. Instead, they are created through a graphical user interface from which a user chooses a number of different tools to perform whatever image-analysis tasks are required.

The ease by which such software allows system integrators to build systems runs in stark contrast to other more traditional forms of programming, or even more contemporary ones that make use of graphical development environments. Both of these require a greater level of software expertise and training than such non-programmed graphical user interfaces.

Even so, the more sophisticated and easier the software is to use, the more expensive it is likely to be, a fact that was not lost on Scorpion Vision’s managing director as he spoke to our man Dave at the show.

Nevertheless, he also argued that higher initial software costs can often be quickly offset by the greater number of systems that can be developed by a user in any given period of time -- an equally important consideration to be taken into account when considering which package to use to develop your own 3-D vision system.

Friday, October 14, 2011

Steve Jobs, the OEM integrator, and me

Back in 1990, I decided to start my own publishing company. Transatlantic Publishing, as the outfit was called, was formed specifically to print a new magazine called The OEM Integrator, a journal that targeted folks building systems from off-the-shelf hardware and software.

I hadn't given much thought to that publication for years, until last week that is, when my brother telephoned me to say that he had unearthed a copy of the premier issue of the publication, complete with the entire media pack that was produced to support it.

Intrigued to discover what gems might have been written way back then, I asked him to email me a PDF of one or two of the stories that had appeared in that first issue.

As you can imagine, I had to chuckle when I opened the email attachment. For there, in all its glory, was a roundup of new hardware and software products that had been announced for none other than the Apple NuBus, a 32-bit parallel computer bus incorporated into computer products for a very brief period of time by Steve Jobs' Apple Computer. [UPDATE: Click here to read the article from the 90s!]

Despite my enthusiasm for the new bus board standard, NuBus didn't last too long, and when Apple switched to the PCI bus in the mid-1990s, NuBus quickly vanished.

But the bus that my brother chose to write an even lengthier piece on had even less success in the marketplace. His article touted the benefits of the Futurebus -- a bus that many then believed would be the successor to the VMEbus. Sadly, however, the effort to standardize this new bus took so long that everyone involved lost interest, and Futurebus was hardly used at all.

Both these articles point out one important fact that all industry commentators would do well to take heed of. If you are going to make predictions as to what new technology is going to set the world on fire, you've got to be very, very careful indeed!

Wednesday, October 12, 2011

Technology repurposing can be innovative, too

More years ago than I care to remember, the president of a small engineering company asked me if I would join several other members of his engineering team on a panel to help judge a competition that he was running in conjunction with the local high school.

The idea behind the competition was pretty simple. Ten groups of students had each been supplied with a pile of off-the-shelf computer peripherals that the engineering company no longer had any use for, and tasked with coming up with novel uses for them.

As the teams presented their ideas to the panel, it became obvious that they were all lateral thinkers. Many of them had ripped out the innards of the keyboards, mice, and loudspeakers they had been provided with and repurposed them in unusual and innovative ways to solve specific engineering problems.

Recently, a number of engineering teams across the US have taken a similar approach to solving their own problems, too, but this time with the help of more sophisticated off-the-shelf consumer technology -- more specifically, inexpensive smart phones.

Engineers at the California Institute of Technology, for example, have taken one of the beasts and used it to build a "smart" petri dish to image cell cultures. Those at the University of California-Davis have transformed an iPhone into a system that can perform microscopy. And engineers at Worcester Polytechnic Institute have developed an app that uses the video camera of a phone to measure heart rate, respiration rate, and blood oxygen saturation.
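
That last trick relies on photoplethysmography: with a fingertip held over the camera, the average brightness of each video frame pulses slightly with every heartbeat, and the dominant frequency of that signal is the heart rate. Here's a minimal sketch of the principle, assuming the brightness trace has already been extracted from the video -- an illustration of the idea, not WPI's actual app:

```python
# Minimal photoplethysmography sketch: estimate heart rate from the per-frame mean
# brightness of a fingertip video. Assumes the brightness trace has already been
# extracted from the camera; this illustrates the principle, not WPI's app.
import numpy as np

def heart_rate_bpm(brightness, fps):
    signal = brightness - brightness.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 3.5)             # plausible heart rates: 42-210 bpm
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Synthetic 72-bpm pulse sampled at 30 frames/s for testing.
fps = 30.0
t = np.arange(0, 20, 1.0 / fps)
trace = 100 + 2.0 * np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
print(f"Estimated heart rate: {heart_rate_bpm(trace, fps):.0f} bpm")   # ~72
```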

Taking existing system-level components and using them in novel ways may never win those engineers the same accolades that the designers of the original components often receive. But the work of such lateral thinkers is no less original. Their work just goes to show that great product ideas do not necessarily have to be entirely game-changing. Sometimes, repurposing existing technology can be equally as innovative.

Friday, October 7, 2011

Software simplifies system specification

National Instruments' NI Week in Austin, TX was a great chance to learn how designers of vision-based systems used the company's LabVIEW graphical programming software to ease the burden of software development.

But as useful as such software is, I couldn't help but think that it doesn't come close to addressing the bigger issues faced by system developers at a much higher, more abstract level.

You see, defining the exact nature of any inspection problem is the most taxing issue that system integrators face. And only when that has been done can they set to work choosing the lighting, the cameras, and the computer, and writing the software that is up to the task.

It's obvious, then, that software like LabVIEW only helps tackle one small part of this problem. But imagine if it could also select the hardware, based simply on a higher-level description of an inspection task. And then optimally partition the software application across such hardware.

From chatting to the NI folks in Texas, I got the feeling that I'm not alone in thinking that this is the way forward. I think they do, too. But it'll probably be a while before we see a LabVIEW-style product emerge into the market with that kind of functionality built in.

In the meantime, be sure to check out our October issue (coming online soon!) to see how one of NI's existing partners -- Coleman Technologies -- has used the LabVIEW software development environment to create software for a system that can rapidly inspect dinnerware for flaws.

Needless to say, the National Instruments software didn't choose the hardware for the system. But perhaps we will be writing an article about how it could do so in the next few years.

Wednesday, October 5, 2011

Spilling the military vision beans

While there are many fascinating application challenges that have been resolved by machine-vision systems, there are many that go unreported.

That's because the original equipment manufacturers (OEMs) that create such vision-based machines are required to sign non-disclosure agreements (NDAs) with their customers to restrict what information can be revealed.

Oftentimes, it’s not just the specifications of the machine that are required to be kept under wraps. These NDAs also restrict the disclosure of the challenge that needed to be addressed before the development of the system even commenced.

Now, you might think that the development of vision systems for the military marketplace might be an even more secretive affair. After all, building a vision system to protect those in battle would initially appear to be much more imperative than keeping quiet about a machine that inspects food or fuel cells.



While the specifics of military designs are almost impossible to obtain legally, the same is not true of depictions of the systems that the military would like to see developed in the future.

Often such descriptions are found in extensive detail on numerous military procurement sites, even down to the sorts of software algorithms and hardware implementations that are required to be deployed.

Could it be that in doing so, though, the military minds are handing over potentially constructive information to research teams in rogue states? If they are, then surely they are making a mockery of the very International Traffic in Arms Regulations (ITAR), which control the export and import of defense-related materials and services.

Thursday, September 29, 2011

Imaging is all in the mind

In the 1983 science-fiction movie classic Brainstorm, a team of scientists invents a helmet that allows sensations and emotions to be recorded from a person's brain and converted to tape so that others can experience them.

While this seemed quite unbelievable thirty years ago, it now appears that scientists at the University of California-Berkeley are bringing these futuristic ideas a little closer to reality!

As farfetched as it might sound, the university team in professor Jack Gallant's laboratory has developed a system that uses functional magnetic resonance imaging (fMRI) and computational algorithms to "decode" and then "reconstruct" visual experiences such as watching movies.

UC-Berkeley's Dr. Shinji Nishimoto and two other research team members served as guinea pigs to test out the system, which required them to remain still inside an MRI scanner for hours at a time.

While they were in the scanner, they watched two separate sets of movie trailers while the fMRI system measured the blood flow in their occipitotemporal visual cortexes. The images of the blood flow captured by the scanner were then divided into sections and fed into a computer program that learned which visual patterns in the movie corresponded with particular brain activity.

Brain activity evoked by a second set of clips was then used to test a movie reconstruction algorithm developed by the researchers. This was done by feeding random YouTube videos into the computer program. The 100 clips that the program decided were closest to the clips that the subject had probably seen based on the brain activity were then merged to produce a continuous reconstruction of the original clips.
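
Stripped of the neuroscience, that final step is a nearest-neighbor search followed by an average: score every candidate clip by how well its predicted brain response matches the observed activity, keep the best 100, and blend them. The skeletal sketch below is my own paraphrase of that published description, not the Gallant lab's code:

```python
# Skeletal version of the "pick the 100 closest clips and merge them" step --
# my paraphrase of the published description, not the Gallant lab's code.
import numpy as np

def reconstruct_frame(observed_activity, predicted_activity, clip_frames, k=100):
    """
    observed_activity:  (n_voxels,) measured fMRI response to the unknown clip
    predicted_activity: (n_clips, n_voxels) modelled response for each candidate clip
    clip_frames:        (n_clips, height, width) a representative frame per candidate
    """
    # Correlation between the observed response and each clip's predicted response.
    scores = np.array([np.corrcoef(observed_activity, p)[0, 1] for p in predicted_activity])
    top = np.argsort(scores)[-k:]                 # indices of the k best-matching clips
    return clip_frames[top].mean(axis=0)          # blurry average of their frames

# Tiny random example so the function can be exercised end to end.
rng = np.random.default_rng(1)
obs = rng.standard_normal(50)
pred = rng.standard_normal((500, 50))
frames = rng.random((500, 32, 32))
print(reconstruct_frame(obs, pred, frames).shape)  # (32, 32)
```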

The researchers' ideas might one day lead to the development of a system that could produce moving images that represent dreams and memories, too. If they do achieve that goal, however, I can only hope that the images are just as blurry as the ones that they have produced already. Anything sharper might be a little embarrassing!

Tuesday, September 27, 2011

Mugging more effective than infrared imaging

All technology can be used for both good and evil purposes. Take infrared cameras, for example. While they can be used to provide a good indication of where your house might need a little more insulation, they can also be used by crooks to capture the details of the PIN you use each time you slip your card into an ATM to withdraw cash.

That, at least, is the opinion of a band of researchers from the University of California at San Diego (San Diego, CA, USA) who have apparently now demonstrated that the secret codes typed in by banking customers on ATMs can be recorded by a digital infrared camera due to the residual heat left behind on their keypads.

According to an article on MIT's Technology Review web site, the California academics showed that a digital infrared camera can read the digits of a customer's PIN on the keypad more than 80% of the time if used immediately; if the camera is used a minute later, it can still detect the correct digits about half the time.

Keaton Mowery, a doctoral student in computer science at UCSD, conducted the research with fellow student Sarah Meiklejohn and professor Stefan Savage.

But even Mowery had to admit that the likelihood of anyone attacking an ATM in such a manner was low, partly due to the $18,000 cost of buying such a camera or its $2000 per month rental fee. He even acknowledged that mugging would prove a lot more reliable means to extract money from the ATM user, albeit the technique isn't quite as elegant as using an imaging system to do so.

Friday, September 23, 2011

Sick bay uses high-tech imaging

While the exploitation of vision systems has made inspection tasks more automated, those systems have also reduced or eliminated the need for unskilled workers.

But such workers won't be the only ones to suffer from the onslaught of vision technology -- pretty soon even skilled folks in professions such as medicine might start to see their roles diminished by automation systems, too.




As a precursor of things to come, take a look at a new system developed by researchers at the University of Leicester as a means of helping doctors to noninvasively diagnose disease.

Surrounding a conventional hospital bed, thermal, multispectral, hyperspectral, and ultrasound imagers gather information from patients. Complementing the imaging lineup is a real-time mass spectrometer that can analyze gases present in a patient's breath to detect signs of disease.

Professor Mark Sims, the University of Leicester researcher who led the development of the system, said that its aim was to replace a doctor's eyes with imaging systems, and his nose with breath analysis systems.

Even though nearly all the technologies employed in the system have been used in one way or another, Sims said that they have never all been used in an integrated manner.

Clearly, though, if this instrumentation were coupled to advanced software that could correlate all the information captured from a patient with a database of known disease traits, one would have a pretty powerful tool through which to diagnose disease.
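
In its simplest imaginable form, that correlation step could be little more than a nearest-neighbor lookup: combine the measurements from the various imagers and the breath analysis into a single feature vector and compare it against stored vectors for known conditions. The following deliberately naive sketch is pure speculation on my part, not anything the Leicester team has described:

```python
# Deliberately naive nearest-neighbor lookup of a patient's combined measurements
# against a database of known disease "signatures" -- pure speculation on my part.
import numpy as np

disease_db = {
    "condition_a": np.array([37.9, 0.62, 0.10]),   # e.g. skin temp (C), tissue index, breath marker
    "condition_b": np.array([36.6, 0.40, 0.55]),
    "healthy":     np.array([36.7, 0.45, 0.05]),
}

def closest_match(patient_features):
    return min(disease_db, key=lambda name: np.linalg.norm(disease_db[name] - patient_features))

print(closest_match(np.array([37.7, 0.60, 0.12])))   # -> condition_a
```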

The doctors, of course, would then have to find something else to occupy their time. But just think of the cost savings that could be made.

Monday, September 19, 2011

More to follow...

Starting this month, the Vision Insider blog will offer you, the reader, an opinionated twice-weekly update on industry trends, market reports, and new products and technologies. While many of these posts will be staff-written, we will also be using this forum to allow you, our readers, to opine on subjects such as machine vision and image processing standards, software, hardware, and the tradeshows you find useful.

If you have any strong opinions that you feel we could publish (without, of course, being taken to court), I would be only too pleased to hear from you. And, of course, if you disagree with what I have said, you are free to leave a comment using our easy-to-use feedback form.

Friday, August 19, 2011

A new journey

Conard Holton has accepted a new position as Associate Publisher and Editor in Chief of a sister publication, Laser Focus World. Andy Wilson, founding editor of Vision Systems Design, will be taking over the role of editor in chief. Andy has been the technical mainstay of Vision Systems Design since its beginning fifteen years ago, writing many of the articles that have established it as the premier resource for machine vision and image processing.

Wednesday, June 22, 2011

Lightfield camera headed for the consumer market

A recent article in the New York Times reveals that Lytro, a Mountain View, CA startup, plans to release a lightfield camera into the point-and-shoot consumer market later this year, allowing professional and amateur photographers to "take shots first and focus later."

With $50 million in venture funding, the company is led by Ren Ng, a Stanford University Ph.D. who wrote his thesis on the subject of lightfield cameras. His work and that of others are described in the March 2008 Vision Systems Design article Sharply Focused.

Several research organizations and machine vision camera manufacturers have developed lightfield cameras, including Stanford, MIT, Mitsubishi Electric Research Labs, Raytrix, and Point Grey Research. Like traditional cameras, lightfield cameras gather light using a single lens. However, by placing an array of lens elements at the image plane, the structure of light impinging on different sub-regions of the lens aperture can be captured.

By capturing data from these multiple sub-regions, software-based image-processing techniques can be used to select image data from any sub-region within the aperture. In this way, images can be recreated that represent views of the object from different positions, recreating a stereo effect. Better still, if the scene depth is calculated from the raw image, each pixel can be refocused individually to give an image that is in focus everywhere.
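
Refocusing after the fact boils down to a shift-and-add over the sub-aperture images: each view is translated by an amount proportional to its offset within the aperture and the chosen focal depth, and the results are averaged. Here's a simplified sketch of that operation using the standard textbook formulation -- not Lytro's proprietary pipeline:

```python
# Simplified shift-and-add refocusing over a grid of sub-aperture views --
# the textbook lightfield formulation, not Lytro's proprietary pipeline.
import numpy as np

def refocus(sub_views, alpha):
    """
    sub_views: (U, V, H, W) grayscale sub-aperture images across a U x V grid of viewpoints
    alpha:     refocus parameter; each view is shifted in proportion to its aperture offset
    """
    U, V, H, W = sub_views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))       # vertical shift for this viewpoint
            dv = int(round(alpha * (v - cv)))       # horizontal shift for this viewpoint
            out += np.roll(sub_views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Random 5x5 grid of 64x64 views, refocused at two different depths.
views = np.random.rand(5, 5, 64, 64)
near, far = refocus(views, alpha=1.0), refocus(views, alpha=-1.0)
print(near.shape, far.shape)
```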

An interactive photo from the Lytro website gives an idea of the potential uses of lightfield cameras (hint: click on an area of the image to focus). And there will be a Facebook app.

Friday, June 3, 2011

Don Braggins remembered

Don Braggins, a long-standing and highly respected figure in the machine vision industry, has passed away at age 70. Founder of Machine Vision Systems Consultancy in Royston, England, in 1983, Don specialized in image processing and analysis and was a frequent contributor to and participant in organizations such as the European Machine Vision Association and the UK Industrial Vision Association (UKIVA). A founding member of the UKIVA in 1992, he became its director in 1995 and helped guide its development for many years. He remained a consultant to the association until diagnosed with an inoperable brain tumor in 2010.

Traveling frequently with his wife Anne, Don was welcomed by companies, universities, and trade organizations around the world for his experience, insights, and good humor. Before establishing his own company, he was product marketing manager for image analysis products at Cambridge Instruments. A graduate of Clare College, Cambridge University, he was a Chartered Engineer and a Fellow of SPIE.

Machine Vision Systems Consultancy was known for its independence as a source of information about machine vision products and services. Its clients ranged from multinationals to startup companies, venture capitalists, and OEMs.

As editor of technical journals and frequent contributor to trade press magazines, Don regularly researched the European market for industrial vision systems for individual clients and associations. Between 2000 and 2002 he served as a non-executive board member of Fastcom Technology, a Swiss spinout from EPFL Lausanne. He was also a board member of Falcon Vision in Hungary, providing international marketing advice and technology sourcing, and introduced Falcon to the French company Edixia, which subsequently bought a controlling stake.

“Don knew the machine vision industry like the back of his hand,” remembers Andy Wilson, Editor of Vision Systems Design. “You could always rely on him to direct you towards the latest developments and innovations shown at a trade show. He was not only knowledgeable but would freely share his valuable opinions and thoughts with anyone who cared to ask. I will miss him.”

In addition to his wife Anne, Don is survived by two children and five grandchildren.

The staff of Vision Systems Design extend our sincerest condolences to the Braggins family.

--Conard Holton, Vision Systems Design

Friday, May 27, 2011

EMVA meeting--machine vision business good, missed the volcano

From May 12-14, more than 140 attendees at the 2011 EMVA business conference in Amsterdam celebrated the soaring market for machine vision products and recounted tales of traveling home from the 2010 meeting in Istanbul through the Eyjafjallajoekull volcano "Cloud". They didn't realize they would just miss yet another cloud from an Icelandic volcano, Grímsvötn, which erupted just seven days after the conference ended and disrupted air travel across parts of northern Europe.



The market health of European machine vision companies, and of companies doing business in Europe, was reported to be excellent, with many focused on application areas showing particular strength, especially automotive manufacturing, transportation imaging, and surveillance. From 2009 to 2010, overall growth by European companies was almost 35%, after an overall decline of 21% from 2008 to 2009.

Germany expects to see overall growth of at least 11% in 2011, putting it back on a trend line consistent with pre-2009 sales, and about 20% growth was expected globally, according to the EMVA. Europe should see about 22% and the Americas about 19%. Asia, after 62% year-over-year growth in 2010, should see growth of 18% in 2011.

Describing the market trends, Gabriele Jansen, president of Jansen C.E.O. and member of the EMVA Executive Board, attributed the strong rebound in machine vision sales to several factors:
- Increase in industrial production
- Broad-based improvement in sentiment among industry managers due to significant increases in overall orders and in production trends
- Decline in the inventory of finished goods to historically low levels
- Remaining effects of stimulus programs for specific industries (e.g., automotive)
- Strong increase in demand for machine vision in Asia

The mood at the conference was excited but a bit nervous, prompting many to ask: how can this growth be sustained?

To inspire attendees to ponder answers to that question, the conference included several speakers who focused on the future, with talks about globalization and sustainability, finding new markets using the Blue Ocean strategy, and how to think about and manage for the future.

Ramesh Raskar, an MIT professor, described his lab's work in lightfield imaging and computational light transport, including recent work on a camera that can look around a corner. Speakers also addressed the rapid advances being made in service robots and in machine vision for agriculture.

The networking, as always at EMVA events, was excellent. Talking with so many colleagues in the machine vision "industry" also reminded me that machine vision is not an industry per se. At its core, it is the integration of technologies and products that provide services or applications that benefit true industries such as automotive or consumer goods manufacturing, security, transportation, and agriculture.

Finally, a parting view of the Cloud that was missed: This Envisat image from ESA, acquired on 24 May 2011, shows a large cloud of ash northeast of Scotland that has been carried by winds from Iceland’s Grímsvötn volcano about 1000 km away. The Grímsvötn volcano, located in southeast Iceland about 200 km east of Reykjavik, began erupting on 21 May for the first time since 2004.

Thursday, May 19, 2011

New iPhone App for 3-D imaging

It's not exactly machine vision yet, but a researcher at Georgia Tech, Grant Schindler, has created what appears to be the first 3-D scanner app for an iPhone 4.

Using both the screen and the front-facing camera, the app--called Trimensional--detects patterns of light reflected off a face to build a true 3-D model.
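Schindler has not published the details of his algorithm here, but recovering shape from several images lit from different directions is, in essence, the classic photometric-stereo technique. The sketch below is only a minimal illustration of that textbook approach, assuming Lambertian shading and known light directions; the function and variable names are mine, not Trimensional's.

```python
# Minimal photometric-stereo sketch: given k images of the same surface
# lit from k known directions, solve I = L * n for a surface normal and
# albedo at every pixel (least squares, Lambertian assumption).
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, h, w) intensities; light_dirs: (k, 3) unit vectors."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                           # (k, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Toy example with three light directions and random image data.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.87],
              [0.0, 0.5, 0.87]])
imgs = np.random.rand(3, 32, 32)
normals, albedo = photometric_stereo(imgs, L)
print(normals.shape, albedo.shape)  # (3, 32, 32) (32, 32)
```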

Schindler says Trimensional can now share movies and animated GIFs of your 3-D scans, and that users can unlock the 3-D Model Export feature to create physical copies of any scanned object on a 3-D printer, or to import textured 3-D scans into popular 3-D graphics software.

I wonder if, and when, such iPhone apps will find their way into machine vision applications.

Monday, May 9, 2011

Machine vision vs Angry Birds

Readers of Vision Systems Design may know that OptoFidelity (Tampere, Finland) makes systems that test user experience with products such as PDAs and mobile phones. In fact, our October 2009 cover story featured one of the company’s automated test systems, the WatchDog, which performed just such a test using a JAI Camera Link camera and National Instruments frame grabber. The WatchDog system’s interface allows a user to correlate an exact measured response of both the refresh rate of the screen and any user interaction with the device.

Now, OptoFidelity has expanded its world and made a commercial-quality video called “Man vs Robot” about a vision-guided robotic system it has built that can beat humans playing Angry Birds — which, in the unlikely case you haven’t heard, is a computer game from Rovio Mobile (Espoo, Finland) being played by millions of people on various mobile displays.

Some of the fun, machine vision, and robot technology behind that video appears in this “Making of Man vs Robot” video, also from OptoFidelity.

Thursday, April 28, 2011

Vision guides Justin the Robot playing ball

This video explains it all. The DLR in Germany has developed Justin over the years as a very adaptable research robot, able to perform duties from acting as a butler to potentially working on a satellite.

The vision and motion capabilities shown by Justin in the video are remarkable. We will be covering more such capabilities in the coming months, based on the recently completed research report Vision for Service Robots, which is on sale on our website.

Wednesday, April 27, 2011

Machine vision industry consolidation - what's next?

Now that the recession is over and profits are rising fast, it seems that many companies are considering how to expand their markets and solidify both geographical and technological positions. The acquisition of LMI Technologies by Augusta Technologie, parent of Allied Vision Technologies, is just the most recent example, of course.

Augusta also recently acquired P+S Technik (digital film expertise) and VDS Vosskühler (infrared specialist), and in 2009 acquired Prosilica (GigE cameras).

Teledyne has been reconfiguring the machine vision world with its recent acquisitions of DALSA (cameras, boards, software) and Nova Sensors (infrared), and the partial acquisition of Optech (airborne and space imaging).

And, in this year alone, Pro-Lite acquired light measurement supplier SphereOptics. Camera systems supplier NET New Electronic Technology acquired iv-tec, which develops algorithms and real-time image-processing software. And Adept acquired food-packaging equipment supplier InMoTx, after having acquired service robot maker MobileRobots in 2010.

These are only the most recent and obvious acquisitions. Numerous OEMs and peripheral software and hardware makers have also merged or been acquired. It's a trend long predicted in the machine vision world. What hardware and software products will be in demand by those seeking to expand? What's next in the drive to create full-product-line vendors to serve vision system integrators and end-users?

Friday, April 15, 2011

Remotely controlled equipment in action at Fukushima

This video from IDG News Service highlights some of the roles played by robotic equipment in the analysis and recovery from the disasters at the Fukushima nuclear power plants in Japan.



The plant operator Tokyo Electric Power Company (Tepco) deployed three camera-equipped, remote-controlled excavators donated by Shimizu and Kajima to clear radioactive debris around the unit 3 reactor. Robots sent to Japan by Qinetiq North America are still being evaluated before deployment to the site.

In addition, Tepco launched a Honeywell T-Hawk micro air vehicle to survey the plant from above, according to a report on CNET.

Tuesday, April 12, 2011

Having fun with Kinect and machine vision

In the course of researching our Vision for Service Robots market report, it became obvious that low-end vision systems would be a great boon to robot developers of all sorts. And indeed, researchers are taking advantage of low-cost consumer sensors to design increasingly capable and inexpensive robots.

The Microsoft Kinect, designed for the Xbox 360 game system, has set new records for consumer sales and is generating considerable excitement among robot hobbyists and researchers. The Kinect sells for about $150, and its embedded infrared structured-light depth sensor and RGB camera can be used as a vision system for some service robots. For some interesting applications of Kinect in service robots, IEEE Spectrum has a good blog: Top 10 Robotic Kinect Hacks.
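To give a flavor of why such a sensor appeals to robot builders, here is a minimal sketch of turning a Kinect-style depth map into a 3-D point cloud using the pinhole-camera model. The intrinsic parameters below are illustrative placeholders, not calibrated Kinect values.

```python
# Sketch: converting a depth map (in meters) into a 3-D point cloud with
# a pinhole-camera model. FX, FY, CX, CY are assumed, uncalibrated values
# for a 640 x 480 sensor, used only for illustration.
import numpy as np

FX, FY = 580.0, 580.0   # assumed focal lengths in pixels
CX, CY = 320.0, 240.0   # assumed principal point

def depth_to_points(depth_m):
    """depth_m: (h, w) array of depths in meters; returns (h*w, 3) points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 2 meters in front of the sensor.
cloud = depth_to_points(np.full((480, 640), 2.0))
print(cloud.shape)  # (307200, 3)
```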



But hobbyists and service robot makers aren’t the only ones taking advantage of Kinect. MVTec Software has just tested Kinect in 3-D applications for industrial tasks such as bin picking, packaging, and palletizing, as well as for research and development.

And Eye Vision Technology has used the Kinect sensor with its EyeScan 3D system for robotic applications such as depalletization and sorting components on the assembly line.

Tuesday, April 5, 2011

Robots search and assess Japanese disaster sites

Increasingly, vision-guided service robots are being deployed for rescue and assessment tasks following the earthquake and tsunami in Japan. A recent blog on IEEE Spectrum covers the deployment of KOHGA3 by a team from Kyoto University.

The team used the remote-controlled ground robot to enter a gymnasium in Hachinohe, Aomori Prefecture, in the northeastern portion of Japan's Honshu island, and to assess the damage. They also tried to inspect other damaged buildings in the region, with limited success.

The robotics team is led by Professor Fumitoshi Matsuno. KOHGA3 has four sets of tracks that allow it to traverse rubble, climb steps, and go over inclines of up to 45 degrees. The robot carries three CCD cameras, a thermal imaging camera, a laser scanner, an LED light, an attitude sensor, and a gas sensor. Its 4-degree-of-freedom robotic arm is nearly 1 meter long and is equipped with a CCD camera, a carbon-dioxide sensor, a thermal sensor, and an LED light.



In addition, there are several early reports on robot forays or plans, and numerous teams from various robot organizations are making themselves available to help. For example, you can follow the efforts in Japan of Dr. Robin Murphy, who directs the Center for Robot-Assisted Search and Rescue (CRASAR) at Texas A&M University, on her blog.

Tuesday, March 29, 2011

Automate 2011 shows synergy of shows

I don’t know whether Automate 2011 (held at McCormick Place in Chicago, March 21-24) was a success for every exhibitor and attendee, but it had all the necessary elements. Unofficial numbers for the show were 170 exhibitors and over 7500 attendees.

The floor traffic, which waxed and waned during the four days of the show, seemed to consist of many system integrators, tool manufacturers, and warehouse system providers—ideal traffic for the show and attributable to the very large, co-located ProMat show on materials handling. Indeed, walking around ProMat was a bit like walking through a display of machine vision in action.

Having recently completed a market report on the use of vision in service robots in, among other applications, warehousing, I was delighted to see the robots at Kiva Systems gliding eerily around the floor, using embedded smart-camera technology to read Data Matrix codes on the floor and to position themselves beneath shelves, which they then picked up and carried to a human who would pick out the desired parts.

In more good news for the machine vision industry, the International Federation of Robotics (IFR) presented the preliminary results of its annual statistics for industrial robots, including vision-guided robots. With more than 115,000 industrial robots shipped in 2010, the number of units sold worldwide almost doubled from 2009, which had been the weakest year since the early 1990s.

Here are some slides from the IFR presentation, which show trends from 2001 to 2011 and the strength of markets in Asia, especially Korea. An article on the subject explores the trends in different regions and industries.



Thursday, March 17, 2011

Will earthquake impact machine vision component supply?

News about the supply chain of vision and electronic components coming from Japan has so far been tentative and sporadic. The ongoing effects of the earthquake, tsunami, and nuclear power plant failures have dominated the news but indications of specific global economic consequences are emerging.

A New York Times article today investigates some of the impacts, noting that a Texas Instruments plant north of Tokyo that makes A/D chips and accounts for 10% of TI's global output was damaged and won't resume full production until September. Toshiba has closed some of its production lines, potentially affecting the availability of NAND flash chips.

The port of Sendai-Shiogama is heavily damaged. It is the 13th largest Japanese port in container shipments and of particular importance to Sony, Canon, and Pioneer. FedEx shut service to much of eastern Japan, including Tokyo, following the earthquake but now reports resumption with some service delays.

Please let me know if you have any related information - cholton@pennwell.com.

Friday, February 18, 2011

Hummingbird with video camera

A new surveillance device may be arriving at your bird feeder soon. Yesterday, AeroVironment (Monrovia, CA) announced that its Nano Hummingbird prototype can hover precisely and fly forward at speed. Weighing two-thirds of an ounce, including batteries and video cameras, the prototype was built as part of the DARPA Nano Air Vehicle program.

The final concept demonstrator is capable of climbing and descending vertically, flying sideways left and right, flying forward and backward, as well as rotating clockwise and counter-clockwise, under remote control. During the demonstration the Nano Hummingbird flew in and out of a building through a normal-size doorway.



The hand-made prototype aircraft has a wingspan of 16 cm (6.5 inches) and can be fitted with a removable body fairing, which is shaped to have the appearance of a hummingbird. The company, which makes a variety of unmanned aerial vehicles used by the military, says the Nano is larger and heavier than an average hummingbird, but is smaller and lighter than the largest hummingbird currently found in nature.

Vision Systems Design is publishing a market report on vision for such UAVs and other service robots. For more information, click here.