Friday, November 4, 2011

Mistaken identity

Next week marks the start of one of the biggest events in Vision Systems Design's calendar -- the VISION 2011 show in Stuttgart, Germany.

Accompanying me to the show this year will be Susan Smith, our publisher; Judy Leger, our national sales manager; and the latest addition to our editorial team, Dave Wilson.

As many of you may know, Dave joined the magazine just last month to increase our presence in the important European market. But what some of you may not know is that Dave is also my identical twin brother, a fact I thought I'd make perfectly clear before the show begins, in order to diminish the confusion that will inevitably arise from cases of mistaken identity on the show floor.

You see, although numerous vision system algorithms have been developed over the years to differentiate between products of a similar nature, I'm sorry to say that most human beings' visual systems -- even those in the machine vision industry -- seem to be incapable of differentiating between the two of us, despite the fact that I clearly inherited all the brains and good looks.

For that reason, I have programmed my brother's central processing unit to respond to the greeting "Hi, Andy" whenever he hears it, after which he will instigate a verbal subroutine, which will explain that he is simply a poor imitation of the real thing.

However, if you do run into the man instead of me, you will find that he is just as willing to learn what new technologies are being discussed at the show.

He would be especially delighted to discuss any applications of machine vision related to the use of smart cameras and hyperspectral imaging. Please be sure to bend the man's ear if you see him!

Tuesday, November 1, 2011

Win $50,000 courtesy of the US Government!

Today's troops often confiscate remnants of destroyed documents from war zones, but reconstructing entire documents from them is a daunting task.

To discover if they can unearth a more effective means to do just that, the folks at DARPA have come up with a challenge that they hope will encourage individuals to develop a more automated solution.

That's right. The defense organization is hoping that by offering a whopping $50,000 in prize money, entrants to its so-called "Shredder Challenge" will generate some ideas that it might be able to make use of.

The Challenge itself consists of solving five individual puzzles embedded in the content of documents that have all been shredded by different means. To take part, participants must download the images of the shredded documents from the challenge web site, reconstruct the documents, solve the puzzles, and submit the correct answers before Dec. 4, 2011.

Points will be awarded for correct answers to the mandatory questions associated with each puzzle, with each point worth $1,000 -- up to $50,000 for a perfect score. DARPA will then award a single cash prize to the participant who has scored the highest total number of points by the deadline.

Registration is open to all eligible parties at www.shredderchallenge.com, which provides detailed rules and images of the shredded documents for the five problems.

Clearly, this is an application that would benefit from the expert knowledge of those in the image processing field who might be able to develop -- or deploy -- a set of vision-based algorithms to reconstruct the documents and hence solve the puzzles.

Interestingly enough, of course, several individuals contributing to the discussion forums on the Shredder Challenge web site are taking exactly that approach...
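
For the curious, here is the sort of first step such an approach might take -- a minimal sketch of my own in Python with OpenCV 4.x, in which the file name and size threshold are hypothetical and the shreds are assumed to scan lighter than the scanner background. It simply isolates the individual shreds from a scanned image so that they can later be matched edge to edge:

import cv2

# Load a scan of the scattered shreds (the file name is hypothetical).
scan = cv2.imread("puzzle1_scan.png")
gray = cv2.cvtColor(scan, cv2.COLOR_BGR2GRAY)

# Assuming the paper shreds are lighter than the dark scanner background,
# Otsu thresholding separates paper from background automatically.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Trace the outline of each shred and crop it out for later matching.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
shreds = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 500:  # ignore specks of scanner noise
        shreds.append(scan[y:y + h, x:x + w])

print("Extracted %d candidate shreds" % len(shreds))

From there, of course, the real work begins: comparing the torn edges, and the fragments of text along them, to decide which shreds are neighbors.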

Wednesday, October 26, 2011

Camera runs at 40,000 frames per second

A camera invented at the Langley Field Laboratory has captured images at an astonishing 40,000 frames/s, providing researchers, over a six-year period, with a great deal of insight into the phenomenon of knock in spark-ignition engines.

The high-speed motion picture camera operates on a principle that its inventors call optical compensation. The photosensitive film used in the camera is kept continuously in motion and the photographic images are moved with the film such that each image remains stationary relative to the film during the time of its exposure.
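
As a back-of-the-envelope aside of my own (it is not spelled out in the original article), the compensation requirement is simply that the optics sweep each image across the gate at exactly the film's linear speed, so that v_image = v_film; and at 40,000 frames/s, each frame has at most 1/40,000 s -- a mere 25 microseconds -- in which to be exposed.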

That's right. This isn't a digital camera at all, but a film camera. But perhaps even more remarkable is that it was invented in February 1936! The first working version of the camera was constructed in the Norfolk Navy Yard during 1938, and it operated successfully the first time it was run, on December 16, 1938, at Langley Field.

Now, thanks to an article written by Cearcy Miller, interested readers can not only discover exactly how the camera was designed but also view some high-speed motion pictures of familiar objects that illustrate the quality of the photographs taken by the camera at the time.

If you thought that high-speed imaging was a relatively new idea, why not check out how the engineers solved the problem all those years ago!

Friday, October 21, 2011

From dissertation to product release

In 2006, Ren Ng's PhD research on lightfield photography won Stanford University's prize for best thesis in computer science as well as the internationally recognized ACM Dissertation award.

Since leaving Stanford, Dr. Ng has been busy starting up his own company, Lytro (Mountain View, CA, USA), to commercialize a camera based on the principles of lightfield technology while making it practical enough for everyday use [see the Vision Insider blog entry "Lightfield camera headed for the consumer market"].

This week saw the result of his efforts, as Lytro took the wraps off a set of three cameras that can all capture the color, intensity, and direction of all the light in a scene, enabling users to focus the images they take after the fact.
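
For those curious about what "focusing after the fact" means computationally, here is a toy Python/NumPy sketch of my own of the classic shift-and-add refocusing idea from lightfield rendering. The array layout and parameter names are illustrative assumptions, and Lytro's actual processing pipeline is certainly more sophisticated:

import numpy as np

def refocus(lightfield, alpha):
    """Toy shift-and-add refocusing of a 4-D lightfield.

    lightfield: array of shape (U, V, H, W) holding one grayscale
    sub-aperture image per (u, v) lens position.
    alpha: depth of the virtual focal plane; 0 keeps the captured focus.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its offset
            # from the lens center, then average all of the views.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

Sweeping alpha re-renders the very same exposure focused at different depths -- which is exactly the trick that lets users play with focus long after the shutter has fired.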

The cameras themselves aren't all that different -- except for the paint job on the outside. The first two, in Electric Blue and Graphite, cost $399 and are capable of storing 350 pictures. A Red Hot version -- at the somewhat higher price of $499 -- is capable of storing 750.

With no unnecessary modes or dials, the cameras feature just two buttons (power and shutter) and have a glass touch-screen that allows pictures to be viewed and refocused directly before they are downloaded to a computer.

To illustrate the capabilities of the new cameras, a number of Lytro employees and select testers have taken some snaps and uploaded the results to the company's so-called Living Pictures Gallery, where they can be viewed and refocused on the web.

As savvy a marketing idea as that is, I can't say the same about the company's choice of computer platform for the free desktop application that imports pictures from camera to computer. Rather than produce software for the enormously popular Windows PC, the company chose to support Mac OS X in its initial release.

Despite this minor upset, the company does have more exciting projects in the works. Next year, for example, it plans to launch a set of software tools that will allow the lightfield pictures to be viewed on any 3-D display and to enable viewers to shift the perspective of the scene.

Wednesday, October 19, 2011

Sent to Coventry

This week, I dispatched our European editor Dave Wilson off to the Photonex trade show in Coventry in the UK to discover what novel machine-vision systems might be under development in Europe.

Starting off early to beat the traffic jams on the motorway, he arrived at the Ricoh Arena show grounds at the ungodly hour of eight in the morning. But that gave him a good two hours to plan the day ahead before the doors of the show opened -- which is exactly what he did.

Whilst browsing through the technical seminar program organized by the UK Industrial Vision Association (UKIVA) over a breakfast of Mexican food, one presentation in particular caught his eye.

Entitled “3D imaging in action,” it promised to reveal how a Sony smart camera and a GigE camera could be used together to create a 3-D image-processing system that could analyze the characteristics of parts on a rotating table.

The demonstration by Paul Wilson, managing director of Scorpion Vision (Lymington, UK; www.scorpionvision.co.uk), would illustrate the very techniques that had been used by a system integrator who had developed a robotic vision system that could first identify -- and then manipulate -- car wheels of different sizes and heights.

And indeed it did. During the short presentation, Wilson explained how the Scorpion Vision software developed by Tordivel (Oslo, Norway; www.tordivel.no) had been used to create the application, which first captured three-dimensional images of the parts and then made measurements on them. The entire application ran under an embedded version of Windows XP on the Sony smart camera.
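
As a textbook aside of my own -- the talk itself didn't dwell on the underlying math -- the principle behind any such two-camera 3-D system is triangulation: once the pair is calibrated and the images rectified, the depth of a feature follows from Z = f * B / d, where f is the focal length, B the baseline between the two cameras, and d the disparity (the feature's shift between the two images).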

Interestingly enough, software such as Tordivel’s allows applications such as this to be developed by a user with few, if any, programming skills. Instead, they are created through a graphical user interface from which a user chooses a number of different tools to perform whatever image-analysis tasks are required.

The ease with which such software allows system integrators to build systems stands in stark contrast to more traditional forms of programming, or even more contemporary ones that make use of graphical development environments. Both of these require a greater level of software expertise and training than such non-programmed graphical user interfaces.

Even so, the more sophisticated and easier the software is to use, the more expensive it is likely to be, a fact that was not lost on Scorpion Vision’s managing director as he spoke to our man Dave at the show.

Nevertheless, he also argued that higher initial software costs can often be quickly offset by the greater number of systems that can be developed by a user in any given period of time -- an equally important consideration to be taken into account when considering which package to use to develop your own 3-D vision system.

Friday, October 14, 2011

Steve Jobs, the OEM integrator, and me

Back in 1990, I decided to start my own publishing company. Transatlantic Publishing, as the outfit was called, was formed specifically to print a new magazine called The OEM Integrator, a journal that targeted folks building systems from off-the-shelf hardware and software.

I hadn't given much thought to that publication for years, until last week that is, when my brother telephoned me to say that he had unearthed a copy of the premier issue of the publication, complete with the entire media pack that was produced to support it.

Intrigued to discover what gems might have been written way back then, I asked him to email me a PDF of one or two of the stories that had appeared in that first issue.

As you can imagine, I had to chuckle when I opened the email attachment. For there, in all its glory, was a roundup of new hardware and software products that had been announced for none other than the Apple NuBus, a 32-bit parallel computer bus incorporated into computer products for a very brief period of time by Steve Jobs' Apple Computer. [UPDATE: Click here to read the article from the 90s!]

Despite my enthusiasm for the new bus board standard, NuBus didn't last too long, and when Apple switched to the PCI bus in the mid-1990s, NuBus quickly vanished.

But the bus that my brother chose to write an even lengthier piece on had even less success in the marketplace. His article touted the benefits of the Futurebus -- a bus that many then believed would be the successor to the VMEbus. Sadly, however, the effort to standardize this new bus took so long that everyone involved lost interest, and Futurebus was hardly used at all.

Both these articles point out one important fact that all industry commentators would do well to take heed of. If you are going to make predictions as to what new technology is going to set the world on fire, you've got to be very, very careful indeed!

Wednesday, October 12, 2011

Technology repurposing can be innovative, too

More years ago than I care to remember, the president of a small engineering company asked me if I would join several other members of his engineering team on a panel to help judge a competition that he was running in conjunction with the local high school.

The idea behind the competition was pretty simple. Ten groups of students had each been supplied with a pile of off-the-shelf computer peripherals that the engineering company no longer had any use for, and tasked with coming up with novel uses for them.

As the teams presented their ideas to the panel, it became obvious that they were all lateral thinkers. Many of them had ripped out the innards of the keyboards, mice, and loudspeakers they had been provided with and repurposed them in unusual and innovative ways to solve specific engineering problems.

Recently, a number of engineering teams across the US have taken a similar approach to solving their own problems, too, but this time with the help of more sophisticated off-the-shelf consumer technology -- more specifically, inexpensive smart phones.

Engineers at the California Institute of Technology, for example, have taken one of the beasts and used it to build a "smart" petri dish to image cell cultures. Those at the University of California-Davis have transformed an iPhone into a system that can perform microscopy. And engineers at Worcester Polytechnic Institute have developed an app that uses the video camera of a phone to measure heart rate, respiration rate, and blood oxygen saturation.
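
I can't speak to exactly how the WPI app works, but a common approach to camera-based vital signs is photoplethysmography: press a fingertip over the lens, record how the average redness of the frames pulses with each heartbeat, and pick out the dominant frequency. Here is a minimal Python/NumPy sketch of my own along those lines (the function and its parameters are illustrative assumptions, not WPI's code):

import numpy as np

def heart_rate_bpm(red_means, fps):
    """Estimate heart rate from the mean red-channel brightness of
    successive fingertip video frames (toy photoplethysmography).

    red_means: 1-D array with one mean red value per frame.
    fps: frame rate of the video in frames per second.
    """
    signal = red_means - red_means.mean()        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Keep only frequencies in a plausible human range (40-200 beats/min).
    band = (freqs >= 40.0 / 60.0) & (freqs <= 200.0 / 60.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                           # Hz -> beats per minute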

Taking existing system-level components and using them in novel ways may never win those engineers the same accolades that the designers of the original components often receive. But the work of such lateral thinkers is no less original. It just goes to show that great product ideas do not necessarily have to be entirely game-changing. Sometimes, repurposing existing technology can be just as innovative.