Wednesday, October 26, 2011

Camera runs at 40,000 frames per second

A camera invented at the Langley Field Laboratory captured images at an astonishing 40,000 frames/s, providing researchers with a great deal of insight into the phenomenon of knock in spark-ignition engines over a six-year period.

The high-speed motion picture camera operates on a principle that its inventors call optical compensation. The photosensitive film used in the camera is kept continuously in motion and the photographic images are moved with the film such that each image remains stationary relative to the film during the time of its exposure.
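To get a feel for why such a trick was necessary, here's a quick back-of-the-envelope calculation. The frame pitch is my own assumption for illustration, not a figure from the NACA design:

```python
# Back-of-the-envelope: film velocity needed for continuous-motion filming.
# FRAME_PITCH_MM is an illustrative assumption, not a figure from the
# original NACA camera.
FRAME_RATE = 40_000        # frames per second
FRAME_PITCH_MM = 2.0       # assumed spacing between successive frames, mm

film_speed_m_per_s = FRAME_RATE * FRAME_PITCH_MM / 1000.0
print(f"Film must travel at {film_speed_m_per_s:.0f} m/s "
      f"({film_speed_m_per_s * 3.6:.0f} km/h)")
# -> 80 m/s (288 km/h): far too fast to start and stop the film for each
# frame, hence the optical-compensation trick of moving the image along
# with the continuously running film.
```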

That's right. This isn't a digital camera at all, but a film camera. But perhaps even more remarkable is that it was invented in February 1936! The first working version of the camera was constructed in the Norfolk Navy Yard during 1938, and the camera operated successfully the first time it was run, on December 16, 1938, at Langley Field.

Now, thanks to an article written by Cearcy Miller, interested readers can not only discover exactly how the camera was designed but also view some high-speed motion pictures of familiar objects that illustrate the quality of the photographs taken by the camera at the time.

If you thought that high-speed imaging was a relatively new idea, why not check out how the engineers solved the problem all those years ago?

Friday, October 21, 2011

From dissertation to product release

In 2006, Ren Ng's PhD research on lightfield photography won Stanford University's prize for best thesis in computer science as well as the internationally recognized ACM Doctoral Dissertation Award.

Since leaving Stanford, Dr. Ng has been busy starting up his own company, Lytro (Mountain View, CA, USA), to commercialize a camera based on the principles of lightfield technology while making it practical enough for everyday use [see the Vision Insider blog entry "Lightfield camera headed for the consumer market"].

This week saw the result of his efforts, as Lytro took the wraps off a set of three cameras, each of which captures the color, intensity, and direction of the light in a scene, enabling users to refocus the images they take after the fact.
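If you're curious how refocusing after the fact is even possible, here's a minimal sketch of the classic shift-and-add approach to synthetic refocusing, operating on the grid of sub-aperture views that a lightfield camera records. It's my own illustration of the principle, not Lytro's implementation:

```python
import numpy as np

def refocus(subviews, alpha):
    """Synthetic refocus by shift-and-add.

    subviews: 4-D array (U, V, H, W) of grayscale sub-aperture images,
              one per (u, v) position on the lens aperture.
    alpha:    relative focal depth; 1.0 keeps the original focal plane,
              other values move the plane of focus nearer or farther.
    """
    U, V, H, W = subviews.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # aperture center, then average. (np.roll keeps the sketch
            # short; real code would interpolate sub-pixel shifts.)
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(np.roll(subviews[u, v], du, axis=0), dv, axis=1)
    return out / (U * V)
```

Objects whose shifted views line up reinforce each other and appear sharp; everything else blurs out, which is exactly the effect you see when you tap a Lytro picture to refocus it.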

The cameras themselves aren't all that different -- except for the paint job on the outside. The first two, in Electric Blue and Graphite, cost $399 and are capable of storing 350 pictures. A Red Hot version -- at the somewhat higher price of $499 -- is capable of storing 750.

With no unnecessary modes or dials, the cameras feature just two buttons (power and shutter) and have a glass touch-screen that allows pictures to be viewed and refocused directly before they are downloaded to a computer.

To illustrate the capabilities of the new cameras, a number of Lytro employees and select testers have taken some snaps and uploaded the results to the company's so-called Living Pictures Gallery, where they can be viewed and refocused on the web.

As savvy a marketing idea as that is, I can't say the same for the company's choice of computer platform for its free desktop application, which imports pictures from the camera to a computer. Rather than produce software for the enormously popular Windows PC, the company chose to support only Mac OS X in its initial release.

Despite this minor upset, the company does have more exciting projects in the works. Next year, for example, it plans to launch a set of software tools that will allow lightfield pictures to be viewed on any 3-D display and enable viewers to shift the perspective of the scene.

Wednesday, October 19, 2011

Sent to Coventry

This week, I dispatched our European editor Dave Wilson off to the Photonex trade show in Coventry in the UK to discover what novel machine-vision systems might be under development in Europe.

Starting off early to beat the traffic jams on the motorway, he arrived at the Ricoh Arena show grounds at the ungodly hour of eight in the morning. But that gave him a good two hours to plan the day ahead before the doors of the show opened -- which is exactly what he did.

Whilst browsing through the technical seminar program organized by the UK Industrial Vision Association (UKIVA) over a breakfast of Mexican food, one presentation in particular caught his eye.

Entitled “3D imaging in action,” it promised to reveal how a Sony smart camera and a GigE camera could be used together to create a 3-D image-processing system that could analyze the characteristics of parts on a rotating table.

The demonstration by Paul Wilson, managing director of Scorpion Vision (Lymington, UK; www.scorpionvision.co.uk), would illustrate the very techniques that had been used by a system integrator who had developed a robotic vision system that could first identify -- and then manipulate -- car wheels of different sizes and heights.

And indeed it did. During the short presentation, Wilson explained how the Scorpion Vision software developed by Tordivel (Oslo, Norway; www.tordivel.no) had been used to create the application, which first captured three-dimensional images of the parts and then made measurements on them. The entire application ran under an embedded version of Windows XP on the Sony smart camera.

Interestingly enough, software such as Tordivel’s allows applications such as this to be developed by a user with few, if any, programming skills. Instead, they are created through a graphical user interface from which a user chooses a number of different tools to perform whatever image-analysis tasks are required.
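To give a flavor of what such a tool-based application boils down to, here's a toy sketch of the kind of declarative pipeline a graphical interface might generate behind the scenes. The tool names and parameters are invented for illustration; they're not Scorpion Vision or Tordivel identifiers:

```python
# A hypothetical tool-based pipeline of the kind a vision GUI might
# generate. All tool names and parameters below are invented for
# illustration; they are not Scorpion Vision / Tordivel identifiers.
pipeline = [
    {"tool": "capture_stereo", "params": {"cameras": ["smart_cam", "gige_cam"]}},
    {"tool": "reconstruct_3d", "params": {"method": "triangulation"}},
    {"tool": "locate_part",    "params": {"model": "wheel_template"}},
    {"tool": "measure",        "params": {"features": ["diameter", "height"]}},
    {"tool": "classify",       "params": {"tolerance_mm": 0.5}},
]

def run(pipeline, registry, data=None):
    """Execute each configured tool in order, passing results along.

    registry maps tool names to callables; the GUI user never sees this
    code -- they only pick tools and fill in parameters.
    """
    for step in pipeline:
        data = registry[step["tool"]](data, **step["params"])
    return data
```

The point is that the integrator's "program" is just the configured list of tools; all the actual image-processing code lives inside the package.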

The ease with which such software allows system integrators to build systems stands in stark contrast to more traditional forms of programming, or even to more contemporary graphical development environments. Both require a greater level of software expertise and training than such tool-based graphical user interfaces.

Even so, the more sophisticated and easier the software is to use, the more expensive it is likely to be, a fact that was not lost on Scorpion Vision’s managing director as he spoke to our man Dave at the show.

Nevertheless, he also argued that higher initial software costs can often be quickly offset by the greater number of systems that a user can develop in any given period of time -- an equally important factor to weigh when choosing which package to use to develop your own 3-D vision system.

Friday, October 14, 2011

Steve Jobs, the OEM integrator, and me

Back in 1990, I decided to start my own publishing company. Transatlantic Publishing, as the outfit was called, was formed specifically to print a new magazine called The OEM Integrator, a journal that targeted folks building systems from off-the-shelf hardware and software.

I hadn't given much thought to that publication for years, until last week that is, when my brother telephoned me to say that he had unearthed a copy of the premier issue of the publication, complete with the entire media pack that was produced to support it.

Intrigued to discover what gems might have been written way back then, I asked him to email me a PDF of one or two of the stories that had appeared in that first issue.

As you can imagine, I had to chuckle when I opened the email attachment. For there, in all its glory, was a roundup of new hardware and software products that had been announced for none other than the Apple NuBus, a 32-bit parallel computer bus incorporated into computer products for a very brief period of time by Steve Jobs' Apple Computer. [UPDATE: Click here to read the article from the 90s!]

Despite my enthusiasm for the new bus board standard, NuBus didn't last too long, and when Apple switched to the PCI bus in the mid-1990s, NuBus quickly vanished.

But the bus that my brother chose to write an even lengthier piece on had even less success in the marketplace. His article touted the benefits of Futurebus -- a bus that many then believed would be the successor to the VMEbus. Sadly, however, the effort to standardize the new bus took so long that everyone involved lost interest, and Futurebus was hardly used at all.

Both these articles point out one important fact that all industry commentators would do well to take heed of. If you are going to make predictions as to what new technology is going to set the world on fire, you've got to be very, very careful indeed!

Wednesday, October 12, 2011

Technology repurposing can be innovative, too

More years ago than I care to remember, the president of a small engineering company asked me if I would join several other members of his engineering team on a panel to help judge a competition that he was running in conjunction with the local high school.

The idea behind the competition was pretty simple. Ten groups of students had each been supplied with a pile of off-the-shelf computer peripherals that the engineering company no longer had any use for, and tasked with coming up with novel uses for them.

As the teams presented their ideas to the panel, it became obvious that they were all lateral thinkers. Many of them had ripped out the innards of the keyboards, mice, and loudspeakers they had been provided with and repurposed them in unusual and innovative ways to solve specific engineering problems.

Recently, a number of engineering teams across the US have taken a similar approach to solving their own problems, too, but this time with the help of more sophisticated off-the-shelf consumer technology -- more specifically, inexpensive smart phones.

Engineers at the California Institute of Technology, for example, have taken one of the beasts and used it to build a "smart" petri dish to image cell cultures. Those at the University of California-Davis have transformed an iPhone into a system that can perform microscopy. And engineers at Worcester Polytechnic Institute have developed an app that uses the video camera of a phone to measure heart rate, respiration rate, and blood oxygen saturation.
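The heart-rate trick, at least, is well understood: a fingertip held over the camera lens and flash modulates the brightness of the red channel with every pulse, a technique known as photoplethysmography. Here's a generic sketch of the core idea -- my own illustration of the principle, not WPI's app:

```python
import numpy as np

def heart_rate_bpm(frames, fps):
    """Estimate pulse rate from fingertip video (photoplethysmography).

    frames: array (N, H, W, 3) of RGB video frames of a flash-lit fingertip.
    fps:    capture frame rate in frames per second.
    """
    # Mean red-channel brightness per frame pulses with blood volume.
    signal = frames[..., 0].mean(axis=(1, 2))
    signal = signal - signal.mean()            # remove the DC offset

    # Find the dominant frequency within a plausible heart-rate band
    # (40-200 beats per minute).
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 40 / 60) & (freqs <= 200 / 60)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0
```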

Taking existing system-level components and using them in novel ways may never win those engineers the same accolades that the designers of the original components often receive. But the work of such lateral thinkers is no less original. It just goes to show that great product ideas do not necessarily have to be entirely game-changing. Sometimes, repurposing existing technology can be just as innovative.

Friday, October 7, 2011

Software simplifies system specification

National Instruments' NI Week in Austin, TX was a great chance to learn how designers of vision-based systems used the company's LabVIEW graphical programming software to ease the burden of software development.

But as useful as such software is, I couldn't help but think that it doesn't come close to addressing the bigger issues faced by system developers at a much higher, more abstract level.

You see, defining the exact nature of any inspection problem is the most taxing issue that system integrators face. And only when that has been done can they set to work choosing the lighting, the cameras, and the computer, and writing the software that is up to the task.

It's obvious, then, that software like LabVIEW only helps tackle one small part of this problem. But imagine if it could also select the hardware, based simply on a higher-level description of an inspection task. And then optimally partition the software application across such hardware.
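To make the idea concrete, imagine feeding such a tool nothing more than a declarative description of the inspection task and letting it shortlist the hardware. Here's a toy sketch -- every field name and the candidate camera list are invented for illustration:

```python
# Toy sketch of hardware selection from a high-level task description.
# All field names and the candidate camera list are invented for
# illustration; no vendor's actual catalog or API is implied.
task = {
    "defect_size_mm": 0.1,       # smallest flaw that must be resolved
    "field_of_view_mm": 300,     # width of the part under inspection
    "parts_per_second": 5,
}

cameras = [
    {"name": "cam_a", "h_pixels": 1280, "max_fps": 60},
    {"name": "cam_b", "h_pixels": 4096, "max_fps": 30},
    {"name": "cam_c", "h_pixels": 8192, "max_fps": 10},
]

# Rule of thumb: at least two pixels across the smallest defect.
needed_pixels = 2 * task["field_of_view_mm"] / task["defect_size_mm"]

suitable = [c for c in cameras
            if c["h_pixels"] >= needed_pixels
            and c["max_fps"] >= task["parts_per_second"]]
print(suitable or "no single camera fits; split the field of view")
```

Trivial as it is, even this sketch captures the appeal: the integrator states what must be inspected, and the reasoning about resolution and throughput happens automatically.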

From chatting to the NI folks in Texas, I got the feeling that I'm not alone in thinking that this is the way forward. I think they do, too. But it'll probably be a while before we see a LabVIEW-style product emerge into the market with that kind of functionality built in.

In the meantime, be sure to check out our October issue (coming online soon!) to see how one of NI's existing partners -- Coleman Technologies -- has used the LabVIEW software development environment to create software for a system that can rapidly inspect dinnerware for flaws.

Needless to say, the National Instruments software didn't choose the hardware for the system. But perhaps we will be writing an article about how it could do so in the next few years.

Wednesday, October 5, 2011

Spilling the military vision beans

While machine-vision systems have resolved many fascinating application challenges, plenty of them go unreported.

That's because the original equipment manufacturers (OEMs) that create such vision-based machines are required to sign non-disclosure agreements (NDAs) with their customers to restrict what information can be revealed.

Oftentimes, it’s not just the specifications of the machine that are required to be kept under wraps. These NDAs also restrict the disclosure of the challenge that needed to be addressed before the development of the system even commenced.

Now, you might think that the development of vision systems for the military marketplace might be an even more secretive affair. After all, keeping quiet about a vision system built to protect those in battle would initially appear to be far more imperative than keeping quiet about a machine that inspects food or fuel cells.

While the specifics of military designs are almost impossible to obtain legally, that's not true of depictions of the systems that the military would like to see developed in the future.

Such descriptions are often found in extensive detail on numerous military procurement sites, even down to the sorts of software algorithms and hardware implementations that are required.

Could it be, though, that in doing so the military minds are handing over potentially constructive information to research teams in rogue states? If they are, then surely they are making a mockery of the very International Traffic in Arms Regulations (ITAR) that control the export and import of defense-related materials and services.