Thursday, December 30, 2010

New Year - New Web site

The redesign of the Vision Systems Design Web site is the first in a series of new developments for 2011. Our address remains www.vision-systems.com, but you will find a new look and numerous enhancements to help engineers and system integrators make use of machine vision and image processing products and technologies.

One of the key improvements is the visibility of our topic centers on any page of the site in the navigation bar. These topic centers focus on Factory Automation, Non-Industrial Vision, Cameras, Boards & Software, Lighting & Optics, and Robotics.

You'll also find more videos and a new video player, as well as new Editorial Digests on relevant subjects such as 3-D imaging and solar cell manufacturing.

In addition, you will easily find the current issue and archives of our magazine, more links to our Buyers Guide and Industrial Camera Directory, faster page load times, and a link to OptoIQ, which is the portal site for several PennWell publications related to lasers, photonics, bio-optics, and research.

Please let me know what you think of the new site and what other changes we might make.

Best wishes for the New Year.

Conard Holton
Editor in Chief
Vision Systems Design
cholton@pennwell.com

Wednesday, November 17, 2010

VISION 2010 – Parting views

After three demanding days on the show floor and travel home from Stuttgart (in many cases delayed by a fierce storm over northern Europe), it’s clear that VISION 2010 was a major success, a relief to the organizers and exhibitors, and a good omen for 2011.


(Figure: The Mid-Size Robocup Team: Tech United Eindhoven put on a show)

In no particular order, here are some observations:

- This tradeshow is alive and well. It drew a record 6800 visitors, 1600 exhibitors, and 323 exhibiting companies. Forty-four percent of the exhibitors came from outside Germany, primarily from the US and South Korea. Thirty-five percent of the visitors came from outside Germany.

- Most companies reported strong to torrid growth in the past months. Basler, for example, grew very fast but expects sales will soften in the first and second quarters of 2011, before strengthening again in the second half of next year.

- A price war is being waged at the lower end of the camera market. Several of the competing vendors are German and continue to make their cameras in Germany despite higher labor costs; their plan is to win on the basis of innovation and quality. Part of their strategy is to use consumer interfaces, move from CCD to CMOS sensors, add capabilities in software rather than hardware, and continue shrinking camera footprints.

- Several of the larger camera vendors told me that over 60 percent of their business is non-industrial--that was surprising but shouldn’t have been. It’s been predicted for several years and now it’s true. Growth is very fast in transportation/traffic management, security, and point of sale/entertainment applications. Each sector (especially security) has its own barriers to entry, with large, established system vendors and low-cost competition from “legacy” CCTV cameras.

- According to system integrator Luster LightVision in Beijing, the Chinese machine vision market is roughly $76 million industrial and $50 million non-industrial, with transportation and logistics the fastest-growing segments. 2010 will be the year that machine vision in China really takes off, although growth is hampered by highly cost-sensitive markets, a shortage of interdisciplinary system-design knowledge and machine vision expertise, and a weak industry organization.

- GigE is seeing steady global growth, with the GigE Vision standard being used in some non-machine vision applications, including embedded military and medical applications.

- Tradeshows are inspiring because they give engineers a chance to get out of the office, lab, or factory and talk to suppliers, developers, or speakers at technical sessions. Google may be a great tool for searching or for comparing components, but human interaction is a better source of innovation--or of discovering that someone else has already solved your problem. Engineering management should encourage staff to get out, ask questions, and see what the world is doing.

Tuesday, November 9, 2010

VISION 2010 – Day One, VISION Award, market news

Stuttgart, November 9—The anticipation seems to have been rewarded for the now-official 323 exhibitors (a record number), as most were delighted with the quantity and quality of leads. More coverage of the show is available at this link.



At a press conference, the winner of the VISION Award was announced: SICK, for its ColorRanger E. With 23 entries in contention, it was probably inevitable that some sort of 3-D product would win, since 3-D is one of the fastest-growing technology segments in machine vision.

The ColorRanger E combines high-speed 3-D and color linescan capabilities at more than 11 kHz. Applications include wood inspection, quality control in baking or food processing, and solar wafer shape and color quality assurance.

The list of contestants contains many other innovative products—you will find the list at the end of this blog.

During a press conference, Dr. Olaf Munkelt, chairman of the VDMA Machine Vision Group, talked about the unprecedented collapse of the global market and its impact on machine vision sales—“basic trust is still lacking”, he said. Nonetheless the industry has come bounding back, with German sales expected to rise 18% in 2010 over 2009, giving the German industry a value of 1.1 billion euros. Inspection, Munkelt said, remains the most important driver of sales, especially in the automotive market, and 3-D metrology is rapidly increasing.

In a video interview with me that we’ll post soon, Munkelt said that the global sales of machine vision break out roughly into one third each for Europe, Asia, and North America. He noted that the US share is dropping in relative terms and that part of the reason for this may be that investment in R&D is declining. This observation is not new and it’s one that concerns many of us in the US.

Basler also held a press conference at which its leaders, Dietmar Ley and Arndt Bake, described the company as moving to be a “pure play” camera company, exiting the systems side of its business. Bake said that Basler sees significant new growth opportunities in traffic (e.g., tolls, parking, law enforcement), surveillance, and point-of-sale applications (e.g., recycling, wheel alignment).

Finally, for a taste of the innovative technologies on display at the show, here’s the list of the 22 entries that did not win the VISION Award. We’ll be covering some of these products online and in future issues of the magazine:

• ABAQuS, Using the M200CT to check the quality of barcodes and 2-D codes as a prerequisite to automated data input with barcode scanners
• Allied Vision Technologies, Redefining the limits of GigE Vision bandwidth using Link Aggregation
• Aqsense, Global dimensional inspection and geometric features measurement
• BAP Image Systems, High-speed scanning based on CIS sensors
• Basler Vision Technologies, New CMOS camera generation
• Chronos Vision, High speed, real-time video eye tracking
• Dalsa, BOA smart camera
• Effilux, LED lighting for 3-D profilometry
• FLIR Commercial Systems, New FLIR A615
• Frankfurt Laser Company, HEML high-power temperature stabilised laser diode module
• Fraunhofer IDMT, Fraunhofer eye tracker–a calibration free solution with scalable and configurable Hough IP Core
• Imaging Diagnostics, Auto-focus camera using standard DSLR lenses
• Inviso, Brain-inspired machine vision
• Keyetech, Keyetech texture-based recogniser
• New Imaging Technologies, Native WDR: a radical, innovative breakthrough in CMOS sensors based on the Magic technology
• OPT Machine Vision Tech, Introduction of AOI light application in machine vision
• Photometrics, EMCCD camera – automatic, real-time imaging data standardization
• PMD Technologies, First robust and feasible gesture control using time-of-flight technology
• Raytrix, Single lens 3-D-camera with extended depth of field for industrial inspection
• Schott Lighting and Imaging, Telecentric zoom lens ML-Z07545HR
• Smartvision, BlobMax, A new method for a safe inspection of planar and curved surfaces with diffuse or specular reflection
• Softhard Technology, Currera-R atom based industrial smart camera
• Sony Image Sensing Solutions Europe, The smallest C-mount digital camera
• Spotrack, Method for the positioning of multiple pan/tilt devices in single-object tracking applications (multiple-camera tracking)
• Technos Japan, The first visual inspection system that does not miss defects
• Vision for Vision, Interactive workshop for fast prototyping
• Xenics, The Lynx: a novel high-performance line scan camera system

Monday, November 8, 2010

VISION 2010 - As the show begins

November 8, Stuttgart--Preparations are hectically concluding for tomorrow's start of the 23rd annual VISION show. This is the third year that the show has been held in the Neue Messe Stuttgart--the new, very large, sprawling complex next to the airport. The old Killesberg location was certainly cozier, but it could never have accommodated the 306 exhibitors or 6000+ attendees expected for this show.

Based on the enthusiasm and energy of people I've spoken to so far, the show should be better than ever. The recovery of the industry is evident, with many companies reporting banner quarters or years. One of the biggest complaints is the shortage of components such as sensors, so vendors are struggling to build cameras fast enough to meet demand. Not such a bad problem after a very difficult recession.



We will be covering the show in several ways. We'll be posting videos about the show, the markets, and multiple technical sessions on topics such as GigE Vision, megapixel lenses, CMOS sensors, data transmission standards, and CoaXPress. We'll also be recording videos from the Global Vision Standards demonstration booth, including demonstrations of Camera Link HS and GigE Vision. GigE Vision will be especially interesting to follow because it is beginning to be used outside of the machine vision world, including in embedded military and medical applications.

Also, we will be posting a series of sponsored videos from machine vision vendors who describe their products and applications in some depth.

And of course, in the coming issues of Vision Systems Design magazine and online we'll be publishing in-depth technical articles by editor Andy Wilson about the most interesting technical developments at the show, including the winner of the VISION award.

Wednesday, September 8, 2010

VISION 2010: a study in variations of machine vision

As machine vision technologies and products become more established across multiple industries, tradeshows such as the forthcoming VISION 2010 to be held November 9-11 at the New Stuttgart Trade Fair Centre in Stuttgart, Germany, will reflect these trends.

Indeed, during VISION 2010, a panel discussion entitled: “Green Vision – Driving Factor for a Green Future” will focus on how machine vision can be used in systems to protect the environment, conserve resources, increase energy efficiency, and develop more environmentally friendly products.

In addition to highlighting innovations in industrial camera and system design, the show will also include a demonstration of autonomous robot footballers, an application park highlighting the role machine vision plays in testing and production processes, an area demonstrating international machine vision standards, joint booths for startup companies, and a series of seminars for those new to machine vision.

According to the organizers, Messe Stuttgart, attendance is already on track to exceed last year's, in terms of both exhibitors (now over 300) and attendees. Those who wish to see the many sides of machine vision would do well not to miss the event.

Thursday, August 26, 2010

Hyperspectral imaging heads commercial

Multispectral imaging enables several discrete images in the visible and IR bands of the spectrum to be captured and processed. To capture continuous spectral bands from the ultraviolet to the far infrared, hyperspectral imaging is a powerful, if often expensive, tool.

Hyperspectral remote-sensing applications have flourished for several decades. Now, low-cost imaging spectrometers are being introduced that allow innovative approaches to applications such as medical diagnostics, metallurgy, sorting materials, food processing, and microscopy.

We recently published an article by Rand Swanson at Resonon describing a compact hyperspectral imaging system that can be flown in a Cessna aircraft to monitor the spread of leafy spurge, an invasive weed that reduces grazing forage for livestock.



We’ve also reported on the use of hyperspectral imaging to detect the food pathogen Campylobacter and to sort walnuts.

A hyperspectral imaging microscopy system also allows detailed examination of LED structures in the visible and near-IR.



You can find more examples by searching our website. I expect to see numerous such articles in the future. For example, we’ll be describing a hyperspectral blueberry sorting system from EVK in Austria in our September issue.

Monday, August 23, 2010

Big Pharma needs machine vision

Discussions in the Vision Systems Design Group on LinkedIn have recently reflected the growing interest in using machine vision to inspect pharmaceutical products. We have published a series of technical articles that might be of interest.

One article about a pharmaceutical packing system that uses IR and visible sensors describes how American SensoRx developed a system that inspects tablets, capsules, caplets, and gels at very high speeds before they are packed up and shipped to distributors.


Another describes how the German company Boehringer uses FireWire cameras to inspect capsules used in inhaled medications for respiratory disease.

And yet another describes how Pfizer added an x-ray system to its visible light inspection system to check tablets in blister packs.

Friday, July 23, 2010

Robots, with vision, at your service

A recent article and video in the New York Times describes Bandit, a robot built by researchers at the University of Southern California, which interacts with autistic children. Three-foot-tall Bandit can maintain “eye” contact with an autistic child and, sometimes, use playful or sympathetic actions to overcome withdrawn behavior.

Another robot, named RUBI—Robot Using Bayesian Inference—at the University of California, San Diego, images children’s faces, recognizes basic emotions from facial muscle movement, and responds with verbal and physical gestures of encouragement.

These service robots are part of a rapidly growing wave of robotic human helpers. In the classroom they may supplement the work of human teachers, during surgery they may perform delicate procedures, and on the battlefield they may help disarm a roadside bomb, as described in our June 2010 cover story.

The technological differences between these service robots--with their vision and image processing functions--and robots used in industrial applications can be small. For example, a recent article on our website describes the work of researchers at the Technical University of Munich who are imaging non-verbal communications such as gestures and facial expressions as a method of interacting with robots. To date, they have demonstrated that their work can help those who require assisted living, as well as workers in automated production plants, where background noise may make speech recognition difficult.

Recently, European researchers have built a robot for 'on-demand' rubbish collection – just make a call and it will soon arrive at your door. It's ideal for collecting waste in the narrow streets of many historical towns.



About the size of a person, it can navigate the narrowest of alleys, stop outside your door, and take your rubbish away. Best of all, you don't have to remember when to put your bin out: simply make a telephone call, and the robot will soon be waiting outside, ready to receive your rubbish.

Monday, July 19, 2010

July VSD online--Hyperspectral imaging, miniature autofocus lenses, 3-D vision


To capture continuous spectral bands from the UV to the far IR, hyperspectral imaging has become a powerful imaging tool. In our July issue, Rand Swanson at Resonon describes a compact hyperspectral imaging system that has been flown in a Cessna aircraft to monitor the spread of leafy spurge, an invasive weed that reduces grazing forage for livestock.

In our Product Focus article, editor Andy Wilson describes recent developments in miniaturized autofocus lenses. Whether based on electro-optical, electromechanical, thermo-optical, or acousto-mechanical techniques, these tunable optics will find cutting-edge applications in smart machine-vision systems, endoscopy systems, and mobile phones.

Our cover story shows how an optical tester based on an off-the-shelf camera system can be used to calibrate centering errors of lenses to ensure the imaging quality of an optical assembly or subassembly.

3-D vision remains one of the most alluring areas for innovation in machine vision development. While dual-camera and time-of-flight sensors are becoming increasingly important, other options such as the one described in an article about ISee3D now allow stereo images to be captured from a single camera/lens combination.

We also have articles on: inspection of wood surfaces for defects by researchers at AIDO in Spain; an algorithm that uses partial derivatives to improve edge detection; and an FFT processor that performs phase correlation.

Wednesday, July 7, 2010

Blogging about machine vision – interested?

Some cynics I know mock the idea of blogging, but I think it’s a good way to explore a subject such as machine vision. And a blogger might even be paid the highest compliment--having your blog blogged about.

A case in point: editor Andy Wilson’s My View video blog on the Vision Systems Design website was recently blogged about by Laura Hoffman, who runs the Microscan blog SolutionConnection, along with colleagues such as John Agapakis.

Numerous other companies in the machine vision industry have blogs or are trying to figure out what they could write that wouldn't rattle internal corporate feathers but would still be interesting. Like Microscan's, the ScorpionVision blog of Thor Vollset aims to keep readers up to date on the company and on how customers can use its products.

Then there are system integrators who have occasional blogs on their own sites or on a magazine site. These include David Dechow at Aptura Machine Vision Solutions with his Regarding Machine Vision blog, and Ned Lecky at Lecky Integration and John Nagle at Nagle Research. Also, there are journalists who blog about related topics, such as Frank Tobe at Everything Robotic and Gabriele Jansen at Inspect-online blog.

And of course there is the popular and anonymous B Grey at machinevision4users blog, who presumably hides his or her identity out of concern about industry (or employer?) reaction. The postings vary from technical observations and comparisons to witty digs. But anonymity is both a shield and a crutch. Speaking as a journalist who must live with the consequences of what I write, I think B Grey should stand forth and be counted.

Whether anonymous or very public, all bloggers can attest to the fact that it’s not easy to post frequently and have something interesting or new--or at least amusing--to say. Yet it can be quite rewarding, personally and professionally.

Blogs can become good networking and marketing tools that engage people. And you can re-post blogs to other social media sites such as LinkedIn, where you will find many relevant groups, including the Vision Systems Design Group, the Machine Vision Group, the Image Processing Group, and the 3D Machine Vision Group. Most of these groups have hundreds or even thousands of members.

If you have a comment on what I’ve written, please post it on this blog.

If you’re reading this and interested in contributing a regular or at least somewhat regular blog to Vision Systems Design, please let me know: cholton@pennwell.com.

Tuesday, June 22, 2010

June issue: vision-guided robots, software, and Three Mile Island



Our June issue is now available on our website. The articles in it point to some of the many ways in which machine vision is evolving.

I glimpsed this potential in 1982, when I watched the feed from the first remote video camera lowered into a reactor vessel at Three Mile Island, after the nuclear accident had destroyed the reactor core in 1979. It took several years to develop the imaging equipment for this first foray and, in the years that followed, many cameras and robots would gather information about damage and perform cleanup operations in highly radioactive areas of the plant. Here's a picture of Rover, developed with Carnegie Mellon University.


Although robots were not then sophisticated enough to perform major operations—and stereo vision was practically a dream—the future of vision-guided robots was obvious. Some colleagues and I wrote a history of the cleanup, including the robotic and imaging technologies that were used. You can download a PDF of the history published by the Electric Power Research Institute by clicking HERE.


Remotely operated vehicles are now playing an increasing role in other crises. Our cover story in the June issue, for example, shows how 3-D displays can help remote operators in the military safely handle and dispose of explosive devices using robots.

Another article explains how single-sensor image fusion technology could enable simpler and more effective imaging of potential threats in security and defense operations.

Machine vision is not always on the front line of environmental and political challenges, however. Researchers from the University of Ilmenau in Germany are using image processing techniques to evaluate the quality of wheat after it is harvested.

And, as contributing editor Winn Hardin explains, manufacturers are using other machine vision techniques to ensure that the steel tubes produced for oil and gas production are of the highest quality.

This broadening range of biomedical, robotics, military, and aerospace applications is leading software vendors to expand the functionality of their products beyond simple measurement functions, as editor Andy Wilson writes in his Product Focus article on machine vision software.

Indeed, new opportunities for machine vision and image processing systems are occurring every year. To take advantage of these developments, however, suppliers of machine vision systems will have to look outside the box of conventional industrial manufacturing and into niche applications that span the gamut from agriculture to space exploration.

Friday, June 11, 2010

Art, vision, and ALS - forget eye-tracking for shoppers

We've seen eye-tracking systems that help determine the preferences of shoppers or website browsers. Here's one that could really benefit people who suffer from physical limitations: The EyeWriter project.

It's an ongoing collaborative research effort to empower people who are suffering from amyotrophic lateral sclerosis (ALS; aka Lou Gehrig's disease) with creative technologies. It allows graffiti writers and artists with paralysis resulting from ALS to draw using only their eyes.

The collaborative consists of members of the Free Art and Technology (FAT), OpenFrameworks, Graffiti Research Lab, and The Ebeling Group communities. They have teamed up with LA graffiti writer, publisher, and activist Tony Quan, aka TEMPTONE. He was diagnosed with ALS in 2003, a disease that has left him almost completely physically paralyzed… except for his eyes.

The long-term goal is to create a professional/social network of software developers, hardware hackers, urban projection artists and ALS patients from around the world who are using local materials and open source research to creatively connect and make eye art.

The Eyewriter from Evan Roth on Vimeo.


Click HERE to see the Specification Sheet for the Eyewriter, including the low-cost vision components that are needed.

Tuesday, June 8, 2010

The Vision Show revealed--on video

The most interesting thing for me about The Vision Show in Boston (May 25-27) was the simple fact that about 80 exhibitors put on a very upbeat and comprehensive showing of machine vision components available to integrators and end-users.

In one place you could see and touch the cameras, lighting, boards, cabling, and other components that you might want to design into your next system. In the technical sessions and tutorials, you could also be instructed in many of the fundamentals of the technology and learn how products perform.

Here’s a video with Jeff Burnstein from the AIA talking about the show and what’s coming next.



Of course, the fact that most vendors were reporting good to great sales numbers really helped. The overall mood of the roughly 1900 attendees and exhibitors was so strikingly different from that during the depths of the recession that it was impossible not to get caught up in the good feelings.

Follow this link to our Video Showcase to see some of the videos that were made during the show.

Monday, June 7, 2010

British military envisions how imaging catches insurgents

Using very high-resolution digital cameras, multispectral imaging, and laser ranging, the UK’s Defence Science and Technology Laboratory (DSTL) says that new imaging technology will be used within 5 years to recognize insurgents or terrorists.

DSTL, which develops and tests the latest technologies for the Ministry of Defence, had members of its staff act out insurgent-like behavior, while developers and engineers took on the role of "good guys", pursuing and monitoring them.



The military twist was that these high-tech surveillance techniques are being combined with software that can pick out unusual patterns in behavior--such as two vehicles meeting in a concealed area. The surveillance, DSTL says, will eventually help to "win the battle" against insurgency. For more information, read the excellent BBC News article.

Monday, May 24, 2010

Imaging battles Gulf oil disaster

Satellite imaging and particle image velocimetry are two of the imaging techniques being deployed against the oil spill in the Gulf of Mexico. A May 22 article in the New York Times describes several of the techniques that researchers are using to try to get an accurate measurement of the spill.

One approach is described in more detail by one of the Times authors, Steve Wereley at Purdue University, in a PowerPoint presentation entitled “Oil Flow Rate Analysis – Deepwater Horizons Accident”. He concludes that the Deepwater Horizon oil spill in the Gulf of Mexico is more than 50 times worse than BP's initial predictions.

Using an imaging technique called particle image velocimetry (PIV), Wereley analyzed video obtained from BP to compute the magnitude of oil flowing from the site. According to his presentation, Wereley estimates that between 56,000 and 84,000 barrels a day are currently pouring into the Gulf of Mexico. Doug Suttles, chief operating officer for BP, initially said he believed the estimate of 1,000 barrels a day was accurate, although BP now admits it underestimated the amount of oil leaking.

To obtain his figures, Wereley computed the average plume velocity of the oil using PIV techniques, multiplied this figure by the cross-sectional area to find the volume flow rate, and then converted this figure to barrels per day.

PIV is an optical method of flow visualization used to obtain instantaneous velocity measurements and related properties in fluids. By tracking features in the fluid from frame to frame, the motion of those features is used to calculate the velocity of the flow being studied.
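The conversion Wereley performed is straightforward to reproduce. Here is a minimal Python sketch of the arithmetic; the plume velocity and riser diameter below are illustrative assumptions, not figures taken from his presentation:

```python
import math

BARREL_M3 = 0.158987        # one US oil barrel in cubic meters
SECONDS_PER_DAY = 86_400

def flow_rate_barrels_per_day(plume_velocity_m_s: float,
                              pipe_diameter_m: float) -> float:
    """Estimate volume flow in barrels/day from an average plume
    velocity (as PIV would measure) and the pipe cross-section."""
    area_m2 = math.pi * (pipe_diameter_m / 2) ** 2    # cross-sectional area
    volume_m3_per_s = plume_velocity_m_s * area_m2    # Q = v * A
    return volume_m3_per_s * SECONDS_PER_DAY / BARREL_M3

# Hypothetical inputs: ~0.5 m/s average plume velocity, 21-in. (0.53-m) riser
print(round(flow_rate_barrels_per_day(0.5, 0.53)))
```

With those assumed inputs the sketch yields roughly 60,000 barrels a day, which lands inside Wereley's 56,000-84,000 range, though of course the real analysis rests on the measured velocities, not these placeholder numbers.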

A live video of the oil leak, provided by BP over Ustream, is available at www.ustream.com (search: live oil spill cam).



All this imaging doesn't even take into account the dozen or so remote underwater vehicles that are now in operation near the sea bed, around the leak, streaming video back to a control center in Houston.



The oil spill is a disaster that imaging and machine vision may help us understand and mitigate.

Thursday, May 13, 2010

Machine vision lives in Iran

While cruising the Bosphorus I met a vision system integrator from Tehran. Kasra Ravanbakhsh is the co-founder and managing director of Kasra Hooshmand Engineering (KDI). I was of course taken with him, since he attributed his attendance at the EMVA Business Conference in Istanbul to seeing an advertisement for it in Vision Systems Design.

It turns out that, along with my own blog about the conference and the travel home under The Volcanic Cloud, Kasra has posted a blog with many pictures from the EMVA conference.

KDI was formed in 2003 as a private joint-stock company in Tehran. The company's previous name was Kasra Digital Instruments, and it still uses that abbreviation and logo.


Kasra says his company excels in developing machine vision systems, PC-based automation and monitoring, industrial automation, data acquisition, LabVIEW programming, microcontroller-based systems, and instrumentation. It is also very involved in cleanroom design and installation.
 
He also claims that KDI is the only professional developer of machine vision and real-time image processing-based inspection and control systems in Iran. KDI operates in industries such as pharmaceutical, glassware, packaging, military, aerospace, paper, food and beverage, and steel and aluminum production.

Kasra made many good contacts during the conference and perhaps opened the eyes of his new friends to some of the technical and intellectual life that stands just beyond their usual reach—not to mention some potential sales opportunities.

Europeans often note that their North American colleagues come from a “young” culture on the far side of the Atlantic. A bit of Persian history as described on the KDI website helps to put real antiquity into perspective!

Conard Holton
cholton@pennwell.com

Wednesday, May 12, 2010

An interactive atlas about global automation

If you manufacture automation equipment, including machine vision systems and robots, and you’re wondering where in the world to look for commercial growth opportunities, then you should review the Automation Atlas.

The Atlas shows the relative degree of automation in a country by showing the estimated number of robots per employees in processing industries. For more information and to use the Atlas, click here.



The Atlas was commissioned by the AUTOMATICA trade fair (held at Messe Munich, 7-11 June) and created by the statistical department of IFR - International Federation of Robotics, which is sponsoring the co-located ROBOTIK conference. The very interesting conference program is now available on the IFR website.

The IFR says only one-third of companies use automation technologies such as industrial robots or process-integrated quality control. For example, according to the Automation Atlas, countries in Eastern Europe employ relatively little automation technology--fewer than 50 industrial robots per 10,000 employees in the processing industry. Even in Slovenia, the figure is only between 100 and 200.

And globally there are clearly opportunities for growth in the pharmaceutical, cosmetics, and medical equipment industries, where the number of industrial robots in use is estimated to be fewer than 50 per 10,000 employees. In contrast, there are an estimated 400 to 700 robots for the same number of employees in the automobile industry.

Conard Holton, cholton@pennwell.com
Vision Systems Design

Thursday, April 22, 2010

Riding the Bosphorus Express—Istanbul to Munich

For the 150 people attending the European Machine Vision Association Business Conference in Istanbul last week, the meeting began as a fascinating visit to a beautiful city with a rich history, but not one usually on the machine-vision meeting circuit.

As the presentations, market reports, networking, and boat cruise passed, the specter of “The Cloud” from the Icelandic volcano began to dominate everyone’s thinking. European airspace was shutting down as we took a scenic cruise up the Bosphorus past the historic Dolmabahçe Palace:



By the last day of the conference--Saturday, April 17--it was clear that all plans to fly home were in jeopardy. I was able to fly out on Sunday because the flight was direct to New York’s JFK and we could skirt the southern edge of Europe.

However, my colleague and Vision Systems Design sales rep, Johann Bylek, had a different adventure on his way home to Munich.

Here is his report:

An unexpected adventure trip from Istanbul to Munich

After the EMVA conference in Istanbul most of the European attendees were not able to fly home because of the Icelandic volcanic ash cloud. Most European airports were closed and all flights cancelled. As a result, most people were stuck in Istanbul.

It was nearly impossible to connect with any airline since all telephone lines were overloaded. Rental cars and trains were sold out across Europe, and thousands of passengers were stranded at the airports.

A group of attendees--with special thanks to the “chief coordinator” Dr. Horst G. Heinol-Heikkinen (CEO, Asentics)—began discussing other possibilities to get home.

After many false leads, it was possible to find and hire a Bulgarian bus to drive to Istanbul and pick up a group of 34 people. These passengers would then be driven over 2000 km (about 1250 miles) to Munich. And the cost for the bus and two drivers? About €290 ($385) per passenger.



Starting at 7:00 pm Sunday evening in Istanbul, we reached the Bulgarian border at 10:00 pm. We had to use a side road to avoid the highway customs station where about 200 buses were waiting for immigration.

Pushing on to the Serbian border, we had to wait in line for three hours behind six other buses--everybody got a Serbian stamp in their passport. One of our group, Manfred Schaffrath of Profactor, was picked out for a detailed luggage inspection, perhaps a search for cigarettes or drugs. We wondered if he was suspected because he is Austrian!



After driving through the whole night and half of the next day, the bus passed Belgrade at 3:00 pm on Monday, then continued on through Hungary and Austria to Munich.



After a 37-hour bus ride through five countries with different languages and currencies, the group arrived in Munich on Tuesday morning at 8:00 am. Everybody was tired but happy to be back in Germany.

Images courtesy of Manfred Schaffrath, Profactor

Monday, April 5, 2010

Omens at trade shows

Predicting the market outlook for machine vision products can seem akin to interpreting the patterns of tea leaves or Tarot cards or even practicing myomancy – studying the movements of mice to foretell the future. However, those attending and exhibiting at this spring’s spate of machine vision and image processing trade shows may practice a modern version of myomancy to give themselves a sense of market momentum.

This week, for example, the SPIE Defense, Security, and Sensing show held in Orlando, FL, will provide attendees an impression of the state of the markets for imaging components and applications, especially those used in infrared applications. A strong technical conference accompanies the show.

The following month in Boston, The Vision Show (May 25-27) will give exhibitors and attendees alike an idea of the health of the machine vision industry in North America, particularly that of component makers.

Automatica, held in Munich, Germany, June 8-11, will reveal similar prospects for components and systems in Europe, especially as the show includes a strong robotics exhibition and the collocated technical symposium, ISR/Robotik. The show also takes place at the same time and exhibition center as Intersolar 2010, which will attract a vast audience involved in solar energy products and services--a fast-growing application area for machine-vision components and systems.

Myomancy, anyone?

Thursday, April 1, 2010

A fool for milk

When I first read about automated cow milking machines that use machine vision, I thought it was amusing. Last year, LMI Technologies was working with GEA Farm Technologies to adapt its 3-D time-of-flight imager to the task of producing happier cows and higher yields.



Now I find that robot maker Fanuc Robotics has taken the concept of automated milking to an advanced stage with a herd-milking system that can be seen in this video. Machine vision just keeps getting more interesting.

Wednesday, March 31, 2010

Getting a bit better all the time

It seems that the market for machine vision products and systems is improving. IMS Research in the UK says the recovery in the world machine vision market is gathering pace, as shown by its latest quarterly report consolidating revenue data from major suppliers.



John Morse, the managing analyst for the tracking report, says, “The market appears to be recovering faster than previously forecast.” The North America market grew the most between the third and fourth quarters of 2009, but there was also good growth in both Europe/Middle East/Africa and Asia Pacific. “Our data show that the low point was Q1 2009 and revenues have grown each subsequent quarter. If this trend continues, the market will be back to its 2008 levels by 2011."

This corresponds to discussions I've had with system integrators and vendors. Blogger B Grey at Machine Vision 4 Users even notes that delivery times on lenses for machine vision (now getting longer) could be an indicator of recovery.

What form of tea-leaf-reading works for you?

Here are some more revealing charts that John Morse sent to me:



Tuesday, March 30, 2010

Adventures in Imaging

If you'd like to know what's going on with the VC crowd and new ideas in imaging, you could attend the MIT Imaging Ventures class on March 30, 2010.

If you missed it, here is the panel of entrepreneurs and technologists--a list that is very interesting to check out:
• Kenny Kubala, FiveFocal ~ Advanced imaging and optics;
• Rob Rowe, Lumidigm ~ Biometric fingerprint systems;
• Mark Holzbach, Zebra Imaging ~ Holographic products;
• Kari Pulli, Nokia Research Imaging ~ Imaging on mobiles.


Monday, February 8, 2010

BigShot could be a big deal for machine vision

Simpler cameras with embedded intelligence sound like a good idea. In fact, many vendors of smart cameras for machine vision are already heading in this direction, adding FPGAs, DSPs, and CPUs to their products so that their customers can build ever-more sophisticated systems without some of the software development needed for custom applications.

But wait! It seems that a group of 10-year-old kids is working on the same idea. Actually, it’s not quite the same idea, since the kids are doing it with a simple camera kit called BigShot (http://www.bigshotcamera.org/). The creator of BigShot is Shree Nayar, chairman of Columbia University’s computer-science department and director of the Computer Vision Laboratory.

BigShot is a build-it-yourself camera. It comes in a kit with fewer than 20 parts that snap and screw together simply. When it’s finished, users can peer through the transparent back and, with the help of labels preprinted on the plastic, show curious friends how the camera works. The labels point out the microprocessor, the memory chip, and other features that let this homemade device digitally capture, store, and reproduce images.



BigShot takes normal, panoramic, and even three-dimensional pictures. But the real point of the camera isn’t the photos. It’s to use the camera as an excuse to expose the kids to as many science and engineering concepts as possible.

Nayar worked with a group of contractors to flesh out his initial design and build the first set of working prototypes. He also worked with a group of undergraduate and graduate students at Columbia to develop the online educational materials, design the Bigshot website, and conduct the field tests.

So far there have been test sites in New York City; Bengaluru, India; and Vung Tau, Vietnam, where the camera has served as a means for children of very different social and economic backgrounds to communicate and express themselves.

What can vendors and integrators of machine-vision products learn from such an undertaking? One lesson, perhaps, is that it is critical to educate young people in science and engineering and encourage some to follow these career paths.

Another is that simplicity and transparency help make technology a more useful tool--whether in education, manufacturing, security, biomedical research, or human relations.

In ways we may not yet recognize, the future success of such machine vision and image processing applications is already being secured by the interest, enthusiasm, and energy of 10-year olds fiddling with do-it-yourself cameras.