Advanced Optical Technology Improving Robots' Interaction With People and the Environment

A guest contribution by Clemens Mueller*


Object detection and mapping are critical functions that a fully autonomous robot must perform. Various optical technologies are available for robots to sense their environment, including visible-light cameras and mechanical LiDAR systems.

Robot interaction: In order for a robot to recognise people and its surroundings, it is equipped with cameras, sensors and LiDAR. (Source: © Andrey Popov - stock.adobe.com)

Innovative manufacturers have solved many of the engineering design challenges involved in the development of service robots. In particular, efficient electric motors and the latest lithium-ion battery technology mean that a robot’s motion can be accurately controlled, and that stand-alone operation when not connected to a power outlet can be sustained for long periods.

The search for valuable differentiation is now moving on to the interaction between a robot and its environment: not only the built or open space within which it works, but also the humans it encounters as it moves around its operating domain. The scope to improve environment awareness and human-machine interaction is as great in consumer service robots such as automatic floor cleaners as it is in professional or industrial robots, such as robot valets in hotels and other hospitality settings. And here, optical technology is emerging as a valuable enabler of new capabilities and higher performance. This article describes the latest innovations in optical technology which are set to be deployed in the next generation of professional and personal service robots.

Environment awareness: from 2D to 3D sensing

Figure 1: A 3D sensing system enables a service robot to detect an obstacle in its path which is invisible to a traditional 2D mechanical LiDAR scanner. (Source: ams OSRAM)

Object detection and mapping are crucial functions for a fully autonomous robot to perform. The more precisely and accurately the robot senses its environment, the faster and more efficiently it can move around a shared space. This requirement led robot manufacturers to implement various optical technologies for scanning the environment, including visible light cameras and mechanical LiDAR systems.

The latest approach is to adopt 3D sensing, a technology deployed in high volume in mobile phones to enable facial recognition. 3D sensing uses a combination of infrared emitters (IR LEDs or lasers) and high-resolution IR sensors to build a ‘depth map’, which accurately represents the size and shape of objects in the field of view, and their distance from the viewer.

3D sensing techniques include:

  • Structured light – projection of a so-called dot pattern on to the object, and inferring its shape from the distorted pattern reflected back to the image sensor.
  • Active stereo vision – light emitted from an IR dot pattern projector is reflected from objects in the field of view, and the reflected light is detected by a pair of image sensors.
  • Direct time-of-flight (dToF) – short light pulses from an infrared laser (e.g. a vertical cavity surface emitting laser (VCSEL) or edge-emitting laser (EEL)) are emitted into the field of interest via an optical system. Pulses reflected from objects are detected by a detector (e.g. a single-photon avalanche diode (SPAD)). The interval between emission and detection is timed and the time converted to a distance measurement. Firmware renders the dToF measurements as a 3D depth map.
  • Indirect time-of-flight (iToF) – a continuous modulated light wave is emitted by an infrared source (e.g. a VCSEL flood illuminator); the phase difference between the outgoing and incoming signals is measured to calculate the distance.

Each technique has different advantages for different types of applications.
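The distance calculations behind the two time-of-flight techniques can be sketched in a few lines. The following values are illustrative assumptions, not figures from any particular sensor: a real dToF or iToF module reports these quantities through its driver.

```python
# Sketch: converting time-of-flight measurements to distances.
import math

C = 299_792_458.0  # speed of light in m/s

def dtof_distance(round_trip_s: float) -> float:
    """dToF: the pulse travels to the object and back, so halve the path."""
    return C * round_trip_s / 2.0

def itof_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """iToF: the phase shift of a continuous modulated wave maps to a
    distance within one unambiguous range of c / (2 * f_mod)."""
    return (phase_rad / (2.0 * math.pi)) * C / (2.0 * mod_freq_hz)

# A 20 ns round trip corresponds to roughly 3 m.
print(round(dtof_distance(20e-9), 2))
# A pi/2 phase shift at 20 MHz modulation is roughly 1.87 m.
print(round(itof_distance(math.pi / 2, 20e6), 2))
```

Note the iToF ambiguity: at 20 MHz modulation, distances wrap around every c / (2 × 20 MHz) ≈ 7.5 m, which is why modulation frequency is chosen to suit the sensing range.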

A dToF sensor can, for example, be used to save power: a low-power dToF system may be used to detect the presence of an object, triggering a higher-power 3D sensing system to wake up and produce a depth map image of the object. In stereo vision, a pair of image sensors with differing viewpoints provides the 2D digital images from which the 3D information is extracted. The Mira220 is a NIR global-shutter image sensor with high quantum efficiency in the visible as well as the near-infrared spectrum, reaching up to 38% at 940 nm based on ams OSRAM's internal tests. This allows device manufacturers to reduce the output power of the NIR illuminators used alongside the image sensor in 2D and 3D sensing systems, reducing total power consumption.
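The geometry behind stereo vision is the standard triangulation relation: depth is focal length times baseline divided by disparity. A minimal sketch, assuming a calibrated and rectified image pair; the focal length and baseline below are illustrative, not from any datasheet:

```python
# Sketch: depth from stereo disparity (calibrated, rectified image pair).
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Z = f * B / d: depth in metres from pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# 800 px focal length, 5 cm baseline, 20 px disparity -> 2.0 m depth.
print(depth_from_disparity(800, 0.05, 20))
```

The relation also shows why a wider baseline or longer focal length improves depth resolution at range: the same depth change produces a larger disparity change.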

Whichever 3D sensing method is used, the technology provides an ideal replacement for the mechanical LiDAR systems used in earlier robot designs: mechanical LiDAR is expensive, offers a short operating lifetime in harsh operating conditions, and only provides a flat 2D view. A 3D sensing system, which has a lower total cost and lasts longer, is already more attractive than mechanical LiDAR technology. As Figure 1 shows, the improved view of the environment also enables a robot to operate more efficiently and navigate faster around obstacles.

Smart zoning to accelerate a robot's movement

Figure 2: Information about the direction of travel reduces the need for safety stops, increasing the efficiency of a robot’s operation. (Source: ams OSRAM)

The safety of people who share a robot’s environment is of critical importance. Robot safety systems use the concept of zoning: demarcating an area around the robot into which other objects, including people, are not allowed to trespass. Robot developers have discovered, however, that the area does not need to be fixed. For illustration, consider how pedestrians stay safe around moving vehicles: they keep a long distance away from the front of a moving vehicle. But a pedestrian waiting at a road crossing can safely step into the road within touching distance of the rear of a car after it has passed the crossing.

This is an example of smart zoning – and it depends on information not only about the real-time position of an object, but also its direction of travel. This is an additional application for 3D sensing: a series of detailed 3D scans of the environment enables a robot to compute the position, speed and direction of travel of multiple objects in its environment, and to navigate faster and more efficiently by reducing the safe zone to the rear of moving objects (see Figure 2). A corridor is an excellent laboratory for the study of non-verbal human communication. Somehow, people walking towards each other in a narrow space generally manage to pass without bumping or hesitating.
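The smart-zoning idea can be sketched as a clearance that varies with the robot's position relative to a tracked object's heading: large ahead of a moving person, small behind. The margins below are illustrative assumptions for the sketch, not safety-certified values, and a production system would derive them from a safety standard and the object's speed.

```python
# Sketch of smart zoning: required clearance around a tracked object
# depends on where the robot sits relative to the object's heading.
import math

def required_clearance(obj_pos, obj_vel, robot_pos,
                       front_margin=2.0, rear_margin=0.3):
    """Return the clearance (m) the robot must keep from the object.
    Ahead of a moving object the margin is large; behind it, small."""
    dx, dy = robot_pos[0] - obj_pos[0], robot_pos[1] - obj_pos[1]
    speed = math.hypot(*obj_vel)
    if speed < 1e-6:                      # stationary object: symmetric zone
        return front_margin
    # Cosine of the angle between the object's heading and the robot's bearing.
    cos_a = (dx * obj_vel[0] + dy * obj_vel[1]) / (math.hypot(dx, dy) * speed)
    # Blend smoothly from rear margin (cos_a = -1) to front margin (cos_a = +1).
    return rear_margin + (front_margin - rear_margin) * (cos_a + 1) / 2

# Robot directly ahead of a person walking along +x: full front margin (2.0 m).
print(required_clearance((0, 0), (1, 0), (3, 0)))
# Robot directly behind the person: only the rear margin (0.3 m) applies.
print(required_clearance((0, 0), (1, 0), (-3, 0)))
```

This mirrors the pedestrian example above: the zone shrinks behind a moving object, letting the robot pass close to the rear without triggering a safety stop.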


Spoken instructions are rarely used – instead, humans use a combination of eye contact, body language and gestures to signal where they intend to go, and to work out where they expect others to go. These techniques are not available in human-robot interactions. But humans are adept at avoiding a moving obstruction such as a robot if they know where it is intending to go.

Projection lighting through micro-lens arrays

Figure 3: Projection lighting implemented with a micro-lens array may be used to signal a robot’s intentions to people in the vicinity. (Source: ams OSRAM)

New light projection technology provides an ideal method for a robot to signal intention. In the automotive sector, ams OSRAM has pioneered the implementation of projection lighting through micro-lens arrays to display, for instance, turn signals on to the road surface below the wing mirror, or to project a 'welcome light carpet' on to the pavement when the driver walks up to a locked car.

The ams OSRAM micro-lens array technology enables the projection of high-resolution static or semi-dynamic messages or symbols on to any flat surface: in a robot, a micro-lens array could project symbols such as left-turn or right-turn arrows, or a set of pulsing rings surrounding the robot to indicate the extent of its safety zone (see Figure 3). By communicating information and intention with light, the robot can guide humans to step out of its way or navigate smoothly around it, without slowing the movement either of the robot or the person.

As this article has shown, optical technologies can be used to improve a robot’s awareness of its environment and the people around it. Optical technology can also enhance a robot’s awareness of the materials in the environment. Infrared spectrometry, in the near-infrared (NIR) and short-wave infrared (SWIR) bands, is an established field of science, used to detect the distinctive spectral signatures of materials, and to measure their moisture content. Traditionally, a spectrometer is a specialized and expensive piece of benchtop scientific equipment. ams OSRAM is changing this market by developing spectrometer-on-a-chip solutions.

In a robot type such as a floor cleaner, these spectral sensing ICs can detect flooring materials, distinguishing wool from polyester from wood from ceramics. Besides material categorization, ams OSRAM's spectral sensor portfolio also enables accurate measurement, for example of moisture content. This enables cleaning operations to be automatically optimized for the type of floor. NIR and SWIR spectrometry is a more accurate method of identifying flooring materials than a visual camera backed by complex algorithms, and is simpler to implement: it relies only on machine-learning-based algorithms to match an acquired spectral power distribution with a reference sample stored in memory.
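The matching step described above can be sketched as a nearest-reference search: compare the acquired spectral power distribution against stored reference spectra and pick the best match. The spectra below are made-up toy values with four hypothetical NIR channels, not real material signatures, and a production system would use a trained classifier rather than raw cosine similarity.

```python
# Sketch: classifying a material by matching its spectral power
# distribution against stored references (cosine similarity).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

REFERENCES = {                 # hypothetical 4-channel NIR readings
    "wool":      (0.9, 0.6, 0.3, 0.2),
    "polyester": (0.2, 0.8, 0.7, 0.3),
    "wood":      (0.5, 0.5, 0.6, 0.8),
}

def classify(sample):
    """Return the reference material whose spectrum best matches the sample."""
    return max(REFERENCES, key=lambda m: cosine(sample, REFERENCES[m]))

print(classify((0.85, 0.55, 0.35, 0.25)))  # closest to the "wool" reference
```

Cosine similarity compares spectral shape rather than absolute intensity, which makes the match tolerant of illumination-level differences between the acquisition and the stored reference.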

Innovation in optical technology advances state of the art in robotics

The market for service robots is driven by competition over smart features and higher performance. Customers care about the cleaning performance of automatic floor cleaners, for instance, but also the speed with which they operate, their ability to avoid obstacles including people or pets, and their success in navigating around occupied spaces.

Competition in the robotics market is now leading manufacturers to evaluate new optical technologies. These new technologies enhance a robot's awareness of its environment and of the people and materials within it. As in the case of 3D sensing and infrared (IR) spectrometry, many of these technologies are borrowed from other markets, but their deployment promises to bring new value to the latest generation of service robots.

* Clemens Mueller is Senior Director Application Marketing for industrial and medical at ams OSRAM.

(ID:48689878)