Innovative manufacturers have solved many of the engineering design challenges involved in the development of service robots. In particular, efficient electric motors and the latest lithium-ion battery technology mean that a robot’s motion can be accurately controlled, and that stand-alone operation when not connected to a power outlet can be sustained for long periods.
The search for valuable differentiation is now moving on to the interaction between a robot and its environment: not only the built or open space within which it works, but also the humans it encounters as it moves around its operating domain. The scope to improve environment awareness and human-machine interaction is as great in consumer service robots such as automatic floor cleaners as it is in professional or industrial robots, such as robot valets in hotels and other hospitality settings. And here, optical technology is emerging as a valuable enabler of new capabilities and higher performance. This article describes the latest innovations in optical technology which are set to be deployed in the next generation of professional and personal service robots.
Environment awareness: from 2D to 3D sensing
Figure 1: A 3D sensing system enables a service robot to detect an obstacle in its path which is invisible to a traditional 2D mechanical LiDAR scanner.
(Source: ams OSRAM)
Object detection and mapping are crucial functions for a fully autonomous robot to perform. The more precisely and accurately the robot senses its environment, the faster and more efficiently it can move around a shared space. This requirement led robot manufacturers to implement various optical technologies for scanning the environment, including visible light cameras and mechanical LiDAR systems.
The latest approach is to adopt 3D sensing, a technology deployed in high volume in mobile phones to enable facial recognition. 3D sensing uses a combination of infrared emitters (IR LEDs or lasers) and high-resolution IR sensors to build a ‘depth map’, which accurately represents the size and shape of objects in the field of view, and their distance from the viewer.
3D sensing techniques include:
Structured light – a dot pattern is projected on to the object, and its shape is inferred from the distorted pattern reflected back to the image sensor.
Active stereo vision – light emitted from an IR dot pattern projector is reflected from objects in the field of view, and the reflected light is detected by a pair of image sensors.
Direct time-of-flight (dToF) – short light pulses from an infrared laser (e.g. a vertical cavity surface emitting laser (VCSEL) or edge-emitting laser (EEL)) are emitted into the field of interest via an optical system. Pulses reflected from objects are detected by a detector such as a single-photon avalanche diode (SPAD). The interval between emission and detection is timed, and the time is converted to a distance measurement. Firmware renders the dToF measurements as a 3D depth map.
Indirect time-of-flight (iToF) – a continuous modulated light wave is emitted by an infrared source (e.g. a VCSEL flood illuminator), and the phase difference between the outgoing and incoming signals is measured to calculate the distance.

Each technique has different advantages for different types of applications.
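As a minimal illustration of the arithmetic behind both time-of-flight techniques, the sketch below converts a timed pulse interval (dToF) and a measured phase shift (iToF) into distances. The function names and the 100 MHz modulation frequency are illustrative assumptions, not parameters of any specific ams OSRAM device.

```python
# Illustrative sketch of time-of-flight distance arithmetic (not a real sensor API).
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def dtof_distance(round_trip_time_s: float) -> float:
    """dToF: the timed interval covers the path to the object and back,
    so the distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

def itof_distance(phase_shift_rad: float, mod_freq_hz: float = 100e6) -> float:
    """iToF: the phase shift of the returning modulated wave, as a fraction
    of a full cycle, gives the round-trip distance (within one unambiguous
    range of half a modulation wavelength)."""
    modulation_wavelength_m = SPEED_OF_LIGHT / mod_freq_hz
    return (phase_shift_rad / (2.0 * math.pi)) * modulation_wavelength_m / 2.0

print(f"{dtof_distance(10e-9):.2f} m")    # a 10 ns round trip -> ~1.50 m
print(f"{itof_distance(math.pi):.2f} m")  # a half-cycle shift at 100 MHz -> ~0.75 m
```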
A dToF sensor can, for example, be used to save power: a low-power dToF system detects the presence of an object and triggers a higher-power 3D sensing system to wake up and produce a depth map of the object. In stereo vision, a pair of image sensors with differing views provides the 2D digital images from which the 3D information is extracted. The Mira220 is a near-infrared (NIR) global shutter image sensor with high quantum efficiency in both the visible and NIR spectrum, reaching up to 38% at 940 nm in ams OSRAM's internal tests. This allows device manufacturers to reduce the output power of the NIR illuminators used alongside the image sensor in 2D and 3D sensing systems, reducing total power consumption.
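The wake-on-detect scheme just described might be structured as follows. This is a hedged sketch only: PresenceSensor and DepthCamera are placeholder classes with dummy readings, not real ams OSRAM driver APIs, and the 1.5 m wake threshold is an assumed value.

```python
# Hypothetical sketch of a low-power dToF presence trigger for a 3D sensing system.
import random

WAKE_THRESHOLD_M = 1.5  # assumed distance below which the 3D system wakes up

class PresenceSensor:
    """Stand-in for a low-power single-zone dToF ranger."""
    def read_distance_m(self) -> float:
        return random.uniform(0.5, 4.0)  # dummy reading for this sketch

class DepthCamera:
    """Stand-in for the higher-power 3D sensing system."""
    def wake(self): print("depth camera: wake")
    def capture_depth_map(self): return [[0.0]]  # dummy depth map
    def sleep(self): print("depth camera: sleep")

def monitor_once(ranger: PresenceSensor, camera: DepthCamera) -> None:
    """One polling step: wake the 3D system only when something is near."""
    if ranger.read_distance_m() < WAKE_THRESHOLD_M:
        camera.wake()
        depth_map = camera.capture_depth_map()  # hand off to the navigation stack
        camera.sleep()

monitor_once(PresenceSensor(), DepthCamera())
```

The design intent is that the cheap, always-on ranger absorbs the continuous monitoring load, so the power-hungry depth camera only runs when there is actually something to map.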
Whichever 3D sensing method is used, the technology provides an ideal replacement for the mechanical LiDAR systems used in earlier robot designs: mechanical LiDAR is expensive, has a short lifetime in harsh operating conditions, and provides only a flat 2D view. A 3D sensing system, with its lower total cost and longer lifetime, is already more attractive than mechanical LiDAR technology. As Figure 1 shows, the improved view of the environment also enables a robot to operate more efficiently and navigate faster around obstacles.
Smart zoning to accelerate a robot's movement
Figure 2: Information about the direction of travel reduces the need for safety stops, increasing the efficiency of a robot’s operation.
(Source: ams OSRAM)
The safety of people who share a robot’s environment is of critical importance. Robot safety systems use the concept of zoning: demarcating an area around the robot into which other objects, including people, are not allowed to trespass. Robot developers have discovered, however, that the area does not need to be fixed. For illustration, consider how pedestrians stay safe around moving vehicles: they keep a long distance away from the front of a moving vehicle. But a pedestrian waiting at a road crossing can safely step into the road within touching distance of the rear of a car after it has passed the crossing.
This is an example of smart zoning – and it depends on information not only about the real-time position of an object, but also its direction of travel. This is an additional application for 3D sensing: a series of detailed 3D scans of the environment enables a robot to compute the position, speed and direction of travel of multiple objects around it, and to navigate faster and more efficiently by reducing the safe zone to the rear of moving objects (see Figure 2).
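A minimal sketch of the computation involved, assuming object positions have already been extracted from two successive depth maps, is shown below. The function names and the 1.5 m / 0.3 m zone radii are illustrative assumptions, not values from any published safety standard.

```python
# Illustrative smart-zoning sketch: estimate an object's velocity from two
# successive depth-map positions, then shrink the clearance behind it.
import math

def velocity(p_prev, p_now, dt):
    """Velocity vector (m/s) of an object seen at p_prev, then p_now, dt seconds apart."""
    return ((p_now[0] - p_prev[0]) / dt, (p_now[1] - p_prev[1]) / dt)

def safety_radius(robot_pos, obj_pos, obj_vel, front_m=1.5, rear_m=0.3):
    """Keep a long clearance ahead of a moving object, a short one behind it."""
    to_robot = (robot_pos[0] - obj_pos[0], robot_pos[1] - obj_pos[1])
    if math.hypot(*obj_vel) < 1e-6:
        return front_m  # stationary object: apply the full clearance all round
    # A positive dot product means the robot lies ahead of the object's travel direction.
    ahead = (to_robot[0] * obj_vel[0] + to_robot[1] * obj_vel[1]) > 0
    return front_m if ahead else rear_m

# A person walking away from the robot allows a much smaller safety zone.
v = velocity((2.0, 0.0), (2.5, 0.0), dt=0.5)     # 1 m/s along +x
print(safety_radius((0.0, 0.0), (2.5, 0.0), v))  # robot is behind the person: 0.3
```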
A corridor is an excellent laboratory for the study of non-verbal human communication. Somehow, people walking towards each other in a narrow space generally manage to pass without bumping or hesitating. Spoken instructions are rarely used – instead, humans use a combination of eye contact, body language and gestures to signal where they intend to go, and to work out where they expect others to go. These techniques are not available in human-robot interactions. But humans are adept at avoiding a moving obstruction such as a robot if they know where it intends to go.
Projection lighting through micro-lens arrays to signal intent
Figure 3: Projection lighting implemented with a micro-lens array may be used to signal a robot’s intentions to people in the vicinity.
(Source: ams OSRAM)
New light projection technology provides an ideal method for a robot to signal intention. In the automotive sector, ams OSRAM has pioneered the implementation of projection lighting through micro-lens arrays to display, for instance, turn signals on to the road surface below the wing mirror, or to project a 'welcome light carpet' on to the pavement when the driver walks up to a locked car.
The ams OSRAM micro-lens array technology enables the projection of high-resolution static or semi-dynamic messages or symbols on to any flat surface: in a robot, a micro-lens array could project symbols such as left-turn or right-turn arrows, or a set of pulsing rings surrounding the robot to indicate the extent of its safety zone (see Figure 3). By communicating information and intention with light, the robot can guide humans to step out of its way or navigate smoothly around it, without slowing the movement either of the robot or the person.
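The mapping from planned motion to projected symbol can be very simple, as the sketch below suggests. The function, threshold and symbol names are hypothetical, invented for illustration; the article does not specify how a robot's navigation stack would drive the projector.

```python
# Hypothetical mapping from a robot's planned motion to a projected symbol.
def intent_symbol(angular_velocity_rad_s: float, turn_threshold: float = 0.2) -> str:
    """Pick a projection symbol from the planned angular velocity
    (positive = counter-clockwise, i.e. a left turn)."""
    if angular_velocity_rad_s > turn_threshold:
        return "left_turn_arrow"
    if angular_velocity_rad_s < -turn_threshold:
        return "right_turn_arrow"
    return "straight_ahead_arrow"

print(intent_symbol(0.5))   # -> "left_turn_arrow"
print(intent_symbol(0.0))   # -> "straight_ahead_arrow"
```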
As this article has shown, optical technologies can be used to improve a robot’s awareness of its environment and the people around it. Optical technology can also enhance a robot’s awareness of the materials in its environment. Infrared spectrometry, in the near-infrared (NIR) and short-wave infrared (SWIR) bands, is an established field of science, used to detect the distinctive spectral signatures of materials and to measure their moisture content. Traditionally, a spectrometer is a specialized and expensive piece of benchtop scientific equipment. ams OSRAM is changing this market by developing spectrometer-on-a-chip solutions.
In a robot type such as a floor cleaner, these spectral sensing ICs can detect flooring materials, distinguishing wool from polyester, wood and ceramics. Besides material categorization, ams OSRAM's spectral sensor portfolio also allows properties such as moisture content to be measured accurately. This enables cleaning operations to be automatically optimized for the type of floor. NIR and SWIR spectrometry is a more accurate method of identifying flooring materials than a visual camera backed by complex algorithms, and is simpler to implement: it relies only on machine learning-based algorithms to match an acquired spectral power distribution with a reference sample stored in memory.
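The matching step described above can be as simple as a nearest-neighbour comparison of normalized spectra. The sketch below uses invented reference spectra and cosine similarity to illustrate the idea; it is not the shipping algorithm, and real systems would use calibrated NIR/SWIR channel responses.

```python
# Illustrative spectral matching: classify a measured spectrum against stored references.
import math

# Invented reference spectral power distributions (one value per sensor channel).
REFERENCES = {
    "wool":      [0.1, 0.3, 0.5, 0.7, 0.4],
    "polyester": [0.6, 0.5, 0.3, 0.2, 0.1],
    "wood":      [0.2, 0.4, 0.6, 0.5, 0.3],
    "ceramic":   [0.7, 0.6, 0.4, 0.1, 0.1],
}

def normalize(spectrum):
    """Scale a spectrum to unit length so only its shape matters."""
    norm = math.sqrt(sum(v * v for v in spectrum))
    return [v / norm for v in spectrum]

def classify(measured):
    """Return the reference material whose spectrum best matches (cosine similarity)."""
    m = normalize(measured)
    def similarity(name):
        r = normalize(REFERENCES[name])
        return sum(a * b for a, b in zip(m, r))
    return max(REFERENCES, key=similarity)

print(classify([0.11, 0.29, 0.52, 0.68, 0.41]))  # -> "wool"
```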
Innovation in optical technology advances the state of the art in robotics
The market for service robots is driven by competition over smart features and higher performance. Customers care about the cleaning performance of automatic floor cleaners, for instance, but also the speed with which they operate, their ability to avoid obstacles including people or pets, and their success in navigating around occupied spaces.
Competition in the robotics market is now leading manufacturers to evaluate new optical technologies that enhance a robot's awareness of its environment, and of the people and materials within it. As in the case of 3D sensing and infrared (IR) spectrometry, many of these technologies are borrowed from other markets – but their deployment promises to bring new value to the latest generation of service robots.
* Clemens Mueller is Senior Director Application Marketing for industrial and medical at ams OSRAM.