Glossary

A

AGV (Automated Guided Vehicles)
AGVs are machines that rely on guidance systems and infrastructure such as beacons or magnetic tape on the floor. AGVs follow routes provided by the central system; they can’t change their path, so they typically stop in front of an obstacle and wait until it has been cleared, as they can’t move out of their track. For this reason, they’re typically used in controlled indoor spaces.

AMR (Autonomous Mobile Robots)
AMRs are self-driving robotic vehicles that can plot their own path to a given destination and navigate around obstacles. Their defining feature is that they run SLAM (simultaneous localization and mapping) algorithms. SLAM algorithms enable AMRs to build a map of their environment and track their location on the map, providing the foundation for autonomous navigation. Depending on how advanced their autonomy technology is, AMRs can work safely around people, constantly update the maps of the facility they operate in, and navigate in unstructured spaces. See also SLAM.

Automated Storage and Retrieval System (AS/RS)
An automated, computer-operated storage system, off-limits to humans, in which goods are placed into and retrieved from storage by a central system. An AS/RS can use automated guided vehicles (AGVs) or conveyor systems. It is efficient for high-volume items but CapEx-intensive and susceptible to single points of failure. AS/RS solutions were first developed in the 1960s.

Autonomous
The term describes a robot or a robotic vehicle that requires no direct human operator or guidance system to operate. Autonomous vehicles are self-driving, as opposed to “automated” vehicles, which require some sort of guidance system to move, such as magnetic tape.

C

Cobot
Short for collaborative robot. See Collaborative robot below.

Collaborative robot
Collaborative robots and robotic vehicles are designed to navigate and work safely with and around people and mobile equipment of any kind, whether human-operated, automated, or autonomous. Collaborative robots don’t require a safety fence.

D

Dense 3D map
A full 3D map of the environment, with all visible surfaces included. Dense 3D maps are used within the autonomy system when the robot requires full detail for complex tasks such as autonomous trailer unloading. See also Sparse 3D map.

Depth estimation
The capability of an autonomous mobile robot or a robotic vehicle to estimate the distance to objects from camera input alone, with no other sensor data.
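To make the geometry concrete, here is a minimal sketch (not any particular robot’s implementation) of the classic pinhole stereo relation, where depth follows from the camera focal length, the baseline between the two cameras, and the disparity between a feature’s position in the two images; all numbers below are invented:

```python
# Illustrative sketch only: the pinhole stereo relation Z = f * B / d
# turns pixel disparity into metric depth. All values are made up.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Depth in meters from disparity in pixels: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# A 640 px focal length and a 12 cm baseline with 16 px of disparity
# place the object at 640 * 0.12 / 16 = 4.8 m.
print(depth_from_disparity(16.0, 640.0, 0.12))  # 4.8
```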

F

Fleet management
Software solutions for setting up and controlling a fleet of autonomous mobile robots or robotic vehicles. Ideally, a fleet management system enables workflow orchestration, deeply integrating human employees and their robotic helpers.
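For a taste of what task assignment can look like under the hood, here is a toy dispatch rule: send the nearest idle robot to each new task. Real fleet managers also weigh battery level, traffic, and workflow priorities; all names and coordinates here are invented:

```python
# Toy dispatch rule for a robot fleet: nearest idle robot wins the task.
import math

idle_robots = {"amr-1": (0.0, 0.0), "amr-2": (12.0, 5.0)}  # id -> position (m)
task_location = (10.0, 4.0)

# Pick the robot with the smallest straight-line distance to the task.
nearest = min(idle_robots, key=lambda rid: math.dist(idle_robots[rid], task_location))
print(f"dispatch {nearest} to {task_location}")  # amr-2 is closer
```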

Follow me
A material-handling automation use case in which autonomous mobile robots (AMRs) or automated guided vehicles (AGVs) follow a person around the facility. Vision-based AMRs, depending on how advanced their autonomy technology is, can follow a person by “sight” alone, thanks to their object tracking feature (see Object tracking for more detail). AGVs require that the person carry a device emitting a signal that the machine can follow.

G

Goods to Person (G2P)
Goods to Person is an order fulfillment method that uses automation to bring stock to pickers. It can involve autonomous mobile robots (AMRs), automated guided vehicles (AGVs), or automated storage and retrieval systems (AS/RS). Of these, AMRs are the only automation option that can be used in unstructured spaces. A typical use case would involve an AMR bringing a container with the required goods to a packing station and returning it after the picker takes the required items from the container.

I

Instance segmentation
The capability of an autonomous robot or robotic vehicle to detect separate instances of the same type of object. An example would be detecting separate pallets in a stack. This is a fairly advanced autonomy technology feature and requires input from cameras. Instance segmentation enables complex operations that wouldn’t be possible otherwise.

L

LiDAR (Light Detection And Ranging)
LiDAR is a laser ‘radar’ – a binary sensor that shows only whether something is there or not; it doesn’t capture any other data. 2D LiDAR is the standard sensor for industrial autonomous mobile robots (AMRs) and automated guided vehicles (AGVs) and is required for safety certification. However, a 2D LiDAR senses only a thin sliver of the world: a single plane, usually some 20 cm above the floor on most AMRs and AGVs. Any objects below or above that plane are invisible to it. 3D LiDARs are generally unsuitable for mobile robots due to their computing and energy requirements.

Localization
The capability of an autonomous mobile robot or a robotic vehicle to determine its own position on its map of the facility.

M

Mapping in 3D
Some autonomous mobile robots have the capability of building 3D maps of the environment in which they operate. For 3D mapping, AMRs require input from stereo cameras or another 3D sensor. 3D maps can be dense, with all visible surfaces included in the map, or sparse, using only data points that pick up natural features. See also Sparse 3D map and Dense 3D map.

Material handling automation
An umbrella term covering technologies, equipment, and processes that minimize human input in material handling. Material handling automation can include conveyor infrastructure, automated storage and retrieval systems (AS/RS), automated guided vehicles (AGVs), or autonomous mobile robots (AMRs). AMRs, the most recent technology, offer the most flexibility, and vision-based AMRs are the only technology suitable for unstructured spaces, where people, vehicles, and cargo constantly move about. See also AS/RS, AGV, AMR and Robot vision.

O

Object tracking
The capability of an autonomous robot or a robotic vehicle to track separate objects in its field of vision. Object tracking is crucial for multiple processes and operations of an autonomous system, including safety and navigation. Depending on how advanced an autonomy technology is, it can also enable the robot to follow a single person using only vision, without any kind of signal-emitting device.  
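As a deliberately minimal sketch of the frame-to-frame association step at the heart of tracking (a real tracker adds motion prediction and appearance features; all data here is invented):

```python
# Toy nearest-neighbour tracker: detections are (x, y) centroids per
# frame; each is greedily matched to the closest unclaimed track.
import math

def update_tracks(tracks, detections, max_dist=50.0):
    """Greedy nearest-neighbour association of detections to tracks."""
    next_id = max(tracks, default=-1) + 1
    updated = {}
    for det in detections:
        # Closest existing track that hasn't been claimed this frame.
        candidates = [tid for tid in tracks if tid not in updated]
        best = min(candidates, key=lambda t: math.dist(tracks[t], det), default=None)
        if best is not None and math.dist(tracks[best], det) <= max_dist:
            updated[best] = det       # same object, new position
        else:
            updated[next_id] = det    # new object enters the scene
            next_id += 1
    return updated

# Two people in frame 1; both move slightly in frame 2, keeping their IDs.
tracks = update_tracks({}, [(100.0, 200.0), (400.0, 220.0)])
tracks = update_tracks(tracks, [(110.0, 205.0), (395.0, 230.0)])
print(tracks)  # {0: (110.0, 205.0), 1: (395.0, 230.0)}
```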

Optical flow
The capability of a moving autonomous vehicle to infer the motion of other objects from their apparent motion in its field of view. Put in very simple terms: when a robot moves, stationary objects appear to move in the opposite direction at the same speed, while another moving object appears to move differently. An autonomous robot must deduce the direction and speed of other moving objects, also taking into account depth and its own motion. Optical flow, of course, requires camera input.
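The sketch below illustrates the idea with OpenCV’s dense Farneback optical flow, a common off-the-shelf method rather than any specific robot’s pipeline; the synthetic frames simply shift the whole scene five pixels to the right:

```python
import cv2
import numpy as np

# Two synthetic grayscale frames: a blurred random texture shifted
# 5 px to the right, standing in for consecutive camera frames.
rng = np.random.default_rng(0)
base = (rng.random((120, 160)) * 255).astype(np.uint8)
prev_gray = cv2.GaussianBlur(base, (7, 7), 0)
next_gray = np.roll(prev_gray, 5, axis=1)

# Dense Farneback flow: a per-pixel (dx, dy) apparent-motion field.
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# For a forward-moving robot, static surroundings flow outward from a
# single point (the focus of expansion); pixels that break the expected
# pattern belong to objects moving on their own.
print("mean horizontal motion (px):", round(float(flow[..., 0].mean()), 1))
# should come out close to the true 5 px shift
```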

P

Pose estimation 
The capability of a robot or a robotic vehicle to estimate the exact pose of an object – for example, a pallet – in space. Without pose estimation, an autonomous system can’t perform complex operations such as autonomous trailer unloading. Camera input is crucial for pose estimation.

R

Robot vision
The capability of a robot to perceive and understand its environment by processing camera input, usually with machine learning. Vision-based robots not only perform better than LiDAR-based robots but do so across a more extensive range of use cases. Depending on how advanced their vision-based autonomy is, vision-based AMRs can be used in unstructured, highly dynamic environments – those bustling areas where people, equipment, and cargo constantly move about the facility. They can detect low-lying and negative obstacles that LiDAR sensors can’t perceive (see LiDAR for more detail). Also, unlike LiDAR, cameras provide rich data that can be used for various ancillary purposes, such as inventory or security.

S

Semantic segmentation
The capability of an autonomous mobile robot to understand what surrounds it – to detect separate objects and recognize what those objects are. Semantic understanding makes robots better at mapping and modeling the world (a wall will always be there, a box won’t) and safer around people and moving equipment. Semantic segmentation is a fairly advanced feature for vision-based autonomous mobile robots. It requires cameras and machine learning, as the system must be trained to recognize objects.

Sensor fusion
Combining data from multiple sensors – for example, 2D LiDAR and stereo cameras – to produce a more reliable picture of the environment than any single sensor can provide.
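A minimal sketch of one classic fusion step: combining two noisy range readings by weighting each with the inverse of its variance, as in a one-dimensional Kalman update (the sensor noise figures below are invented):

```python
# Inverse-variance weighted fusion of two noisy measurements.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Fuse two measurements; the less noisy one gets more weight."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2), 1.0 / (w1 + w2)

# A precise LiDAR range and a noisier stereo-depth estimate of the same
# obstacle: the fused value lands close to the more reliable reading.
value, variance = fuse(4.80, var1=0.01, z2=4.60, var2=0.09)
print(round(value, 2), round(variance, 3))  # 4.78 0.009
```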

SLAM (Simultaneous Localization and Mapping)
SLAM is an algorithm enabling a mobile robot or a robotic vehicle to build a map of its environment and keep track of its location on the map at the same time. Without SLAM, robots can’t be considered autonomous. SLAM can be based on various sensor inputs, including 2D LiDAR and stereo cameras. See AMR, LiDAR and Stereoscopic cameras for more detail.
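For a feel of the “simultaneous” part, here is a deliberately tiny one-dimensional toy, not a real SLAM algorithm (no probabilities, no loop closure; all numbers are invented): the robot dead-reckons its position, records landmarks where it first sees them, and later uses re-observed landmarks to correct its drifting estimate:

```python
def run(steps):
    true_pos, est_pos = 0.0, 0.0
    landmarks = {}                       # landmark id -> estimated position
    for cmd, seen in steps:              # (commanded move, {id: measured range})
        true_pos += cmd
        est_pos += cmd * 1.05            # odometry over-reads by 5% (drift)
        for lid, rng in seen.items():
            if lid in landmarks:
                # Localization: re-observing a mapped landmark corrects drift.
                est_pos = landmarks[lid] - rng
            else:
                # Mapping: place a new landmark using the current estimate.
                landmarks[lid] = est_pos + rng
    return true_pos, est_pos

steps = [
    (1.0, {"A": 4.0}),   # move 1 m, see landmark A 4 m ahead
    (1.0, {}),           # move 1 m, see nothing
    (1.0, {"A": 2.0}),   # move 1 m, see A again, now 2 m ahead
]
true_pos, est_pos = run(steps)
print(round(true_pos, 2), round(est_pos, 2))  # 3.0 3.05 (vs 3.15 uncorrected)
```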

Sparse 3D map
This type of 3D map, built by autonomous mobile robots, consists of data points that pick up natural features. A sparse map thus doesn’t show entire surfaces, just a “cloud” of points in 3D space marking the natural features that the robot detects. Robots use sparse 3D maps for operations that don’t require full detail, such as localization. See also Dense 3D map and Localization.

Stereoscopic cameras (or Stereo cameras)
A pair of cameras enabling advanced visual perception, including depth estimation, much like a pair of eyes. Stereo cameras provide autonomous mobile robots (AMRs) with rich data so that they can “see” and understand the world around them. Cameras mounted on AMRs intended for industrial spaces must be specially designed for a tough working environment, including poor indoor lighting and vibration from the AMR’s motion.

Swarm order fulfillment
An order fulfillment method involving complex coordination of autonomous mobile robots (AMRs). A typical swarm use case involves AMRs moving between employees stationed in separate warehouse zones. Each AMR moves from employee to employee (from zone to zone), and employees place items from ‘their’ zone into the containers the AMRs carry. As an AMR ends its round, it takes the items making up an order to be packed and shipped.

V

vAMR (visual Autonomous Mobile Robots)
Autonomous mobile robots using visual perception for autonomous navigation. See also Robot vision.

vSLAM (visual Simultaneous Localization and Mapping)
SLAM algorithm based on robot vision. See SLAM for more detail.