Logistics robot lingo

March 18, 2019

AGVs or AGV robots = Automated Guided Vehicles. Machines that lack full autonomy, relying on guidance infrastructure such as beacons or magnetic tape to navigate. AGVs follow pre-determined paths (“virtual tracks”). They are a slightly more flexible version of the conveyor belt and are not always designed to work safely around humans (as is the case with Kiva robots). AGVs take longer to deploy than autonomous solutions because of their infrastructure requirements, and they cannot travel beyond the path planned for them. When an AGV encounters an obstacle, it can do nothing but stop and wait until the obstacle is cleared away.
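
To make that stop-and-wait behavior concrete, here is a minimal sketch of an AGV's decision loop on its virtual track. It is purely illustrative – the names and structure are invented, not any vendor's software:

```python
# Hypothetical sketch of the AGV behavior described above: follow a fixed
# path of waypoints, and simply stop whenever something blocks the way.
from typing import List, Tuple

Waypoint = Tuple[float, float]

def agv_step(path: List[Waypoint], index: int, obstacle_ahead: bool) -> int:
    """Advance to the next waypoint on the virtual track, or wait in place.

    An AGV cannot re-plan: if the path is blocked, it stops and waits
    until the obstacle is cleared away.
    """
    if obstacle_ahead:
        return index                          # stop and wait; no detour possible
    return min(index + 1, len(path) - 1)      # proceed along the fixed path

if __name__ == "__main__":
    track = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
    pos = 0
    for blocked in (False, True, False):
        pos = agv_step(track, pos, blocked)
        print("at waypoint", track[pos])
```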

AMRs = Autonomous Mobile Robots – fully autonomous and designed to navigate on their own around people, equipment and obstacles. AMRs must have simultaneous localization and mapping (SLAM) capabilities as well as autonomous navigation (path planning). Most AMRs on the market rely on LiDAR.
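
To give a feel for what autonomous navigation (path planning) means in practice, here is a toy sketch: a breadth-first search over a small occupancy grid. Real AMR planners are far more sophisticated; this only illustrates routing around an obstacle rather than stopping at it:

```python
# Toy path planner: breadth-first search over an occupancy grid where
# 0 = free and 1 = occupied. Illustrative only.
from collections import deque

def plan(grid, start, goal):
    """Return a list of cells from start to goal, avoiding occupied cells."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:                      # reconstruct the route
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cur
                queue.append(nxt)
    return None                              # no route exists

if __name__ == "__main__":
    grid = [[0, 0, 0],
            [1, 1, 0],                       # a wall the robot must route around
            [0, 0, 0]]
    print(plan(grid, (0, 0), (2, 0)))
```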

LiDAR = Light Detection And Ranging. LiDAR is a laser-based ranging sensor that allows a machine to sense its surroundings. It is the standard sensor in industrial AMRs. Its limitations are that it cannot sense low-lying obstacles and that it does not perform consistently in highly dynamic, constantly changing environments.
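
For illustration, a planar LiDAR scan is essentially a list of range readings taken at known angles. The sketch below uses made-up data and no real driver API; it converts a scan into points in the sensor frame and finds the closest return:

```python
# Hypothetical sketch of how a 2D LiDAR scan is represented and queried.
# Note that anything below or above the single scan plane – e.g. a
# low-lying pallet fork – simply never appears in the data.
import math

def scan_to_points(ranges, angle_min, angle_step):
    """Convert range readings into (x, y) points in the sensor frame."""
    return [
        (r * math.cos(angle_min + i * angle_step),
         r * math.sin(angle_min + i * angle_step))
        for i, r in enumerate(ranges)
        if math.isfinite(r)
    ]

if __name__ == "__main__":
    # Fake 180-degree scan: 5 m of free space with one close return.
    ranges = [5.0] * 181
    ranges[90] = 0.8                          # obstacle straight ahead
    points = scan_to_points(ranges, -math.pi / 2, math.pi / 180)
    print("closest return:", min(math.hypot(x, y) for x, y in points), "m")
```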

Autonomous = requiring no direct human operator or guidance system, self-driving.

Collaborative = designed to navigate and work safely with and around people as well as fixed and mobile equipment.

Engine = an umbrella term for a complex component of software. A robot’s autonomy technology consists of several ‘engines’ working together.

Mobile = designed to move around facilities; not stationary.

Semantic Understanding = the capability of an AI-powered robot not only to perceive the objects around it but also to recognize and understand what those objects are. The process is similar to how humans see and identify objects. A robot equipped with AI- and deep learning-based semantic understanding can recognize people, pallets (empty or loaded), forklifts, carts, load types, bar codes – anything the robot needs to know for safe operation.

Semantic understanding is achieved by exposing the autonomy technology to large datasets comprising tens of thousands of images. The objects in the pictures are painstakingly outlined using specialized software (or, if you’re lucky, AI), and each object is assigned a name. Broad exposure to a variety of objects within a class equips robots to extrapolate that knowledge to new settings.

Picture: an example of semantic understanding – object classes are consistently color-coded
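
For a concrete, deliberately toy picture of what such output looks like, the sketch below colorizes a per-pixel class map with a fixed palette. The class names and colors are invented for illustration, not taken from any real model or dataset:

```python
# Illustrative only: the output of a semantic-segmentation model is commonly
# a per-pixel map of class IDs. Consistent class colors make it visual.
import numpy as np

CLASSES = {0: "floor", 1: "person", 2: "pallet", 3: "forklift"}   # made-up labels
PALETTE = {0: (80, 80, 80), 1: (255, 0, 0), 2: (0, 160, 255), 3: (255, 200, 0)}

def colorize(class_map: np.ndarray) -> np.ndarray:
    """Map each pixel's class ID to its fixed color."""
    rgb = np.zeros((*class_map.shape, 3), dtype=np.uint8)
    for class_id, color in PALETTE.items():
        rgb[class_map == class_id] = color
    return rgb

if __name__ == "__main__":
    # A toy 4x6 "image" already segmented into class IDs.
    seg = np.array([[0, 0, 2, 2, 0, 0],
                    [0, 1, 2, 2, 0, 3],
                    [0, 1, 0, 0, 3, 3],
                    [0, 0, 0, 0, 3, 3]])
    print(colorize(seg)[1, 1], "=", CLASSES[seg[1, 1]])   # person pixel -> red
```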

SLAM (engine) = Simultaneous Localization and Mapping refers to the complex capability of a robot to build (or update) a map while at the same time keeping track of its own location on the map.
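
A full SLAM engine estimates the map and the robot's pose jointly, which is well beyond a snippet. The sketch below shows only the mapping half: updating a 2D occupancy grid from a single range reading, assuming the pose estimate already exists. All parameter values are illustrative:

```python
# Mapping half of SLAM (pose assumed known): integrate one range reading
# into a log-odds occupancy grid. Parameter values are made up.
import math
import numpy as np

grid = np.zeros((50, 50))                 # log-odds occupancy, 0 = unknown
CELL = 0.1                                # metres per cell
L_FREE, L_OCC = -0.4, 0.9                 # illustrative log-odds increments

def integrate_ray(pose, bearing, rng):
    """Mark cells along the beam as free and the endpoint as occupied."""
    x, y, theta = pose
    steps = int(rng / CELL)
    for i in range(steps + 1):
        px = x + i * CELL * math.cos(theta + bearing)
        py = y + i * CELL * math.sin(theta + bearing)
        cx, cy = int(px / CELL), int(py / CELL)
        if 0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]:
            grid[cx, cy] += L_OCC if i == steps else L_FREE

if __name__ == "__main__":
    integrate_ray(pose=(2.5, 2.5, 0.0), bearing=0.0, rng=1.5)
    print("occupied cells:", int((grid > 0.5).sum()))
```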

Traversability Map = a visual representation of the level of confidence that the robot has in its ability to pass safely through an area. It answers the question: can this floor space be traversed, given factors such as the presence of obstacles, holes, etc.? Fully autonomous robots construct this map from sensor inputs.
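
As a data structure, a traversability map can be pictured as a grid of confidence scores fused from several per-cell factors. The two factors below (obstacle and hole probabilities) and the fusion rule are invented purely for illustration:

```python
# Hypothetical traversability map: per-cell confidence in [0, 1] fused
# from invented obstacle and hole probabilities.
import numpy as np

def traversability(obstacle_prob: np.ndarray, hole_prob: np.ndarray) -> np.ndarray:
    """Confidence that each floor cell can be crossed safely."""
    # A cell is traversable only if it is neither blocked nor a drop-off.
    return (1.0 - obstacle_prob) * (1.0 - hole_prob)

if __name__ == "__main__":
    obstacles = np.array([[0.0, 0.9], [0.1, 0.0]])
    holes = np.array([[0.0, 0.0], [0.8, 0.0]])
    t = traversability(obstacles, holes)
    print(t)                                  # low values = do not drive there
    print("safe cells:", int((t > 0.5).sum()))
```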

Visual Perception (in robots) = perception based on sensor inputs that arrive from cameras rather than LiDAR. When used for navigation, vision-based robots not only perform better than LiDAR-based robots but do so across a more extensive range of use cases. Vision-based robots can operate in unstructured, highly dynamic environments (bustling areas with few fixed points). They can detect low-lying and negative obstacles better than LiDAR. And, unlike LiDAR, cameras provide rich data that can be used for a variety of ancillary purposes (such as security).
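
To illustrate why cameras catch what a planar LiDAR misses: a depth camera yields per-pixel heights above the floor, so both small objects below the scan plane and drop-offs show up. The numbers below, including the assumed LiDAR mounting height, are made up:

```python
# Hypothetical sketch: classify per-cell floor heights from a depth camera.
# A horizontal LiDAR plane at the assumed mounting height would intersect
# neither the low object nor the hole in this example.
import numpy as np

LIDAR_PLANE_HEIGHT = 0.20                     # assumed mounting height, metres

def classify_heights(height_map: np.ndarray):
    low = (height_map > 0.02) & (height_map < LIDAR_PLANE_HEIGHT)   # below scan plane
    negative = height_map < -0.02                                   # holes / drop-offs
    return low, negative

if __name__ == "__main__":
    heights = np.array([[0.0, 0.10, 0.0],     # a 10 cm object, invisible to the plane
                        [0.0, 0.0, -0.15]])   # a 15 cm drop-off
    low, neg = classify_heights(heights)
    print("low obstacles:", int(low.sum()), "negative obstacles:", int(neg.sum()))
```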

Automated Storage and Retrieval System (AS/RS) = an automated storage system, off-limits to humans, in which goods are placed into and retrieved from storage by a centrally run, computer-operated system. Can use Automated Guided Vehicles. Highly efficient, but also highly capital-intensive and susceptible to single-point system failure.

3D modelling = Voxel grid occupancy maps are basic maps that are effective for navigation purposes. This approach maps a space using color-coded 3D cubes: color corresponds to height, and the volume of space occupied by undifferentiated objects is shown as stacked voxels (a portmanteau of volumetric and pixels).

Picture: examples of voxel grid occupancy maps
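
A toy version of the data structure behind such maps – occupied voxels keyed by integer indices and colored by height – might look like the following. This is illustrative only; production engines use far more compact representations:

```python
# Toy voxel grid occupancy map: points fall into cubes, and each occupied
# cube's color is derived from its height, as described above.
VOXEL = 0.1                                   # metres per voxel edge

def voxelize(points, voxel=VOXEL):
    """Map 3D points to the set of occupied (i, j, k) voxel indices."""
    return {(int(x // voxel), int(y // voxel), int(z // voxel))
            for x, y, z in points}

def height_color(k, k_max):
    """Simple blue-to-red ramp by voxel height."""
    t = k / max(k_max, 1)
    return (int(255 * t), 0, int(255 * (1 - t)))   # (R, G, B)

if __name__ == "__main__":
    cloud = [(0.05, 0.05, z) for z in (0.05, 0.15, 0.25)]   # a small stack
    occupied = voxelize(cloud)
    k_max = max(k for _, _, k in occupied)
    for v in sorted(occupied):
        print(v, height_color(v[2], k_max))
```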

Dense Scene Reconstruction fuses color information into the map. Such a map is a great base for layering semantic information (see above).
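
Extending the toy voxel example above, color fusion can be pictured as each voxel keeping a running average of the RGB samples that land in it. Again, this is a hypothetical sketch, not any particular engine's format:

```python
# Hypothetical color fusion: average the RGB samples that fall into each voxel.
from collections import defaultdict

VOXEL = 0.1                                   # metres per voxel edge

def fuse(points_rgb):
    """points_rgb: iterable of (x, y, z, (r, g, b)). Returns voxel -> mean RGB."""
    acc = defaultdict(lambda: [0, 0, 0, 0])   # r, g, b sums plus sample count
    for x, y, z, (r, g, b) in points_rgb:
        key = (int(x // VOXEL), int(y // VOXEL), int(z // VOXEL))
        s = acc[key]
        s[0] += r; s[1] += g; s[2] += b; s[3] += 1
    return {k: (s[0] // s[3], s[1] // s[3], s[2] // s[3]) for k, s in acc.items()}

if __name__ == "__main__":
    samples = [(0.05, 0.05, 0.05, (200, 40, 40)),
               (0.06, 0.04, 0.05, (180, 60, 40))]
    print(fuse(samples))                      # one voxel with the averaged color
```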