An xR system may also utilize a variety of technologies for capturing detail of a location, determining viewing angle and direction, and establishing proximity. These include techniques for creating a 3D scan (e.g., an RGB camera paired with a depth camera, a single RGB camera with computer vision, visual-inertial odometry, or a pre-existing online database of 3D scan data), as well as digital photogrammetry, LiDAR, infrared or structured-light 3D scanning, laser scanning, and visual and audio positioning methods (e.g., echolocation).
For example, a person may walk into a rented space and open an app that lets them navigate the apartment. They may scan their phone around the apartment to find AR icons near the exact positions of the router, the thermostat, or extra towels, and may even type in “forks” to search for the drawer containing cutlery. To support these features, the device may need to know its exact position and orientation within the house, down to the centimeter. This may be possible using a simultaneous localization and mapping (SLAM) system that can parse a visual scene and track position with high accuracy. In some cases, these systems rely on custom marker images or QR codes, which provide highly conspicuous visual features that help a device measure its position relative to the marker.
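The marker-based approach above can be illustrated with a minimal sketch of the pinhole camera model: given a fiducial marker of known physical size, the ratio between its real width and its apparent width in pixels yields the distance to the marker. The function name and all numeric values (focal length, marker size) are illustrative assumptions, not values from any real device; production systems would recover full 6-DoF pose, not just distance.

```python
# Hypothetical sketch: estimating distance to a printed marker
# (e.g. a QR code) of known physical size via the pinhole model.

def marker_distance(focal_px: float, marker_size_m: float, marker_px: float) -> float:
    """Distance to a marker via the pinhole relation: Z = f * X / x,
    where f is the focal length in pixels, X the real marker width in
    meters, and x the marker's apparent width in pixels."""
    return focal_px * marker_size_m / marker_px

# Assumed camera with a 1000 px focal length; a 0.10 m wide marker
# that appears 50 px wide in the image is about 2 m away.
d = marker_distance(focal_px=1000.0, marker_size_m=0.10, marker_px=50.0)
print(round(d, 2))  # 2.0
```

Measuring distance to several markers at known positions (or the full pose of one marker) then lets the device triangulate its own position in the room.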
Some devices may utilize 3D depth sensors and cameras to enable marker-less mapping and localization, with a 3D point-cloud map serving as the external reference. In some cases, semantic mapping may produce maps that not only capture geometry but also label and dynamically track the objects in the world around them.
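A semantic map of this kind can be pictured as a point cloud whose points carry object labels, so a query position resolves to the nearest labeled object. The toy map below is a hedged sketch under that assumption; the coordinates and labels are invented, and real systems use dense clouds, feature descriptors, and spatial indexes rather than a three-entry dictionary.

```python
import math

# Hypothetical semantic point-cloud map: object label -> (x, y, z) in
# meters, in an assumed apartment coordinate frame.
semantic_map = {
    "router":     (0.5, 2.0, 1.2),
    "thermostat": (3.1, 0.4, 1.5),
    "cutlery":    (4.0, 1.0, 0.9),
}

def nearest_label(query, cloud):
    """Return the label of the mapped point closest to the query position."""
    return min(cloud.items(), key=lambda item: math.dist(query, item[1]))[0]

# A device localized near (3.0, 0.5, 1.4) resolves to the thermostat.
print(nearest_label((3.0, 0.5, 1.4), semantic_map))  # thermostat
```

The same lookup run in reverse (label to position) is what a text search such as “forks” would use to place an AR icon over the cutlery drawer.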