3D camera ranging principle
A 3D camera differs from a traditional 2D camera, which can only acquire planar images, in that it also captures depth data. By combining the planar coordinates (x, y) of each pixel with the depth z (the distance from the camera to the corresponding point), the three-dimensional coordinates of every point in the image can be recovered, enabling tasks such as restoring the real scene and scene reconstruction.
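As a concrete illustration, the combination of pixel coordinates and depth can be turned into 3D points with the standard pinhole camera model. The sketch below assumes hypothetical camera intrinsics (focal lengths fx, fy and principal point cx, cy); real values come from calibrating the specific camera.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Convert a pixel (u, v) with depth z (meters) into 3D coordinates
    in the camera frame using the pinhole model. fx, fy are focal
    lengths in pixels; (cx, cy) is the principal point."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical intrinsics for a 640x480 depth sensor (illustration only).
fx = fy = 525.0
cx, cy = 319.5, 239.5

# A pixel near the image center at 1.5 m depth maps to a point
# almost directly on the camera's optical axis.
point = backproject(320, 240, 1.5, fx, fy, cx, cy)
```

Applying this to every pixel of a depth image produces a point cloud, which is the usual starting point for scene reconstruction.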
Structured light ranging method: a beam of invisible infrared light of a specific wavelength is projected onto the object, and the object's position and depth information are then recovered from the distortion of the returned light pattern. Structured light is commonly divided into three categories by pattern: striped structured light, coded structured light, and speckle structured light.
Speckle refers to the diffraction pattern formed when a laser illuminates a rough surface. These patterns are highly random and change appearance with distance. Based on a pre-calibration (a stored correspondence between speckle images and depth data), the depth value for an observed speckle image can be found by feature matching against the calibrated references.
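The matching step can be sketched with a simple correlation search: compare an observed speckle patch against reference patches recorded at known depths and pick the best match. This is a toy sketch with synthetic random patterns standing in for real calibration data; production systems match many small patches per image and interpolate sub-step depths.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_depth(patch, calibrated):
    """Return the calibrated depth whose reference speckle patch best
    matches the observed patch. `calibrated` maps depth (m) -> patch."""
    return max(calibrated, key=lambda d: ncc(patch, calibrated[d]))

# Toy calibration: random patterns stand in for speckle images
# recorded at known depths during pre-calibration.
rng = np.random.default_rng(0)
calibrated = {d: rng.random((16, 16)) for d in (0.5, 1.0, 1.5)}

# An observed patch is the 1.0 m reference plus a little sensor noise;
# correlation matching recovers the correct depth.
observed = calibrated[1.0] + 0.05 * rng.random((16, 16))
depth = match_depth(observed, calibrated)
```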
The light time-of-flight (ToF) method, as the name suggests, computes the depth of an object from the flight time of an emitted light pulse. Because light travels extremely fast, directly timing the flight is impractical; instead, the flight time is usually recovered by measuring the phase shift of a modulated light signal.
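The phase-shift relation can be written down directly: for light amplitude-modulated at frequency f, a measured phase shift Δφ over the round trip gives depth d = c·Δφ/(4πf). The 20 MHz modulation frequency below is an assumed example value, not a property of any particular sensor.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Depth from the phase shift of an amplitude-modulated light signal.
    The light travels to the object and back, hence the factor of 2 in
    d = c * dphi / (4 * pi * f)."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def max_unambiguous_range(mod_freq_hz):
    """Phase wraps at 2*pi, so depth is ambiguous beyond c / (2 f)."""
    return C / (2 * mod_freq_hz)

# Example: 20 MHz modulation, quarter-cycle phase shift (pi/2 rad).
d = tof_depth(math.pi / 2, 20e6)   # ≈ 1.87 m
r = max_unambiguous_range(20e6)    # ≈ 7.49 m
```

The unambiguous-range limit is why phase-based ToF cameras trade off precision (higher modulation frequency) against maximum range, or combine several modulation frequencies to disambiguate.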