Time-of-Flight (ToF) Camera (Annotated Wikipedia Translation)


Wikipedia has a good overview of ToF cameras. Since many readers cannot access Wikipedia, this post presents that article, with some personal annotations added.

Introduction

A time-of-flight camera (ToF camera) is a range imaging camera system that employs time-of-flight techniques to resolve distance between the camera and the subject for each point of the image, by measuring the round trip time of an artificial light signal provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems.[1]
 
Time-of-flight camera products for civil applications began to emerge around 2000,[2] as semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers. The distance resolution is about 1 cm. The spatial resolution of time-of-flight cameras is generally low compared to standard 2D video cameras, with most commercially available devices at 320 × 240 pixels or less as of 2011.[3][4][5][6] Compared to other 3D laser scanning methods for capturing 3D images, TOF cameras operate more quickly, providing up to 160 captures per second.[7]

[Translator's note: a ToF camera needs no scanning step; a single emission covers the whole scene at once and acquires range data over the full field of view.]

[Translator's note: the characteristic trade-off of ToF cameras is a high frame rate at a low spatial resolution.]

Types of devices

Several different technologies for time-of-flight cameras have been developed.

RF-modulated light sources with phase detectors

Photonic Mixer Devices (PMD),[8] the Swiss Ranger, and CanestaVision[9] work by modulating the outgoing beam with an RF carrier, then measuring the phase shift of that carrier on the receiver side. This approach has a modular error challenge: measured ranges are modulo the RF carrier wavelength. The Swiss Ranger is a compact, short-range device, with ranges of 5 or 10 meters and a resolution of 176 × 144 pixels. With phase unwrapping algorithms, the maximum uniqueness range can be increased. The PMD can provide ranges up to 60 m. Illumination is pulsed LEDs rather than a laser.[10] CanestaVision developer Canesta was purchased by Microsoft in 2010. The Kinect2 for Xbox One was based on ToF technology from Canesta.
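To make the phase measurement and its modulo ambiguity concrete, here is a minimal Python sketch (illustrative names of my own, not a vendor API), assuming the phase shift has already been demodulated from the received carrier:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance from the measured phase shift of an RF-modulated carrier.

    The light travels out and back, so a full 2*pi phase wrap corresponds
    to HALF the modulation wavelength: d = c * phi / (4 * pi * f).
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum unique range before the measurement wraps: c / (2 * f)."""
    return C / (2.0 * mod_freq_hz)

print(f"{unambiguous_range(20e6):.2f} m")             # ~7.49 m at 20 MHz (assumed frequency)
print(f"{cw_tof_distance(math.pi / 2, 20e6):.2f} m")  # ~1.87 m
```

At an assumed 20 MHz modulation the measurement wraps every ~7.5 m, the same order as the Swiss Ranger's quoted 5 m and 10 m ranges; combining measurements at two modulation frequencies (phase unwrapping) extends the unique range.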

Range gated imagers

These devices have a built-in shutter in the image sensor that opens and closes at the same rate as the light pulses are sent out. Because part of every returning pulse is blocked by the shutter according to its time of arrival, the amount of light received relates to the distance the pulse has traveled. For an ideal camera, the distance can be calculated as z = R · (S2 − S1) / (2 · (S1 + S2)) + R/2, where R is the camera range, determined by the round trip of the light pulse, S1 the amount of the light pulse that is received, and S2 the amount of the light pulse that is blocked.[11][12]
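As a sanity check on the ideal-camera equation above, a minimal sketch (function and variable names are mine):

```python
def range_gated_depth(s1: float, s2: float, camera_range: float) -> float:
    """Ideal range-gated imager: z = R*(S2 - S1) / (2*(S1 + S2)) + R/2.

    s1: part of the returning pulse that passes the shutter
    s2: part of the returning pulse that is blocked
    camera_range: R, set by the round trip of the light pulse
    """
    r = camera_range
    return r * (s2 - s1) / (2.0 * (s1 + s2)) + r / 2.0

# Limiting cases: a fully blocked pulse sits at the far end of the range,
# a fully passed pulse at the near end.
assert range_gated_depth(0.0, 1.0, 10.0) == 10.0
assert range_gated_depth(1.0, 0.0, 10.0) == 0.0
```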
 
The ZCam by 3DV Systems[1] is a range-gated system. Microsoft purchased 3DV in 2009. Microsoft's second-generation Kinect sensor was developed using knowledge gained from Canesta and 3DV Systems.[13]
 
Similar principles are used in the ToF camera line developed by the Fraunhofer Institute of Microelectronic Circuits and Systems and TriDiCam. These cameras employ photodetectors with a fast electronic shutter.
 
The depth resolution of ToF cameras can be improved with ultra-fast gating intensified CCD cameras. These cameras provide gating times down to 200 ps and enable ToF setups with sub-millimeter depth resolution.[14]
 
Range gated imagers can also be used in 2D imaging to suppress anything outside a specified distance range, such as to see through fog. A pulsed laser provides illumination, and an optical gate allows light to reach the imager only during the desired time period.[15]

Direct Time-of-Flight imagers

These devices measure the direct time-of-flight required for a single laser pulse to leave the camera and reflect back onto the focal plane array. Also known as "trigger mode", this methodology captures complete spatial and temporal data, recording full 3D scenes with a single laser pulse. This allows rapid acquisition and rapid real-time processing of scene information. For time-sensitive autonomous operations, this approach has been demonstrated for autonomous space testing[16] and operations such as the OSIRIS-REx Bennu asteroid sample-return mission[17] and autonomous helicopter landing.[18][19]
 
Advanced Scientific Concepts, Inc. provides application-specific (e.g. aerial, automotive, space) Direct TOF vision systems[20] known as 3D Flash LIDAR cameras. Their approach utilizes InGaAs Avalanche Photo Diode (APD) or PIN photodetector arrays capable of imaging the laser pulse at wavelengths from 980 nm to 1600 nm.


Components

A time-of-flight camera consists of the following components:

Illumination unit: It illuminates the scene. For RF-modulated light sources with phase detector imagers, the light has to be modulated at high speeds of up to 100 MHz, so only LEDs or laser diodes are feasible. For Direct TOF imagers, a single pulse per frame (e.g. at 30 Hz) is used. The illumination normally uses infrared light to make it unobtrusive.
 
Optics: A lens gathers the reflected light and images the environment onto the image sensor (focal plane array). An optical band-pass filter only passes the light with the same wavelength as the illumination unit. This helps suppress non-pertinent light and reduce noise.
 
Image sensor: This is the heart of the TOF camera. Each pixel measures the time the light has taken to travel from the illumination unit (laser or LED) to the object and back to the focal plane array. Several different approaches are used for timing; see Types of devices above.
 
Driver electronics: Both the illumination unit and the image sensor have to be controlled by high-speed signals and synchronized. These signals have to be very accurate to obtain a high resolution. For example, if the signals between the illumination unit and the sensor shift by only 10 picoseconds, the distance changes by 1.5 mm. For comparison: current CPUs reach frequencies of up to 3 GHz, corresponding to clock cycles of about 300 ps; the corresponding 'resolution' is only 45 mm. (A back-of-the-envelope sketch of this timing budget follows the list.)
 
Computation/Interface: The distance is calculated directly in the camera. To obtain good performance, some calibration data is also used. The camera then provides a distance image over some interface, for example USB or Ethernet.
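Here is the back-of-the-envelope sketch of the timing budget from the driver-electronics item above (illustrative names of my own): a timing shift Δt translates to a distance shift Δd = c · Δt / 2, the factor of 2 accounting for the round trip.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_error(delta_t_s: float) -> float:
    """A timing shift of delta_t between illumination and sensor moves the
    apparent distance by c * delta_t / 2 (half, because of the round trip)."""
    return C * delta_t_s / 2.0

print(f"{distance_error(10e-12) * 1e3:.2f} mm")   # 10 ps  -> ~1.5 mm
print(f"{distance_error(300e-12) * 1e3:.1f} mm")  # 300 ps (one 3 GHz cycle) -> ~45 mm
```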


Principle

Figure: Principle of operation of a time-of-flight camera. In the pulsed method (1), the distance is d = (c · t / 2) · q2 / (q1 + q2), where c is the speed of light, t is the length of the pulse, q1 is the accumulated charge in the pixel when light is emitted and q2 is the accumulated charge when it is not. In the continuous-wave method (2), d = (c · t / 4π) · arctan((q3 − q4) / (q1 − q2)), where t is the modulation period.[21]
Figure: Diagrams illustrating the principle of a time-of-flight camera with analog timing.
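Both caption formulas, transcribed into a short sketch (names are mine; for the continuous-wave case, t is taken to be the modulation period, per the reconstruction above):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulsed_distance(q1: float, q2: float, pulse_len_s: float) -> float:
    """Pulsed method (1): d = (c*t/2) * q2 / (q1 + q2)."""
    return (C * pulse_len_s / 2.0) * q2 / (q1 + q2)

def cw_distance(q1: float, q2: float, q3: float, q4: float, period_s: float) -> float:
    """Continuous-wave method (2): d = (c*t / (4*pi)) * arctan((q3-q4)/(q1-q2)).

    atan2 recovers the quadrant; the modulo wraps the phase into [0, 2*pi).
    """
    phase = math.atan2(q3 - q4, q1 - q2) % (2.0 * math.pi)
    return C * period_s / (4.0 * math.pi) * phase

# Matches the worked example later in the text: taps 0.66 and 0.33, 50 ns pulse.
print(f"{pulsed_distance(0.66, 0.33, 50e-9):.2f} m")  # ~2.50 m
```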

The simplest version of a time-of-flight camera uses light pulses or a single light pulse. The illumination is switched on for a very short time; the resulting light pulse illuminates the scene and is reflected by the objects in the field of view. The camera lens gathers the reflected light and images it onto the sensor or focal plane array. Depending upon the distance, the incoming light experiences a delay. As light has a speed of approximately c = 300,000,000 meters per second, this delay is very short: an object 2.5 m away will delay the light by:[22]

t = 2 · 2.5 m / c ≈ 16.7 ns
For amplitude modulated arrays, the pulse width of the illumination determines the maximum range the camera can handle. With a pulse width of e.g. 50 ns, the range is limited to

d_max = c · t / 2 = 300,000,000 m/s · 50 ns / 2 = 7.5 m
These short times show that the illumination unit is a critical part of the system. Only with special LEDs or lasers is it possible to generate such short pulses.

Each pixel consists of a photosensitive element (e.g. a photodiode) that converts the incoming light into a current. In analog timing imagers, fast switches connected to the photodiode direct the current to one of two (or several) memory elements (e.g. capacitors) that act as summation elements. In digital timing imagers, a time counter, which can run at several gigahertz, is connected to each photodetector pixel and stops counting when light is sensed.

In the diagram of an amplitude modulated array analog timer, the pixel uses two switches (G1 and G2) and two memory elements (S1 and S2). The switches are controlled by a pulse with the same length as the light pulse, where the control signal of switch G2 is delayed by exactly the pulse width. Depending on the delay, only part of the light pulse is sampled through G1 into S1; the other part is stored in S2. Depending on the distance, the ratio between S1 and S2 changes as depicted in the drawing.[9] Because only small amounts of light hit the sensor within 50 ns, not one but several thousand pulses are sent out (repetition rate t_R) and gathered, thus increasing the signal-to-noise ratio.

After the exposure, the pixel is read out and the following stages measure the signals S1 and S2. As the length of the light pulse is defined, the distance can be calculated with the formula:

d = (c · t / 2) · S2 / (S1 + S2)
In the example, the signals have the following values: S1 = 0.66 and S2 = 0.33. The distance is therefore:

d = 7.5 m · 0.33 / (0.66 + 0.33) = 2.5 m
In the presence of background light, the memory elements receive an additional part of the signal. This would disturb the distance measurement. To eliminate the background part of the signal, the whole measurement can be performed a second time with the illumination switched off. If the objects are further away than the distance range, the result is also wrong. Here, a second measurement with the control signals delayed by an additional pulse width helps to suppress such objects. Other systems work with a sinusoidally modulated light source instead of the pulse source.
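A minimal sketch of the background correction just described, assuming the pulsed two-tap scheme from the worked example (names are mine, not from a specific sensor API):

```python
C = 299_792_458.0  # speed of light, m/s

def corrected_distance(s1, s2, b1, b2, pulse_len_s):
    """s1, s2: tap sums with illumination on; b1, b2: the same taps from a
    second exposure with illumination off. Subtract, then apply the
    pulsed formula d = (c*t/2) * S2 / (S1 + S2)."""
    s1c, s2c = s1 - b1, s2 - b2
    return (C * pulse_len_s / 2.0) * s2c / (s1c + s2c)

# With an assumed background of 0.10 in each tap, the 2.5 m example is recovered:
print(f"{corrected_distance(0.76, 0.43, 0.10, 0.10, 50e-9):.2f} m")
```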

For direct TOF imagers, such as 3D Flash LIDAR, a single short pulse of 5 to 10 ns is emitted by the laser. The T-zero event (the time the pulse leaves the camera) is established by capturing the pulse directly and routing this timing onto the focal plane array. T-zero is used to compare the return time of the returning reflected pulse on the various pixels of the focal plane array. By comparing T-zero and the captured returned pulse and measuring the time difference, each pixel accurately outputs a direct time-of-flight measurement. The round trip of a single pulse for 100 meters is 660 ns. With a 10 ns pulse, the scene is illuminated and the range and intensity captured in less than 1 microsecond.
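The per-pixel arithmetic for a direct-ToF measurement is a single multiplication; this sketch (illustrative names) reproduces the ~660 ns round trip quoted above for 100 m:

```python
C = 299_792_458.0  # speed of light, m/s

def direct_tof_distance(t_return_s: float, t_zero_s: float) -> float:
    """Distance from one pulse: c times half the measured round-trip time."""
    return C * (t_return_s - t_zero_s) / 2.0

round_trip = 2 * 100.0 / C                             # ~667 ns; the text rounds to ~660 ns
print(f"{round_trip * 1e9:.0f} ns")
print(f"{direct_tof_distance(round_trip, 0.0):.1f} m") # 100.0 m
```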


Advantages

Simplicity
In contrast to stereo vision or triangulation systems, the whole system is very compact: the illumination is placed just next to the lens, whereas the other systems need a certain minimum baseline. In contrast to laser scanning systems, no mechanical moving parts are needed.

Efficient distance algorithm
It is a direct process to extract the distance information out of the output signals of the TOF sensor. As a result, this task uses only a small amount of processing power, again in contrast to stereo vision, where complex correlation algorithms are implemented. After the distance data has been extracted, object detection, for example, is also a straightforward process to carry out because the algorithms are not disturbed by patterns on the object.

Speed
Time-of-flight cameras are able to measure the distances within a complete scene with a single shot. As the cameras reach up to 160 frames per second, they are ideally suited to be used in real-time applications.


Disadvantages

Background light
When using CMOS or other integrating detectors or sensors that use visible or near-infrared light (400 nm to 700 nm), although most of the background light coming from artificial lighting or the sun is suppressed, the pixel still has to provide a high dynamic range. The background light also generates electrons, which have to be stored. For example, the illumination units in many of today's TOF cameras can provide an illumination level of about 1 watt. The Sun has an illumination power of about 1050 watts per square meter, of which 50 watts remain after the optical band-pass filter. Therefore, if the illuminated scene has a size of 1 square meter, the light from the sun is 50 times stronger than the modulated signal. For non-integrating TOF sensors that do not integrate light over time and use near-infrared detectors (InGaAs) to capture the short laser pulse, direct viewing of the sun is a non-issue, because the image is not integrated over time but rather captured within a short acquisition cycle, typically less than 1 microsecond. Such TOF sensors are used in space applications[17] and are under consideration for automotive applications.[23]

Interference
In certain types of TOF devices (but not all of them), if several time-of-flight cameras are running at the same time, the TOF cameras may disturb each other’s measurements. There exist several possibilities for dealing with this problem:

Time multiplexing: A control system starts the measurement of the individual cameras consecutively, so that only one illumination unit is active at a time.
Different modulation frequencies: If the cameras modulate their light with different modulation frequencies, their light is collected in the other systems only as background illumination but does not disturb the distance measurement.

For Direct TOF type cameras that use a single laser pulse for illumination, because the single laser pulse is short (e.g. 10 nanoseconds), the round trip TOF to and from the objects in the field of view is correspondingly short (e.g. 100 meters = 660 ns TOF round trip). For an imager capturing at 30 Hz, the probability of an interfering interaction is the time that the camera acquisition gate is open divided by the time between laser pulses or approximately 1 in 50,000 (0.66 μs divided by 33 ms).
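The 1-in-50,000 figure is a duty-cycle ratio; a short sketch makes the arithmetic explicit:

```python
def interference_probability(gate_open_s: float, pulse_period_s: float) -> float:
    """Chance another camera's pulse lands in our open gate: the fraction
    of time the acquisition gate is open."""
    return gate_open_s / pulse_period_s

p = interference_probability(0.66e-6, 33e-3)  # 0.66 us gate, 30 Hz framing
print(f"1 in {1 / p:,.0f}")                   # 1 in 50,000
```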

Multiple reflections
In contrast to laser scanning systems, where a single point is illuminated, time-of-flight cameras illuminate a whole scene. For a phase-difference device (amplitude modulated array), the light may reach the objects along several paths due to multiple reflections. Therefore, the measured distance may be greater than the true distance. Direct TOF imagers are vulnerable if the light is reflected off a specular surface. Published papers are available that outline the strengths and weaknesses of the various TOF devices and approaches.[24]


Applications

Automotive applications
Time-of-flight cameras are used in assistance and safety functions for advanced automotive applications such as active pedestrian safety, pre-crash detection and indoor applications like out-of-position (OOP) detection.[25][26]

Human-machine interfaces and gaming
As time-of-flight cameras provide distance images in real time, it is easy to track movements of humans. This allows new interactions with consumer devices such as televisions. Another topic is the use of this type of camera to interact with games on video game consoles.[27] The second-generation Kinect sensor originally included with the Xbox One console used a time-of-flight camera for its range imaging,[28] enabling natural user interfaces and gaming applications using computer vision and gesture recognition techniques. Creative and Intel also provide a similar type of interactive gesture time-of-flight camera for gaming, the Senz3D, based on the DepthSense 325 camera of SoftKinetic.[29] Infineon and PMD Technologies enable tiny integrated 3D depth cameras for close-range gesture control of consumer devices like all-in-one PCs and laptops (pico flexx and pico monstar cameras).[30]
Smartphone cameras
As of 2019, several smartphones include time-of-flight cameras. These are mainly used to improve the quality of photos by providing the camera software with information about foreground and background.[31]
Measurement and machine vision
Other applications are measurement tasks, e.g. for the fill height in silos. In industrial machine vision, the time-of-flight camera helps to classify and locate objects for use by robots, such as items passing by on a conveyor. Door controls can distinguish easily between animals and humans reaching the door.

Robotics
Another use of these cameras is the field of robotics: mobile robots can build up a map of their surroundings very quickly, enabling them to avoid obstacles or follow a leading person. As the distance calculation is simple, only a little computational power is needed.

Earth topography
ToF cameras have been used to obtain digital elevation models of the Earth's surface topography,[32] for studies in geomorphology.

Brands

Active brands (as of 2011)
3D Flash LIDAR Cameras and Vision Systems by Advanced Scientific Concepts, Inc. for aerial, automotive and space applications

DepthSense - TOF cameras and modules, including RGB sensor and microphones by SoftKinetic

IRMA MATRIX - TOF camera, used for automatic passenger counting on mobile and stationary applications by iris-GmbH

Kinect - hands-free user interface platform by Microsoft for video game consoles and PCs, using time-of-flight cameras in its second generation of sensor devices.[28]

pmd - camera reference designs and software (pmd[vision], including TOF modules [CamBoard]) and TOF imagers (PhotonICs) by PMD Technologies

real.IZ 2+3D - High-resolution SXGA (1280×1024) TOF camera developed by startup company odos imaging, integrating conventional image capture with TOF ranging in the same sensor. Based on technology developed at Siemens.

Senz3D - TOF camera by Creative and Intel based on the DepthSense 325 camera of SoftKinetic, used for gaming.[29]

SICK - 3D industrial TOF cameras (Visionary-T) for industrial applications and software[33]

3D MLI Sensor - TOF imager, modules, cameras, and software by IEE (International Electronics & Engineering), based on modulated light intensity (MLI)

TOFCam Stanley - TOF camera by Stanley Electric

TriDiCam - TOF modules and software, the TOF imager originally developed by Fraunhofer Institute of Microelectronic Circuits and Systems, now developed by the spin out company TriDiCam

Hakvision - TOF stereo camera


Defunct brands
CanestaVision[34] - TOF modules and software by Canesta (company acquired by Microsoft in 2010)

D-IMager - TOF camera by Panasonic Electric Works

OptriCam - TOF cameras and modules by Optrima (rebranded DepthSense prior to SoftKinetic merger in 2011)

ZCam - TOF camera products by 3DV Systems, integrating full-color video with depth information (assets sold to Microsoft in 2009)

SwissRanger - an industrial TOF-only camera line originally by the Centre Suisse d’Electronique et Microtechnique, S.A. (CSEM), now developed by Mesa Imaging (Mesa Imaging acquired by Heptagon in 2014)

Fotonic - TOF cameras and software powered by Panasonic CMOS chip (Fotonic acquired by Autoliv in 2018)

Further reading

Hansard, Miles; Lee, Seungkyu; Choi, Ouk; Horaud, Radu (2012). "Time-of-flight cameras: Principles, Methods and Applications" (http://hal.inria.fr/docs/00/72/56/54/PDF/TOF.pdf) (PDF). SpringerBriefs in Computer Science. doi:10.1007/978-1-4471-4658-2. ISBN 978-1-4471-4657-5. "This book describes a variety of recent research into time-of-flight imaging: […] the underlying measurement principle […] the associated sources of error and ambiguity […] the geometric calibration of time-of-flight cameras, particularly when used in combination with ordinary color cameras […and] use time-of-flight data in conjunction with traditional stereo matching techniques. The five chapters, together, describe a complete depth and color 3D reconstruction pipeline."

References


1. Iddan, Gavriel J.; Yahav, Giora (2001-01-24). "3D imaging in the studio (and elsewhere…)" (https://web.archive.org/web/20090612071500/http://www.3dvsystems.com/technology/3D%20Imaging%20in%20the%20studio.pdf) (PDF). Proceedings of SPIE. 4298. San Jose, CA: SPIE (published 2003-04-29). p. 48. doi:10.1117/12.424913. Archived from the original on 2009-06-12. Retrieved 2009-08-17. "The [time-of-flight] camera belongs to a broader group of sensors known as scanner-less LIDAR (i.e. laser radar having no mechanical scanner); an early [1990] example is [Marion W.] Scott and his followers at Sandia."

2. "Product Evolution" (https://web.archive.org/web/20090228203547/http://www.3dvsystems.com/technology/product.html#1). 3DV Systems. Archived from the original on 2009-02-28. Retrieved 2009-02-19. "Z-Cam, the first depth video camera, was released in 2000 and was targeted primarily at broadcasting organizations."

3. Schuon, Sebastian; Theobalt, Christian; Davis, James; Thrun, Sebastian (2008-07-15). "High-quality scanning using time-of-flight depth superresolution" (http://www-cs.stanford.edu/people/theobalt/TOF_CV_Superresolution_final.pdf) (PDF). IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008. IEEE. pp. 1–7. CiteSeerX 10.1.1.420.2946. doi:10.1109/CVPRW.2008.4563171. ISBN 978-1-4244-2339-2. Retrieved 2009-07-31. "The Z-cam can measure full frame depth at video rate and at a resolution of 320×240 pixels."

4. "Canesta's latest 3D Sensor - "Cobra" … highest res CMOS 3D depth sensor in the world" (https://www.youtube.com/watch?v=5_PVx1NbUZQ) (Flash Video). Sunnyvale, California: Canesta. 2010-10-25. "Canesta "Cobra" 320 x 200 Depth Sensor, capable of 1mm depth resolution, USB powered, 30 to 100 fps […] The complete camera module is about the size of a silver dollar."

5. "SR4000 Data Sheet" (http://www.mesa-imaging.ch/dlm.php?fname=pdf/SR4000_Data_Sheet.pdf) (PDF) (Rev 2.6 ed.). Zürich, Switzerland: Mesa Imaging. August 2009: 1. Retrieved 2009-08-18. "176 x 144 pixel array (QCIF)"

6. "PMD[vision] CamCube 2.0 Datasheet" (https://web.archive.org/web/20120225210428/http://www.pmdtec.com/fileadmin/pmdtec/downloads/documentation/datasheet_camcube.pdf) (PDF) (No. 20090601 ed.). Siegen, Germany: PMD Technologies. 2009-06-01: 5. Archived from the original on 2012-02-25. Retrieved 2009-07-31. "Type of Sensor: PhotonICs PMD 41k-S (204 x 204)"

7. http://ww2.bluetechnix.com/en/products/depthsensing/list/argos/

8. Heckenkamp, Christoph: "Das magische Auge - Grundlagen der Bildverarbeitung: Das PMD Prinzip" (http://www.inspect-online.com/whitepaper/das-magische-auge). In: Inspect. Nr. 1, 2008, S. 25–28.

9. Gokturk, Salih Burak; Yalcin, Hakan; Bamji, Cyrus (24 January 2005). "A Time-Of-Flight Depth Sensor - System Description, Issues and Solutions" (https://web.archive.org/web/20070623233559/http://www.canesta.com/assets/pdf/technicalpapers/CVPR_Submission_TOF.pdf) (PDF). IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2004: 35–45. doi:10.1109/CVPR.2004.291. Archived from the original on 2007-06-23. Retrieved 2009-07-31. "The differential structure accumulates photo-generated charges in two collection nodes using two modulated gates. The gate modulation signals are synchronized with the light source, and hence depending on the phase of incoming light, one node collects more charges than the other. At the end of integration, the voltage difference between the two nodes is read out as a measure of the phase of the reflected light."

10. "Mesa Imaging - Products" (http://www.mesa-imaging.ch). August 17, 2009.

11. US patent 5081530 (https://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=US5081530), Medina, Antonio, "Three Dimensional Camera and Rangefinder", issued 1992-01-14, assigned to Medina, Antonio.

12. Medina, A.; Gayá, F.; Pozo, F. (2006). "Compact laser radar and three-dimensional camera". J. Opt. Soc. Am. A. 23 (4): 800–805. Bibcode:2006JOSAA..23..800M. doi:10.1364/JOSAA.23.000800. PMID 16604759.

13. "Kinect for Windows developer's kit slated for November, adds 'green screen' technology" (http://www.pcworld.com/article/2042958/kinect-for-windows-developers-kit-slated-for-november-adds-green-screen-technology.html). PCWorld. 2013-06-26.

14. "Submillimeter 3-D Laser Radar for Space Shuttle Tile Inspection" (http://www.stanfordcomputeroptics.com/download/Submillimeter%203-D%20Laser%20Radar%20for%20Space%20Shuttle%20Tile%20Inspection.pdf) (PDF).

15. "Sea-Lynx Gated Camera - active laser camera system" (https://web.archive.org/web/20100813110630/http://www.laseroptronix.se/gated/sealynx.pdf) (PDF). Archived from the original on 2010-08-13.

16. Reisse, Robert; Amzajerdian, Farzin; Bulyshev, Alexander; Roback, Vincent (4 June 2013). "Helicopter flight test of 3D imaging flash LIDAR technology for safe, autonomous, and precise planetary landing" (https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20130013472.pdf) (PDF). Laser Radar Technology and Applications XVIII. 8731: 87310H. doi:10.1117/12.2015961. hdl:2060/20130013472.

17. "ASC's 3D Flash LIDAR camera selected for OSIRIS-REx asteroid mission" (http://www.nasaspaceflight.com/2012/05/ascs-lidar-camera-osiris-rex-asteroid-mission/). NASASpaceFlight.com. 2012-05-13.

18. http://e-vmi.com/pdf/2012_VMI_AUVSI_Report.pdf

19. "Autonomous Aerial Cargo/Utility System Program" (https://archive.today/20140406180528/http://www.onr.navy.mil/en/Science-Technology/Departments/Code-35/All-Programs/aerospace-research-351/Autonomous-Aerial-Cargo-Utility-AACUS.aspx). Office of Naval Research. Archived from the original on 2014-04-06.

20. "Products" (http://www.advancedscientificconcepts.com/products/overview.html). Advanced Scientific Concepts.

21. "Time-of-Flight Camera - An Introduction" (http://eu.mouser.com/applications/time-of-flight-robotics). Mouser Electronics.

22. "CCD/CMOS Lock-In Pixel for Range Imaging: Challenges, Limitations and State-of-the-Art" (https://pdfs.semanticscholar.org/1b8d/d5859923001073d49613e27b539ec6686463.pdf) (PDF). CSEM.

23. "Automotive" (http://www.advancedscientificconcepts.com/applications/automotive.htm). Advanced Scientific Concepts.

24. Aue, Jan; Langer, Dirk; Muller-Bessler, Bernhard; Huhnke, Burkhard (2011-06-09). "Efficient segmentation of 3D LIDAR point clouds handling partial occlusion" (https://ieeexplore.ieee.org/document/5940442/). 2011 IEEE Intelligent Vehicles Symposium (IV). Baden-Baden, Germany: IEEE. doi:10.1109/ivs.2011.5940442. ISBN 978-1-4577-0890-9.

25. Hsu, Stephen; Acharya, Sunil; Rafii, Abbas; New, Richard (25 April 2006). "Performance of a Time-of-Flight Range Camera for Intelligent Vehicle Safety Applications" (https://web.archive.org/web/20061206105733/http://www.canesta.com/assets/pdf/technicalpapers/canesta_amaa06_paper_final1.pdf) (PDF). Advanced Microsystems for Automotive Applications 2006. VDI-Buch. Springer. pp. 205–219. CiteSeerX 10.1.1.112.6869. doi:10.1007/3-540-33410-6_16. ISBN 978-3-540-33410-1. Archived from the original on 2006-12-06. Retrieved 2018-06-25.

26. Elkhalili, Omar; Schrey, Olaf M.; Ulfig, Wiebke; Brockherde, Werner; Hosticka, Bedrich J. (September 2006). "A 64x8 pixel 3-D CMOS time-of-flight image sensor for car safety applications" (http://publica.fraunhofer.de/documents/N-48683.html). European Solid State Circuits Conference 2006. pp. 568–571. doi:10.1109/ESSCIR.2006.307488. ISBN 978-1-4244-0302-8. Retrieved 2010-03-05.

27. Captain, Sean (2008-05-01). "Out of Control Gaming" (http://www.popsci.com/gear-gadgets/article/2008-05/out-control-gaming). PopSci.com. Popular Science. Retrieved 2009-06-15.

28. Rubin, Peter (2013-05-21). "Exclusive First Look at Xbox One" (https://www.wired.com/gadgetlab/2013/05/xbox-one/). Wired. Retrieved 2013-05-22.

29. Sterling, Bruce (2013-06-04). "Augmented Reality: SoftKinetic 3D depth camera and Creative Senz3D Peripheral Camera for Intel devices" (https://www.wired.com/beyond_the_beyond/2013/06/augmented-reality-softkinetic-3d-depth-camera-and-creative-senz3d-peripheral-camera-for-intel-devices/). Wired. Retrieved 2013-07-02.

30. Lai, Richard. "PMD and Infineon to enable tiny integrated 3D depth cameras (hands-on)" (https://www.engadget.com/2013/06/06/pmd-infineon-camboard-pico-s-3d-depth-camera/). Engadget. Retrieved 2013-10-09.

31. Heinzman, Andrew (2019-04-04). "What Is a Time of Flight (ToF) Camera, and Why Does My Phone Have One?" (https://www.howtogeek.com/409704/what-is-a-time-of-flight-tof-camera-and-why-does-my-phone-have-one/). How-To Geek.

32. Nitsche, M.; Turowski, J. M.; Badoux, A.; Rickenmann, D.; Kohoutek, T. K.; Pauli, M.; Kirchner, J. W. (2013). "Range imaging: A new method for high-resolution topographic measurements in small- and medium-scale field sites". Earth Surface Processes and Landforms. 38 (8): 810. Bibcode:2013ESPL...38..810N. doi:10.1002/esp.3322.

33. "SICK - Visionary-T y Visionary-B: 3D de un vistazo - Handling&Storage" (http://www.handling-storage.com/sick---visionary-t-y-visionary-b--3d-de-un-vistazo.html). www.handling-storage.com (in Spanish). Retrieved 2017-04-18.
