[PreScan] Notes on Sensors in PreScan

Reference for these notes:

  • https://blog.csdn.net/zhanshen112/article/details/88565400

PreScan currently provides three categories of sensors:

  • Idealized Sensor: sensors for theoretical study, suited to validating the logic of automated-driving algorithms in the early development phase.
  • Detailed Sensor: models of real, physically existing sensors that account for the information loss seen in actual use (for example, the radar model considers path-spreading loss, atmospheric attenuation, and target-reflection loss; the camera model considers lens distortion). Suited to robustness validation of automated-driving algorithms.
  • Ground-Truth Sensor: sensors that provide ground-truth values (mainly for vision), suited to the early development phase of algorithms.

1. Idealized Sensor

The idealized sensors mainly include:

  1. SELF sensor / GPS receiver
  2. AIR sensor (radar/lidar/ultrasonic) (placeholder, to be expanded)
  3. Antenna & DSRC transmitter/receiver (placeholder, to be expanded)
  4. Beacon / OBU (roadside beacon / on-board unit)

1.1 GPS Receiver

Function: outputs the ego vehicle's exact GPS position.
Open question: can PreScan simulate a degraded GPS-signal condition?

1.2 AIR Sensor

Function: outputs detection information for target objects.
For each detected object, the following parameters are output (a coordinate-conversion sketch follows the list):

  • Range [m]: distance from the sensor coordinate system to the detected object
  • Azimuth [deg]: azimuth angle
  • Elevation [deg]: elevation angle
  • ID [n]: ID of the detected object
  • Velocity [m/s]: longitudinal velocity of the target
  • Heading [deg]: heading of the target (N = 0 deg, E = 90 deg)
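As a hedged illustration (not taken from the PreScan documentation), the MATLAB sketch below converts one AIR detection from its Range/Azimuth/Elevation form into Cartesian sensor coordinates; the angle convention (azimuth in the sensor XY plane, elevation measured from that plane) is an assumption:

% Sketch (assumed angle convention): convert an AIR detection given as
% Range/Azimuth/Elevation into Cartesian sensor coordinates.
range     = 25.0;            % [m]   example Range output
azimuth   = deg2rad(10.0);   % [rad] from the Azimuth [deg] output
elevation = deg2rad(-2.0);   % [rad] from the Elevation [deg] output

x = range * cos(elevation) * cos(azimuth);  % forward
y = range * cos(elevation) * sin(azimuth);  % lateral
z = range * sin(elevation);                 % vertical
fprintf('Detection at [%.2f, %.2f, %.2f] m in the sensor frame\n', x, y, z);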

Three possible detection methods:

  • Bounding Box: detection on the bounding box
  • Center of Bounding Box: detection at the center of the bounding box
  • Center of Gravity: detection at the center of gravity

Note the limitations of the AIR sensor: it does not consider overlap, occlusion, or the actual shape of targets, so a detection point may fall outside the detection range. Judge its suitability case by case.

1.3 Beacon/OBU

Function: when the on-board unit (OBU) is within a beacon's detection range, V2I communication is possible; the communication is bidirectional (the beacon is attached to infrastructure, the OBU to a vehicle).
By sensing principle, there are two kinds:

  • RF (radio frequency): the OBU must lie within the beacon's beam.
  • IR (infrared): the OBU must lie within the beacon's beam, and occlusion must be taken into account.

|  | RF | IR |
| --- | --- | --- |
| Speed | fast | slower |
| Occlusion | no obstructed view (no occluded objects) | obstructed view (occluded objects) |

An RF OBU is modeled as a point; an IR OBU is modeled as a rectangular box (size editable, default 0.1 m).
Output parameters (a detection-geometry sketch follows the list):

  • Position & orientation
  • Field of view
  • Range (beacon only; defaults: RF 50 m, IR 20 m)
  • Cone angle (beacon only; default 45°)
  • Maximum number of detectable OBUs (beacon only; default 5)
  • Maximum number of detectable beacons (OBU only; default 2)
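As a rough sketch of the detection geometry (the cone test below is an assumption for illustration, not PreScan's documented detection logic), the following checks whether an OBU position lies within a beacon's range and cone angle:

% Sketch (assumed geometry): is an OBU inside a beacon's detection cone?
% range and coneAngle mirror the beacon parameters listed above (RF defaults);
% coneAngle is treated here as the half-angle of the cone (an assumption).
beaconPos = [0; 0; 5];                          % [m] beacon on infrastructure
beaconDir = [1; 0; -0.2] / norm([1; 0; -0.2]);  % unit cone axis
range     = 50;                                 % [m]  default RF range
coneAngle = deg2rad(45);                        % [rad] default cone angle

obuPos  = [30; 5; 1];           % [m] OBU position on a vehicle
v       = obuPos - beaconPos;
inRange = norm(v) <= range;
inCone  = acos(dot(v, beaconDir) / norm(v)) <= coneAngle;
detected = inRange && inCone    % true -> V2I communication possible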

2. Detailed Sensor

The detailed sensors mainly include:

  1. Camera Sensor
  2. Fisheye Camera Sensor
  3. TIS: Technology Independent Sensor
  4. Lidar Sensor
  5. Radar Sensor
  6. Ultrasonic Sensor

2.1 Camera Sensor

Function: sends image data (pixels) to Simulink so users can implement their own algorithms (e.g. lane-marking detection, object classification and detection, fusion algorithms).

Configurable items:

  • Position & orientation
  • Mono vision / stereo vision
  • Field of view
  • Resolution
  • Frame rate
  • Color / monochrome
  • Misalignment (position / orientation)
  • Drift

Camera output in Simulink:

2.2 Fisheye Camera

Function: sends image data (pixels) to Simulink so users can implement their own algorithms (e.g. obstacle detection for parking assistance).

Configurable items:

2.3 Lidar

Lidar works against many kinds of targets, including non-metallic objects, rocks, and rain.
PreScan provides Lidar models based on two operating principles:

  • pulse time-of-flight ranging
  • beam modulation telemetry

In both models, the laser scanner contains a transmitter and a receiver.
To compute a range, the power of the received signal must be sufficiently large.
Configurable items:

  • wavelength
  • beam divergence (typically 0.01°–0.08° for Lidar)
  • maximum number of output targets (at most 5)

Data output to Simulink:

| Signal name | Description |
| --- | --- |
| ActiveBeamID [-] | ID of the beam in the current simulation time step. Value is 0 when there is no detection. |
| Range [m] | Range at which the target object has been detected. |
| DopplerVelocity [m/s] | Velocity of the target point, relative to the sensor, along the beam. |
| DopplerVelocityX/Y/Z [m/s] | Velocity of the target point, relative to the sensor, along the beam, decomposed into the X, Y, Z axes of the sensor's coordinate system. |
| Theta [deg] | Azimuth angle in the sensor coordinate system at which the target is detected. |
| Phi [deg] | Elevation angle in the sensor coordinate system at which the target is detected. |
| TargetID [-] | Numerical ID of the detected target. |
| TargetTypeID [-] | Type ID of the detected object. |
| EnergyLoss [dB] | Ratio of received power to transmitted power. |
| Alpha [deg] | Azimuthal incidence angle of the Lidar beam on the target object. |
| Beta [deg] | Elevation incidence angle of the Lidar beam on the target object. |
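To make the Doppler outputs concrete, here is a minimal MATLAB sketch, assuming the beam direction is built from Theta and Phi with the usual spherical convention (the exact convention PreScan uses is an assumption here):

% Sketch (assumed angle convention): decompose the along-beam Doppler
% velocity into the sensor-frame components DopplerVelocityX/Y/Z.
dopplerVelocity = -8.3;          % [m/s] example along-beam velocity (closing)
theta = deg2rad(12);             % [rad] azimuth of the detection
phi   = deg2rad(1.5);            % [rad] elevation of the detection

beamDir = [cos(phi)*cos(theta);  % unit vector along the beam,
           cos(phi)*sin(theta);  % in sensor coordinates
           sin(phi)];
dopplerXYZ = dopplerVelocity * beamDir;
fprintf('DopplerVelocityX/Y/Z = [%.2f %.2f %.2f] m/s\n', dopplerXYZ);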

2.3.1 Appendix: Lidar Equation

<placeholder; to be expanded later>
For pulse time-of-flight ranging, let $t_r$ be the time from the transmitter emitting the signal to the receiver receiving it, and let $c$ be the speed of light (a constant in PreScan). The range $R$ and $t_r$ are then related by:

$$2R = t_r c$$

For beam modulation telemetry, the laser is modulated by a relatively low-frequency sine wave, so the measurement is made indirectly through phase. Let $f_{mod}$ be the modulation frequency and $\phi_r$ the phase difference between the transmitted and received waves; $t_r$ can then be expressed as:

$$t_r = \frac{\phi_r}{2\pi f_{mod}}$$
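A minimal numeric check of the two relations above, with illustrative values for t_r, f_mod, and phi_r:

% Minimal numeric check of the two ranging relations above.
c = 299792458;                  % [m/s] speed of light

% Pulse time-of-flight: 2R = t_r * c
t_r = 400e-9;                   % [s] example round-trip time
R_tof = c * t_r / 2;            % -> roughly 60 m

% Beam modulation telemetry: t_r = phi_r / (2*pi*f_mod), then 2R = t_r * c
f_mod = 10e6;                   % [Hz] example modulation frequency
phi_r = 0.5;                    % [rad] measured phase difference
R_mod = c * phi_r / (4*pi*f_mod);
fprintf('R_tof = %.2f m, R_mod = %.2f m\n', R_tof, R_mod);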

2.4 Radar Sensor

The Radar sensor is a more detailed version of the TIS sensor. It differs from the TIS in the following ways:

  • Additions:
    1. support for Antenna Gain Maps
    2. atmospheric attenuation modeled as a function of frequency and rainfall
    3. the scan pattern can be supplied externally (i.e. from Simulink)
    4. an improved radar model
  • Removals:
    1. the pencil-beam feature used for the Lidar has been removed
    2. layered-array scanning has been removed (it can be reproduced with multiple Radar sensors)

(to be continued)
Data output to Simulink:

| Signal name | Description |
| --- | --- |
| ActiveBeamID [-] | ID of the beam in the current simulation time step. Value is 0 when there is no detection. |
| Range [m] | Range at which the target object has been detected. |
| DopplerVelocity [m/s] | Velocity of the target point, relative to the sensor, along the line of sight between sensor and target point. |
| DopplerVelocityX/Y/Z [m/s] | Velocity of the target point, relative to the sensor, along the line of sight between sensor and target point, decomposed into the X, Y, Z axes of the sensor's coordinate system. |
| Theta [deg] | Azimuth angle in the sensor coordinate system at which the target is detected. |
| Phi [deg] | Elevation angle in the sensor coordinate system at which the target is detected. |
| TargetID [-] | Numerical ID of the detected target. |
| TargetTypeID [-] | Type ID of the detected object. |
| EnergyLoss [dB] | Ratio of received power to transmitted power. |
| Alpha [deg] | Azimuthal incidence angle of the Radar beam on the target object. |
| Beta [deg] | Elevation incidence angle of the Radar beam on the target object. |


Classification and basic properties of automotive radar:

| Type | Waveform | Frequency | Range | Horizontal FoV | Applications |
| --- | --- | --- | --- | --- | --- |
| SRR (short-range radar) | pulsed | 24 GHz | 30 m | ±65°–±80° | BSD, PA, LCA, FCW, RCW |
| MRR (mid-range radar) | CW / pulsed | 24 GHz / 76–77 GHz | 70 m | ±40°–±50° | LCA |
| LRR (long-range radar) | CW / pulsed | 76–77 GHz | 200 m | ±4°–±8° | ACC |

2.5 Ultrasonic Sensor

The reflected intensity depends on the intensity of the transmitted wave, the radiation pattern, the distance to the object, the transmissivity of the medium, and the properties of the target object.

Data output to Simulink:

| Signal name | Description |
| --- | --- |
| ObjectDetection [-] | Indicates whether an object is detected (1 if an object is detected, 0 otherwise). |
| Range [m] | Range at which the target object has been detected. |
| DopplerVelocity [m/s] | Velocity of the target point, relative to the sensor, along the line of sight between sensor and target point. |
| DopplerVelocityX/Y/Z [m/s] | Velocity of the target point, relative to the sensor, along the line of sight between sensor and target point, decomposed into the X, Y, Z axes of the sensor's coordinate system. |
| Theta [deg] | Azimuth angle in the sensor coordinate system at which the target is detected. |
| Phi [deg] | Elevation angle in the sensor coordinate system at which the target is detected. |
| TargetID [-] | Numerical ID of the detected target. |
| TargetTypeID [-] | Type ID of the detected object. |
| EnergyLoss [dB] | Ratio of received power to transmitted power, same as ΔSPL. |
| Alpha [deg] | Azimuthal incidence angle of the sound wave on the target object. |
| Beta [deg] | Elevation incidence angle of the sound wave on the target object. |

3. Ground Truth Sensor

3.1 Lane Marker Sensor

The lane marker sensor provides information about the lane markings present on the road.
Data output to Simulink:
The output is given as a bus comprising five main signals:

  • sliceCount [int]: number of look-ahead distances
  • ScanAtSensor [sub-bus: LaneMarkerSliceData]
  • ScanAtDistance1 (,2,3)

3.2 Analytical Lane Marker Sensor

The Analytical Lane Marker Sensor is a newer implementation of the lane marker sensor; it also provides information about the lane markings on the road.
The lane-marking information is given in polynomial form. Note that because the markings are fitted with polynomials, the output is, strictly speaking, not ground truth but an approximation of it.

The sensor only considers lane markings within its field of view; a single marking that crosses the field-of-view boundary is treated as separate lane markings.
Lane markings can be visualized in Simulink using the 'ALMS XY Polynomial Plot' block provided by PreScan; the block can be found via open('PreScanUsefulBlocks').
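To make the polynomial representation concrete, here is a hedged MATLAB sketch; the cubic form and coefficient ordering below are assumptions for illustration, not PreScan's exact ALMS bus layout:

% Sketch (assumed form): lane marking as y(x) = c0 + c1*x + c2*x^2 + c3*x^3
% in the sensor frame, evaluated over an assumed validity interval.
c = [0.5, 0.01, 2e-3, -1e-5];   % [c0 c1 c2 c3] illustrative coefficients
x = linspace(0, 60, 100);       % [m] distance ahead of the sensor
y = polyval(fliplr(c), x);      % polyval expects highest order first

plot(x, y); grid on;
xlabel('x [m] (ahead of sensor)'); ylabel('y [m] (lateral offset)');
title('Lane marking as a cubic polynomial (illustrative)');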

3.3 Depth Camera

The depth camera outputs depth values and is used to calibrate and validate the depth computation of stereo cameras.
Data output to Simulink:
The depth resolution is non-linear:

$$d = \frac{z^2}{z_{near} \cdot 2^{24} - z}$$
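A small MATLAB sketch evaluating this resolution formula over distance; the near-plane value zNear below is an assumed example, while the 2^24 constant comes from the formula above:

% Sketch: evaluate the non-linear depth resolution d(z) given above.
% zNear is an assumed example value; 2^24 corresponds to a 24-bit depth buffer.
zNear = 0.5;                       % [m] assumed near clipping plane
z = linspace(1, 100, 200);         % [m] depths at which to evaluate
d = z.^2 ./ (zNear * 2^24 - z);    % [m] resolution per the formula above

semilogy(z, d); grid on;
xlabel('z [m]'); ylabel('depth resolution d [m]');
title('Depth resolution vs. distance (illustrative)');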

3.4 Bounding Rectangle Sensor

The bounding rectangle sensor provides the bounding rectangles of the objects it can detect, serving as a reference for camera-based bounding-rectangle algorithms, e.g. pedestrian detection under poor lighting conditions. Its output is sorted by distance.

Data output to Simulink:
Bounding boxes can be visualized in Simulink using the 'BRS data on Camera image' block provided by PreScan; the block can be found via open('PreScanUsefulBlocks').

3.5 Object Camera Sensor

This sensor models a system consisting of a camera unit and an image-processing unit. In addition, it provides range and Doppler-velocity information about the targets.
The OCS detects all objects flagged as sensor detectable.

Data output to Simulink:

| Signal | Description |
| --- | --- |
| Object ID [-] | Numerical ID of the detected object. |
| ObjectTypeID [-] | Type ID of the detected object. |
| Left [-] | Horizontal screen coordinate of the left side of the bounding box. |
| Right [-] | Horizontal screen coordinate of the right side of the bounding box. |
| Bottom [-] | Vertical screen coordinate of the bottom side of the bounding box. |
| Top [-] | Vertical screen coordinate of the top side of the bounding box. |
| Range [m] | Range at which the target object has been detected; the distance to the nearest point is returned. |
| RangeX [m] | X component of the range, in sensor coordinates. |
| RangeY [m] | Y component of the range, in sensor coordinates. |
| RangeZ [m] | Z component of the range, in sensor coordinates. |
| DopplerVelocity [m/s] | Velocity of the target point, relative to the sensor, along the line of sight between sensor and target point. |
| DopplerVelocityX/Y/Z [m/s] | Velocity of the target point, relative to the sensor, along the line of sight between sensor and target point, decomposed into the X, Y, Z axes of the sensor's coordinate system. |
| Theta [deg] | Azimuth angle in the sensor's coordinate system at which the target is detected. |
| Phi [deg] | Elevation angle in the sensor's coordinate system at which the target is detected. |

3.6 Image Segmentation Sensor

ISS data output to Simulink:

3.7 Point Cloud Sensor

The point cloud sensor builds a point cloud of the surrounding environment; it is used for algorithm development, lidar dimensioning and validation, and HIL testing.
Basic parameters (an angular-resolution check follows the table):

| Parameter | Description | Default | Min. | Max. |
| --- | --- | --- | --- | --- |
| FoV in Azimuth [deg] | The horizontal field of view of the sensor in degrees. | 60 | 0.1 | 120 |
| FoV in Elevation [deg] | For each azimuth direction, the same vertical field of view in degrees. | 30 | 0.1 | 60 |
| #horizontal samples | The number of equi-angular-distant samples in the azimuth direction. | 320 | 1 | 3840 |
| #vertical samples | The number of equi-angular-distant samples in the elevation direction. | 160 | 1 | 2160 |
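From these defaults the angular sample spacing follows directly. A quick MATLAB check (spacing is taken as FoV divided by sample count; the exact sample placement is an assumption):

% Quick check: angular spacing of the point cloud using the defaults above.
fovAz = 60;  nAz = 320;     % [deg], number of horizontal samples
fovEl = 30;  nEl = 160;     % [deg], number of vertical samples
dAz = fovAz / nAz;          % -> 0.1875 deg between horizontal samples
dEl = fovEl / nEl;          % -> 0.1875 deg between vertical samples
fprintf('Angular spacing: %.4f deg (azimuth), %.4f deg (elevation)\n', dAz, dEl);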

Data output to Simulink; PreScan provides one mux block per output mode:

  • PCS mux for World Position
  • PCS mux for World Position and Intensity
  • PCS mux for Range
  • PCS mux for Range and Intensity

The Data Model API provides convenient access to the sensor parameters from MATLAB scripts, as the sample below shows.


%% Sample code for configuring the PCS via the Data Model API
%% Part 1
% Get the model
models = prescan.experiment.readDataModels();
% Find camera sensor CameraSensor_1
sensorData = prescan.sensors.findByName(models, 'PointCloudSensor_1');

% Exit the script when the sensor is not found
if isempty(sensorData)
    disp('Sensor with the specified name is not found.');
    return;
end

%% Part 2
% Create copies of point cloud sensor structures
sensor = models.(sensorData{1}.modelName).sensor{sensorData{1}.indices(1)};
pointCloudSensor = sensor.pointCloudSensor;
sensorBase = sensor.sensorBase;
% Update settings
pointCloudSensor.sensorOutputMode = 'worldPosition'; % Can also be 'range'.
pointCloudSensor.outputIntensity = false;
pointCloudSensor.nearClippingDistance = 0.1; % [m]
pointCloudSensor.farClippingDistance = 150; % [m]
pointCloudSensor.extrapolateRange = true;
pointCloudSensor.sampleAccuracy.x = 0.05; % [deg]
pointCloudSensor.sampleAccuracy.y = 0.05; % [deg]
pointCloudSensor.integerOutput = false; % Do not use sensorOutputMode =
                                        % 'worldPosition' with integerOutput = true.
                                        % Doing so will result in undefined behaviour.
%sensorBase.name = 'PointCloudSensor_1';
sensorBase.fovAzimuth = 60 * pi/180; % [rad]
sensorBase.fovElevation = 30 * pi/180; % [rad]
sensor.resolution.x = 320; % [#samples]
sensor.resolution.y = 160; % [#samples]
sensor.frameRate = int32(20); % [Hz]

% Configure sensor's pose, defaults depend on the actor it is placed on.
% sensorBase.relativePose.position.x = 1.56; % [m]
% sensorBase.relativePose.position.y = 0; % [m]
% sensorBase.relativePose.position.z = 1.22; % [m]
% sensorBase.relativePose.orientation.roll = 0; % [rad]
% sensorBase.relativePose.orientation.pitch = 0; % [rad]
% sensorBase.relativePose.orientation.yaw = 0; % [rad]
% Copy updated structures back into the model.
sensor.pointCloudSensor = pointCloudSensor;
sensor.sensorBase = sensorBase;
models.(sensorData{1}.modelName).sensor{sensorData{1}.indices(1)} = sensor; % same model/index it was read from

%% Part 3
% Run the experiment for 10 seconds
simOut = prescan.experiment.runWithDataModels(models, 'StopTime', '10.0');

4 Tripod

Purpose: used for sensor calibration. The tripod itself is invisible to the sensors.

5 Physics Based

5.1 Physics Based Camera Sensor

5.2 V2X Transceiver

Neither of these has been used yet; they are placeholders for now.
