Sensor Fusion Using Synthetic Radar and Camera Data
This example shows how to generate a driving scenario, simulate sensor detections, and track the simulated vehicles using sensor fusion. The advantage of scenario generation and sensor simulation is that they let you create rare and potentially dangerous scenarios and test the reliability of your algorithms against them. This example covers all of these steps.
Generating the Scenario
Scenario generation comprises defining a road, defining the vehicles on the road, and moving the vehicles. In this example, you test the ability of the sensor fusion to track a vehicle that passes the ego vehicle on the left. The scenario simulates a highway setting, with one vehicle driving ahead of the ego vehicle and another driving behind it.
% Define an empty scenario
scenario = drivingScenario;
scenario.SampleTime = 0.01;
Add a 500-meter stretch of highway with two lanes. The road is defined by a set of points representing the centers of the road and by a road width.
roadCenters = [0 0; 50 0; 100 0; 250 20; 500 40];
roadWidth = 7.2; % Two lanes, each 3.6 meters
road(scenario, roadCenters, roadWidth);
Create the ego vehicle and three other cars: a vehicle that passes the ego vehicle on the left, a vehicle driving directly ahead of it, and a vehicle driving directly behind it. All the vehicles follow their specified waypoints using the path driving policy. The passing vehicle starts in the right lane, moves into the left lane to pass, and returns to the right lane afterwards.
% Create the ego vehicle that travels at 25 m/s along the road
egoCar = vehicle(scenario, 'ClassID', 1);
path(egoCar, roadCenters(2:end,:) - [0 1.8], 25); % On right lane
% Add a car in front of the ego vehicle
leadCar = vehicle(scenario, 'ClassID', 1);
path(leadCar, [70 0; roadCenters(3:end,:)] - [0 1.8], 25); % On right lane
% Add a car that travels at 35 m/s along the road and passes the ego vehicle
passingCar = vehicle(scenario, 'ClassID', 1);
waypoints = [0 -1.8; 50 1.8; 100 1.8; 250 21.8; 400 32.2; 500 38.2];
path(passingCar, waypoints, 35);
% Add a car behind the ego vehicle
chaseCar = vehicle(scenario, 'ClassID', 1);
path(chaseCar, [25 0; roadCenters(2:end,:)] - [0 1.8], 25); % On right lane
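In the calls above, the waypoints are shifted by [0 1.8] to place each car 1.8 meters (half a lane) to the right of the road center. That plain y-shift is exact only while the road runs along the x-axis; a more general approach offsets each waypoint along the local road normal. A minimal Python sketch of that idea (the document's code is MATLAB; offset_polyline is a hypothetical helper, not part of any toolbox):

```python
import math

def offset_polyline(points, offset):
    """Shift each waypoint sideways by `offset` meters, measured along the
    local left-hand normal of the polyline (negative offset = to the right)."""
    out = []
    n = len(points)
    for i, (x, y) in enumerate(points):
        # Direction of travel: central difference where possible.
        x0, y0 = points[max(i - 1, 0)]
        x1, y1 = points[min(i + 1, n - 1)]
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy)
        # The left normal of direction (dx, dy) is (-dy, dx).
        out.append((x - dy / norm * offset, y + dx / norm * offset))
    return out

# On a straight road along x, a -1.8 m offset reduces to a plain y-shift:
right_lane = offset_polyline([(0, 0), (50, 0), (100, 0)], -1.8)
```

On the curved portion of this scenario's road, the normal-based offset and the plain y-shift diverge slightly; the example's straight-shift approximation is close enough for the waypoints used here.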
Defining the Radar and Vision Sensors
In this example, the ego vehicle carries six radar sensors and two vision sensors that together cover a 360-degree field of view. The sensor coverage areas have some overlap and some gaps. The ego vehicle is equipped with a long-range radar sensor and a vision sensor at both the front and the rear. Each side of the vehicle has two short-range radar sensors, each covering 90 degrees: one covers from the middle of the vehicle rearwards, and the other covers from the middle of the vehicle forwards. The figure below depicts the coverage areas.
sensors = cell(8,1);
% Front-facing long-range radar sensor at the center of the front bumper of the car.
sensors{1} = radarDetectionGenerator('SensorIndex', 1, 'Height', 0.2, 'MaxRange', 174, ...
    'SensorLocation', [egoCar.Wheelbase + egoCar.FrontOverhang, 0], 'FieldOfView', [20, 5]);
% Rear-facing long-range radar sensor at the center of the rear bumper of the car.
sensors{2} = radarDetectionGenerator('SensorIndex', 2, 'Height', 0.2, 'Yaw', 180, ...
    'SensorLocation', [-egoCar.RearOverhang, 0], 'MaxRange', 174, 'FieldOfView', [20, 5]);
% Rear-left-facing short-range radar sensor at the left rear wheel well of the car.
sensors{3} = radarDetectionGenerator('SensorIndex', 3, 'Height', 0.2, 'Yaw', 120, ...
    'SensorLocation', [0, egoCar.Width/2], 'MaxRange', 30, 'ReferenceRange', 50, ...
    'FieldOfView', [90, 5], 'AzimuthResolution', 10, 'RangeResolution', 1.25);
% Rear-right-facing short-range radar sensor at the right rear wheel well of the car.
sensors{4} = radarDetectionGenerator('SensorIndex', 4, 'Height', 0.2, 'Yaw', -120, ...
    'SensorLocation', [0, -egoCar.Width/2], 'MaxRange', 30, 'ReferenceRange', 50, ...
    'FieldOfView', [90, 5], 'AzimuthResolution', 10, 'RangeResolution', 1.25);
% Front-left-facing short-range radar sensor at the left front wheel well of the car.
sensors{5} = radarDetectionGenerator('SensorIndex', 5, 'Height', 0.2, 'Yaw', 60, ...
    'SensorLocation', [egoCar.Wheelbase, egoCar.Width/2], 'MaxRange', 30, ...
    'ReferenceRange', 50, 'FieldOfView', [90, 5], 'AzimuthResolution', 10, ...
    'RangeResolution', 1.25);
% Front-right-facing short-range radar sensor at the right front wheel well of the car.
sensors{6} = radarDetectionGenerator('SensorIndex', 6, 'Height', 0.2, 'Yaw', -60, ...
    'SensorLocation', [egoCar.Wheelbase, -egoCar.Width/2], 'MaxRange', 30, ...
    'ReferenceRange', 50, 'FieldOfView', [90, 5], 'AzimuthResolution', 10, ...
    'RangeResolution', 1.25);
% Front-facing camera located at front windshield.
sensors{7} = visionDetectionGenerator('SensorIndex', 7, 'FalsePositivesPerImage', 0.1, ...
    'SensorLocation', [0.75*egoCar.Wheelbase 0], 'Height', 1.1);
% Rear-facing camera located at rear windshield.
sensors{8} = visionDetectionGenerator('SensorIndex', 8, 'FalsePositivesPerImage', 0.1, ...
    'SensorLocation', [0.2*egoCar.Wheelbase 0], 'Height', 1.1, 'Yaw', 180);
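Each sensor's coverage area is a wedge determined by its mounting location, yaw, maximum range, and azimuth field of view. As a rough geometric illustration only (this does not model the radarDetectionGenerator detection process, and the mounting offsets 3.7 and -1.0 below are placeholders for egoCar.Wheelbase + egoCar.FrontOverhang and -egoCar.RearOverhang), here is a Python sketch that tests whether a point in ego coordinates falls inside such a wedge:

```python
import math

def in_coverage(target_xy, sensor_xy, yaw_deg, max_range, fov_deg):
    """Return True if the target (ego coordinates, meters) lies inside the
    wedge defined by the sensor mounting point, yaw, range, and azimuth FOV."""
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    if math.hypot(dx, dy) > max_range:
        return False
    # Bearing of the target relative to boresight, wrapped to [-180, 180).
    bearing = math.degrees(math.atan2(dy, dx)) - yaw_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0
    return abs(bearing) <= fov_deg / 2

# A target 100 m straight ahead is inside the front long-range radar's wedge
# (yaw 0, 174 m range, 20-degree FOV), but not inside the rear-facing one.
print(in_coverage((100, 0), (3.7, 0), 0, 174, 20))     # -> True
print(in_coverage((100, 0), (-1.0, 0), 180, 174, 20))  # -> False
```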
Creating the Tracker
Create a multiObjectTracker to track the vehicles that are close to the ego vehicle. The tracker uses the initSimDemoFilter supporting function to initialize a linear Kalman filter that works with position and velocity.
Tracking is done in the 2-D plane. Although the sensors return 3-D measurements, the motion itself is confined to the horizontal plane, so there is no need to track the height.
tracker = multiObjectTracker('FilterInitializationFcn', @initSimDemoFilter, ...
    'AssignmentThreshold', 30, 'ConfirmationParameters', [4 5]);
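'ConfirmationParameters', [4 5] expresses an M-out-of-N rule: a track is confirmed once detections have been assigned to it in at least 4 of its last 5 updates. A simplified Python sketch of that counting logic (a loose illustration, not the tracker's internal implementation):

```python
from collections import deque

class MofNConfirmation:
    """Confirm a track after M hits within the last N updates (here M=4, N=5)."""
    def __init__(self, m=4, n=5):
        self.m = m
        self.history = deque(maxlen=n)  # True = a detection was assigned this update
        self.confirmed = False

    def update(self, hit):
        self.history.append(bool(hit))
        if sum(self.history) >= self.m:
            self.confirmed = True  # confirmation is sticky once reached
        return self.confirmed

track = MofNConfirmation()
hits = [True, True, False, True, True]  # 4 hits within 5 updates
results = [track.update(h) for h in hits]
print(results)  # -> [False, False, False, False, True]
```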
positionSelector = [1 0 0 0; 0 0 1 0]; % Position selector
velocitySelector = [0 1 0 0; 0 0 0 1]; % Velocity selector
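The selector matrices pick the position and velocity components out of the filter state [x;vx;y;vy]. A quick dependency-free Python check of what they extract (a plain matrix-vector product standing in for the MATLAB one):

```python
def select(H, x):
    """Multiply a selector matrix H by the state vector x."""
    return [sum(h * xi for h, xi in zip(row, x)) for row in H]

state = [10.0, 25.0, -1.8, 0.0]  # [x, vx, y, vy]
positionSelector = [[1, 0, 0, 0], [0, 0, 1, 0]]
velocitySelector = [[0, 1, 0, 0], [0, 0, 0, 1]]

pos = select(positionSelector, state)  # -> [10.0, -1.8], i.e. [x, y]
vel = select(velocitySelector, state)  # -> [25.0, 0.0], i.e. [vx, vy]
```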
% Create the display and return a handle to the bird's-eye plot
BEP = createDemoDisplay(egoCar, sensors);
Simulating the Scenario
The following loop moves the vehicles, calls the sensor simulation, and performs the tracking.
Note that the scenario generation and the sensor simulation can run at different time steps. Specifying different time steps decouples the scenario simulation from the sensor simulation. This is useful for modeling the vehicle motion accurately, independently of the sensors' measurement rates.
Another use case is when the sensors have different update rates. Suppose one sensor updates every 20 ms and another updates every 50 ms. If the scenario update rate is set to 10 ms, each sensor can keep its own update rate unchanged.
In this example, the scenario is advanced at intervals of 0.01 seconds, while the sensors detect every 0.1 seconds. Each sensor returns a logical flag, isValidTime, which is true whenever the sensor returned detections. This flag is used to call the tracker only when there are detections to process.
Another important note is that the sensors can simulate multiple detections per target, in particular when a target is very close to the radar sensors. Because the tracker assumes a single detection per target from each sensor, the detections must be clustered before the tracker processes them. This is done by the clusterDetections function; see the 'Supporting Functions' section.
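The decoupling of the scenario step (0.01 s) from the sensor interval (0.1 s) can be sketched as follows in Python (counting steps with integers to avoid floating-point drift; in the real example, the isValidTime flag comes from the sensor models themselves rather than from the loop):

```python
scenario_dt = 0.01      # scenario advances every 10 ms
sensor_interval = 0.1   # sensors detect every 100 ms
steps_per_detection = round(sensor_interval / scenario_dt)  # 10 scenario steps

tracker_calls = 0
for step in range(100):  # simulate 1 second of scenario time
    time = step * scenario_dt
    is_valid_time = (step % steps_per_detection == 0)
    if is_valid_time:
        # Only now would detections be clustered and the tracker updated.
        tracker_calls += 1

print(tracker_calls)  # -> 10 sensor/tracker updates per simulated second
```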
toSnap = true;
while advance(scenario) && ishghandle(BEP.Parent)
    % Get the scenario time
    time = scenario.SimulationTime;

    % Get the position of the other vehicles in ego vehicle coordinates
    ta = targetPoses(egoCar);

    % Simulate the sensors
    detections = {};
    isValidTime = false(1,8);
    for i = 1:8
        [sensorDets,numValidDets,isValidTime(i)] = sensors{i}(ta, time);
        if numValidDets
            for j = 1:numValidDets
                % Vision detections do not report SNR. The tracker requires
                % that they have the same object attributes as the radar
                % detections. This adds the SNR object attribute to vision
                % detections and sets it to a NaN.
                if ~isfield(sensorDets{j}.ObjectAttributes{1}, 'SNR')
                    sensorDets{j}.ObjectAttributes{1}.SNR = NaN;
                end
            end
            detections = [detections; sensorDets]; %#ok<AGROW>
        end
    end

    % Update the tracker if there are new detections
    if any(isValidTime)
        vehicleLength = sensors{1}.ActorProfiles.Length;
        detectionClusters = clusterDetections(detections, vehicleLength);
        confirmedTracks = updateTracks(tracker, detectionClusters, time);

        % Update bird's-eye plot
        updateBEP(BEP, egoCar, detections, confirmedTracks, positionSelector, velocitySelector);
    end

    % Snap a figure for the document when the car passes the ego vehicle
    if ta(1).Position(1) > 0 && toSnap
        toSnap = false;
        snapnow
    end
end
Summary
This example showed how to generate a scenario, simulate sensor detections, and use these detections to track the moving vehicles around the ego vehicle.
You can try modifying the road, or adding or removing vehicles. You can also try adding, removing, or modifying the sensors on the ego vehicle, or changing the tracker parameters.
Supporting Functions
initSimDemoFilter
This function initializes a constant-velocity filter based on a detection.
function filter = initSimDemoFilter(detection)
% Use a 2-D constant velocity model to initialize a trackingKF filter.
% The state vector is [x;vx;y;vy]
% The detection measurement vector is [x;y;vx;vy]
% As a result, the measurement model is H = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1]
H = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1];
filter = trackingKF('MotionModel', '2D Constant Velocity', ...
    'State', H' * detection.Measurement, ...
    'MeasurementModel', H, ...
    'StateCovariance', H' * detection.MeasurementNoise * H, ...
    'MeasurementNoise', detection.MeasurementNoise);
end
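For reference, a small Python sketch of the same idea: the state is ordered [x, vx, y, vy], H' reorders a [x, y, vx, vy] measurement into that state layout, and the constant-velocity model advances each position by velocity times the time step. This mirrors the structure of trackingKF only loosely (no covariance handling):

```python
def h_transpose_times(z):
    """H' * z: map a measurement [x, y, vx, vy] to the state order [x, vx, y, vy]."""
    x, y, vx, vy = z
    return [x, vx, y, vy]

def predict(state, dt):
    """2-D constant-velocity prediction: each position advances by velocity * dt."""
    x, vx, y, vy = state
    return [x + vx * dt, vx, y + vy * dt, vy]

state = h_transpose_times([100.0, -1.8, 25.0, 0.0])  # initialize from a detection
state = predict(state, 0.1)                           # one 0.1 s prediction step
print(state)  # -> [102.5, 25.0, -1.8, 0.0]
```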
clusterDetections
This function merges multiple detections that are suspected to be of the same vehicle into a single detection. The function looks for detections whose distance to each other is smaller than the size of a vehicle; detections that satisfy this criterion are clustered into a single detection located at the centroid of the cluster. The measurement noise is modified to represent the possibility that each detection can be anywhere on the vehicle; therefore, the noise should have the same size as the vehicle.
In addition, the function removes the third (height) dimension of the measurement and reduces the measurement vector to [x;y;vx;vy].
function detectionClusters = clusterDetections(detections, vehicleSize)
N = numel(detections);
distances = zeros(N);
for i = 1:N
    for j = i+1:N
        if detections{i}.SensorIndex == detections{j}.SensorIndex
            distances(i,j) = norm(detections{i}.Measurement(1:2) - detections{j}.Measurement(1:2));
        else
            distances(i,j) = inf;
        end
    end
end
leftToCheck = 1:N;
i = 0;
detectionClusters = cell(N,1);
while ~isempty(leftToCheck)
    % Remove the detections that are in the same cluster as the one under
    % consideration
    underConsideration = leftToCheck(1);
    clusterInds = (distances(underConsideration, leftToCheck) < vehicleSize);
    detInds = leftToCheck(clusterInds);
    clusterDets = [detections{detInds}];
    clusterMeas = [clusterDets.Measurement];
    meas = mean(clusterMeas, 2);
    meas2D = [meas(1:2);meas(4:5)];
    i = i + 1;
    detectionClusters{i} = detections{detInds(1)};
    detectionClusters{i}.Measurement = meas2D;
    leftToCheck(clusterInds) = [];
end
detectionClusters(i+1:end) = [];

% Since the detections are now for clusters, modify the noise to represent
% that they are of the whole car
for i = 1:numel(detectionClusters)
    measNoise(1:2,1:2) = vehicleSize^2 * eye(2);
    measNoise(3:4,3:4) = eye(2) * 100 * vehicleSize^2;
    detectionClusters{i}.MeasurementNoise = measNoise;
end
end
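To make the greedy clustering concrete, here is a hedged Python transliteration of the same idea, using plain (sensor_index, x, y) tuples instead of detection objects: detections from the same sensor that lie within a vehicle length of the cluster seed are merged into a single centroid.

```python
import math

def cluster_detections(detections, vehicle_size):
    """Greedy clustering: detections are (sensor_index, x, y) tuples. Detections
    from the same sensor closer than vehicle_size to the cluster seed are
    merged, and each cluster is replaced by its centroid (x, y)."""
    left = list(range(len(detections)))
    clusters = []
    while left:
        seed = left[0]
        members = [i for i in left
                   if detections[i][0] == detections[seed][0]  # same sensor only
                   and math.dist(detections[i][1:], detections[seed][1:]) < vehicle_size]
        xs = [detections[i][1] for i in members]
        ys = [detections[i][2] for i in members]
        clusters.append((sum(xs) / len(xs), sum(ys) / len(ys)))
        left = [i for i in left if i not in members]
    return clusters

# Two close detections from sensor 1 merge; the far one remains separate.
dets = [(1, 10.0, 0.0), (1, 11.0, 0.5), (1, 40.0, 0.0)]
print(cluster_detections(dets, 4.7))  # -> [(10.5, 0.25), (40.0, 0.0)]
```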
createDemoDisplay
This function creates a three-panel display:
- Top left: a top view that follows the ego vehicle;
- Bottom left: a chase-camera view that follows the ego vehicle;
- Right half: a bird's-eye plot display.
function BEP = createDemoDisplay(egoCar, sensors)
% Make a figure
hFigure = figure('Position', [0, 0, 1200, 640], 'Name', 'Sensor Fusion with Synthetic Data Example');
movegui(hFigure, [0 -1]); % Moves the figure to the left and a little down from the top
% Add a car plot that follows the ego vehicle from behind
hCarViewPanel = uipanel(hFigure, 'Position', [0 0 0.5 0.5], 'Title', 'Chase Camera View');
hCarPlot = axes(hCarViewPanel);
chasePlot(egoCar, 'Centerline', 'on', 'Parent', hCarPlot);
% Add a car plot that follows the ego vehicle from a top view
hTopViewPanel = uipanel(hFigure, 'Position', [0 0.5 0.5 0.5], 'Title', 'Top View');
hCarPlot = axes(hTopViewPanel);
chasePlot(egoCar, 'Centerline', 'on', 'Parent', hCarPlot, 'ViewHeight', 130, 'ViewLocation', [0 0], 'ViewPitch', 90);
% Add a panel for a bird's-eye plot
hBEVPanel = uipanel(hFigure, 'Position', [0.5 0 0.5 1], 'Title', 'Bird''s-Eye Plot');
% Create bird's-eye plot for the ego car and sensor coverage
hBEVPlot = axes(hBEVPanel);
frontBackLim = 60;
BEP = birdsEyePlot('Parent', hBEVPlot, 'Xlimits', [-frontBackLim frontBackLim], 'Ylimits', [-35 35]);
% Plot the coverage areas for radars
for i = 1:6
    cap = coverageAreaPlotter(BEP, 'FaceColor', 'red', 'EdgeColor', 'red');
    plotCoverageArea(cap, sensors{i}.SensorLocation, ...
        sensors{i}.MaxRange, sensors{i}.Yaw, sensors{i}.FieldOfView(1));
end
% Plot the coverage areas for vision sensors
for i = 7:8
    cap = coverageAreaPlotter(BEP, 'FaceColor', 'blue', 'EdgeColor', 'blue');
    plotCoverageArea(cap, sensors{i}.SensorLocation, ...
        sensors{i}.MaxRange, sensors{i}.Yaw, 45);
end
% Create a vision detection plotter and put it in a struct for future use
detectionPlotter(BEP, 'DisplayName', 'vision', 'MarkerEdgeColor', 'blue', 'Marker', '^');
% Combine all radar detections into one entry and store it for later update
detectionPlotter(BEP, 'DisplayName', 'radar', 'MarkerEdgeColor', 'red');
% Add road borders to plot
laneBoundaryPlotter(BEP, 'DisplayName', 'road', 'Color', [.75 .75 0]);
% Add the tracks to the bird's-eye plot. Show last 10 track updates.
trackPlotter(BEP, 'DisplayName', 'track', 'HistoryDepth', 10);
axis(BEP.Parent, 'equal');
xlim(BEP.Parent, [-frontBackLim frontBackLim]);
ylim(BEP.Parent, [-40 40]);
% Add an outline plotter for ground truth
outlinePlotter(BEP, 'Tag', 'Ground truth');
end
updateBEP
This function updates the bird's-eye plot with the road boundaries, detections, and tracks.
function updateBEP(BEP, egoCar, detections, confirmedTracks, psel, vsel)
% Update road boundaries and their display
rb = roadBoundaries(egoCar);
plotLaneBoundary(findPlotter(BEP, 'DisplayName', 'road'), rb);
% Update ground truth data
[position, yaw, length, width, originOffset, color] = targetOutlines(egoCar);
plotOutline(findPlotter(BEP, 'Tag', 'Ground truth'), position, yaw, length, width, 'OriginOffset', originOffset, 'Color', color);
% Prepare and update detections display
N = numel(detections);
detPos = zeros(N,2);
isRadar = true(N,1);
for i = 1:N
    detPos(i,:) = detections{i}.Measurement(1:2)';
    if detections{i}.SensorIndex > 6 % Vision detections
        isRadar(i) = false;
    end
end
plotDetection(findPlotter(BEP, 'DisplayName', 'vision'), detPos(~isRadar,:));
plotDetection(findPlotter(BEP, 'DisplayName', 'radar'), detPos(isRadar,:));
% Prepare and update tracks display
trackIDs = {confirmedTracks.TrackID};
labels = cellfun(@num2str, trackIDs, 'UniformOutput', false);
[tracksPos, tracksCov] = getTrackPositions(confirmedTracks, psel);
tracksVel = getTrackVelocities(confirmedTracks, vsel);
plotTrack(findPlotter(BEP, 'DisplayName', 'track'), tracksPos, tracksVel, tracksCov, labels);
end