Multi-view point cloud fusion for LiDAR-based cooperative environment detection
[Abstract] A key component of automated driving is 360° environment detection. The recognition capabilities of modern sensors are always limited to their direct field of view. In urban areas, many objects occlude important areas of interest. Information captured by another sensor from a different perspective could resolve such occluded situations. Furthermore, the ability to detect and classify various objects in the surroundings can be improved by taking multiple views into account.
In order to combine the data of two sensors into one coordinate system, a rigid transformation matrix has to be derived. The accuracy of modern, e.g. satellite-based, relative pose estimation systems is not sufficient to guarantee a suitable alignment. Therefore, this work uses a registration-based approach that aligns the environment data captured by two sensors from different positions. Thus, the relative pose estimate obtained by traditional methods is improved and the data can be fused.
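As a minimal sketch of this fusion step (assuming NumPy arrays of shape (N, 3) and a homogeneous 4x4 pose matrix; the variable names and example values are illustrative, not taken from the paper):

import numpy as np

def transform_points(points, T):
    """Apply a 4x4 rigid transformation T to an (N, 3) point array."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ T.T)[:, :3]

# Hypothetical example: map sensor B's cloud into sensor A's frame.
# T_A_B would come from the relative pose estimate, later refined
# by the registration step.
theta = np.deg2rad(5.0)  # small heading offset between the sensors
T_A_B = np.array([[np.cos(theta), -np.sin(theta), 0.0, 2.0],
                  [np.sin(theta),  np.cos(theta), 0.0, 0.5],
                  [0.0,            0.0,           1.0, 0.0],
                  [0.0,            0.0,           0.0, 1.0]])

cloud_A = np.random.rand(1000, 3) * 10.0  # placeholder point clouds
cloud_B = np.random.rand(1000, 3) * 10.0
fused = np.vstack([cloud_A, transform_points(cloud_B, T_A_B)])

Once both clouds share one coordinate system, downstream detection and classification can operate on the fused data.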
To support this, we present an approach which utilizes the uncertainty information of modern tracking systems to determine the possible field of view of the other sensor. Furthermore, it estimates which parts of the captured data are directly visible to both sensors, taking occlusion and shadowing effects into account. Afterwards, a registration method based on the iterative closest point (ICP) algorithm is applied to that data in order to obtain an accurate alignment.
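A minimal sketch of the registration step, using the off-the-shelf point-to-point ICP from Open3D as a stand-in (the paper's exact ICP variant and parameters are not specified here; the correspondence threshold and inputs are placeholder assumptions):

import numpy as np
import open3d as o3d

def register_icp(source_np, target_np, T_init, max_corr_dist=0.5):
    """Refine an initial relative pose with point-to-point ICP.

    source_np, target_np: (N, 3) arrays of mutually visible points,
    e.g. the subsets that survived the occlusion/shadowing filter.
    T_init: 4x4 initial alignment from the tracking systems.
    """
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(source_np)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(target_np)

    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # result.transformation is the refined 4x4 rigid transform;
    # result.fitness is the fraction of inlier correspondences.
    return result.transformation, result.fitness

Restricting ICP to the mutually visible subsets, as the paper proposes, reduces the risk of converging to a misalignment caused by points only one sensor can see.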
The contribution of the presented approach to the achievable accuracy is shown with the help of ground truth data from a LiDAR simulation within a 3-D crossroad model. Results show that a two-dimensional position and heading estimate is sufficient to initialize a successful 3-D registration process. Furthermore, it is shown which initial spatial alignment is necessary to obtain suitable registration results.
[Subject Classification] Electronic, optical, and magnetic materials