5/17/2023

Kc radar in motion

    radarSensor = radarDataGenerator("no scanning","SensorIndex",1, ...
        "DetectionCoordinates","Sensor rectangular");
    radar = uavSensor("Radar",egoUAV,helperRadarAdaptor(radarSensor));

Define Tracking System

Lidar Point Cloud Processing

To fuse the lidar output, the point cloud must first be clustered to extract object detections. Segment out the terrain using the segmentGroundSMRF function from Lidar Toolbox. The remaining point cloud is clustered, and a simple threshold is applied to each cluster's mean elevation to filter out building detections. Fit each cluster with a cuboid to extract a bounding box detection. The helper class helperLidarDetector, available in this example, has the implementation details. Lidar cuboid detections are formatted using the objectDetection object; the measurement of each detection contains the position, dimensions, and orientation of the fitted cuboid.

Use a point target tracker, trackerJPDA, to track the lidar bounding box detections. A point tracker assumes that each UAV can generate at most one detection per sensor scan. This assumption is valid because you have clustered the point cloud into cuboids. To set up a tracker, you need to define the motion model and the measurement model. In this example, you model the dynamics of UAVs using an augmented constant velocity model. The constant velocity model is sufficient to track trajectories consisting of straight flight legs or slowly varying segments. Moreover, assume the orientation and the dimensions of each UAV are constant. As a result, the track state and state transition equations are X = [x_kin; theta; L; W; H] and X(k+1) = [F_cv * x_kin(k); theta(k); L(k); W(k); H(k)], where x_kin = [x; vx; y; vy; z; vz] is the constant velocity kinematic state, theta is the yaw angle, L, W, and H are the cuboid dimensions, and F_cv is the constant velocity state transition matrix.

In this example, you assume that the radar returns are preprocessed such that only returns from moving objects are preserved; that is, there are no returns from the ground or the buildings. The radar resolution is fine enough to generate multiple returns per UAV target, so its detections should not be fed directly to a point target tracker. There are two possible approaches to track with the high-resolution radar detections. In the first approach, you can cluster the detections and augment the state with dimensions and orientation constants, as done previously with the lidar cuboids. In the other approach, adopted in this example, you can feed the detections to an extended target tracker, here a GGIW-PHD tracker. This tracker estimates the extent of each target using an inverse Wishart distribution, whose expectation is a 3-by-3 positive definite matrix, representing the extent of the target as a 3-D ellipsoid. This second approach is preferable because you do not have too many detections per object, and clustering is less accurate than extended target tracking. To create a GGIW-PHD tracker, first define a tracking sensor configuration for each sensor reporting to the tracker.
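The lidar processing chain described above (ground removal, clustering, elevation filtering, cuboid fitting) can be sketched as follows. This is an illustrative outline, not the example's helperLidarDetector implementation: the cluster distance and elevation thresholds are assumed values, and `time` stands for the current scan timestamp.

```matlab
% Sketch of the lidar detection chain (illustrative thresholds only).
[~,nonGroundCloud] = segmentGroundSMRF(ptCloud);        % remove terrain

minClusterDist = 2;                                     % meters (assumed)
[labels,numClusters] = pcsegdist(nonGroundCloud,minClusterDist);

detections = {};
for i = 1:numClusters
    cluster = select(nonGroundCloud,find(labels == i));
    % Reject buildings: clusters whose mean elevation is too high (assumed threshold).
    if mean(cluster.Location(:,3)) > 30
        continue
    end
    model = pcfitcuboid(cluster);                       % fit bounding cuboid
    % model.Parameters = [xctr yctr zctr xlen ylen zlen xrot yrot zrot]
    detections{end+1} = objectDetection(time,model.Parameters'); %#ok<AGROW>
end
```

The cuboid parameters (center, dimensions, rotation) map directly onto the augmented track state described above.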
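With the motion and measurement models in mind, the point tracker for the lidar cuboids can be constructed along these lines. This is a minimal sketch: `initLidarFilter` is a hypothetical filter initialization function standing in for an augmented constant velocity filter, and the threshold values are assumptions, not the example's settings.

```matlab
% Sketch: point target tracker for lidar bounding box detections.
% initLidarFilter (hypothetical) would build a filter over the augmented
% state [x; vx; y; vy; z; vz; theta; L; W; H].
lidarTracker = trackerJPDA( ...
    "FilterInitializationFcn",@initLidarFilter, ...
    "ConfirmationThreshold",[3 5], ...   % assumed M-of-N values
    "DeletionThreshold",[5 5]);          % assumed
```

At each lidar scan, the cuboid detections are passed to the tracker together with the scan time, for example `tracks = lidarTracker(detections,time)`.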
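For the radar branch, a GGIW-PHD tracker built from a per-sensor configuration might look like the following sketch. The configuration values are assumptions for illustration; `initcvggiwphd` is the toolbox's constant velocity GGIW-PHD filter initializer, but the example may use a customized one.

```matlab
% Sketch: GGIW-PHD extended target tracker for the radar detections.
radarConfig = trackingSensorConfiguration(1, ...   % matches radar SensorIndex
    "FilterInitializationFcn",@initcvggiwphd, ...
    "MaxNumDetsPerObject",inf);                    % extended targets: many returns

radarPHDTracker = trackerPHD( ...
    "SensorConfigurations",radarConfig, ...
    "HasSensorConfigurationsInput",true);          % pass configs at each step
```

Because the tracker models each target as an extended object, the raw multi-return radar detections can be passed in directly, with no clustering stage.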