Abstract: Despite the rising interest in surrogate safety analysis, little work has been done to understand and test the impact of the motion prediction methods that are needed to identify whether two road users are on a collision course and to compute many surrogate safety indicators such as the time to collision (TTC). The default, unjustified method used in much of the literature is prediction at constant velocity. In this paper, a generic framework is presented to predict road users' future positions depending on their current position and their choice of acceleration and direction. This makes it possible to generate many predicted trajectories by sampling distributions of acceleration and direction. Three safety indicators are computed over all predicted trajectories: the TTC, an extended version of the predicted post-encroachment time (pPET) and a new indicator, P(UEA), measuring the probability that road users attempting evasive actions fail to avoid the collision. These methods and indicators are illustrated on four case studies of lateral road user interactions. The evidence suggests that the prediction method based on a set of initial positions is the most robust. The last contribution of this paper is to make all the data and code used for this paper available (the code as open source) to enable reproducibility and to start a collaborative effort to compare and improve the methods for surrogate safety analysis.
This page presents all the information necessary to replicate the results presented in this paper: the code, available under the open source MIT license, and the data used as a support for discussion in the paper.
Four traffic events were used in this paper to illustrate the impact of the motion prediction method used to identify potential collision points and to compute indicators such as the TTC, the pPET and P(UEA). The trajectories extracted from the videos are provided in two files per video sequence (there are three video sequences, with case studies 3 and 4 in the same sequence Miss/0208030956): one file for the features (-features.txt) and one for the vehicles or "objects" (-objects.txt). In these text files, the trajectories (features and vehicles) are written one after the other (this text format is replaced in newer projects by a SQLite database; see the project on Bitbucket):
sequence_num first_instant last_instant X1 X2 ... Y1 Y2 ... Vx1 Vx2 ... Vy1 Vy2 ... % ...
where Xi, Yi, Vxi and Vyi are respectively the position coordinates and the velocity vector components at index i (counting 0 as the index of the first measurement; the corresponding frame number is first_instant+i). The homography files, i.e. the 3x3 matrices used to project from image space (in pixels) to the ground plane (in meters), are also provided in the corresponding -homography.txt files. All files, three per video sequence, are in a zip archive in their original hierarchy, i.e. in a Miss or Incident directory, respectively for conflicts and collisions.
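As an illustration, parsing one such trajectory record and projecting its positions to the ground plane might be sketched as follows. This is a minimal sketch under the assumption that each record's fields are whitespace-separated in the order shown above (function names are hypothetical, and any trailing fields after the velocity components are ignored):

```python
import numpy as np

def parse_trajectory_line(line):
    """Parse one trajectory record: sequence_num, first_instant, last_instant,
    then the X, Y, Vx and Vy blocks, all whitespace-separated (assumption)."""
    fields = line.split()
    num, first, last = int(fields[0]), int(fields[1]), int(fields[2])
    n = last - first + 1  # number of measurements
    values = np.array(fields[3:3 + 4 * n], dtype=float)
    x, y, vx, vy = values.reshape(4, n)  # one row per block
    return num, first, last, np.stack([x, y]), np.stack([vx, vy])

def project_to_ground(homography, points):
    """Project 2xN image-space points (pixels) to the ground plane (meters)
    using a 3x3 homography, with homogeneous normalization."""
    augmented = np.vstack([points, np.ones((1, points.shape[1]))])
    projected = homography @ augmented
    return projected[:2] / projected[2]
```

The homography read from a -homography.txt file (e.g. with numpy.loadtxt) can be passed directly to project_to_ground.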
The code has been written in the open source and cross-platform Python language and depends on a larger open source project called Traffic Intelligence. There are two ways to get the necessary files: the first is to download a snapshot of the latest version (click on the "get source" link on the project webpage), the second is to clone the code repository using the Mercurial version control software ($ hg clone ssh://email@example.com/Nicolas/trafficintelligence). The python subdirectory must be in your path to be able to import the modules. The NumPy and Matplotlib libraries must also be installed.
The script that generates the results is called process-extrapolation-hypotheses.py. Two other scripts are provided to plot the figures, plot-results.py and other-figures.py. The only thing you should have to change is the dirname variable, which should point to the Miss and Incident directories extracted from the data archive. The results are stored in CSV files, for the collision points and crossing zones with respectively the TTC and pPET values (-collision-points.csv and -crossing-zones.csv files) and for the probabilities of unsuccessful evasive action (-probability-collision.csv files), in a subfolder for each video sequence. There is one file per motion prediction method, and each line corresponds to measurements at a given instant. The formats are the following:
vehicle_id1, vehicle_id2, instant, x_coordinate, y_coordinate, probability, indicator_value
vehicle_id1, vehicle_id2, nSamples, instant, collision_probability
Extrapolation parameters are saved as a comment on the first line (a line starting with #) so that the parameters used to generate the results can be traced back once the files are generated. The date and time are appended to the filenames of all result files so that the script can be run multiple times in parallel (e.g. as many times as the computer has cores).
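A minimal reader for these result files might look like the following. This is a sketch under the assumptions stated above: the first line starting with # holds the extrapolation parameters, and data lines follow the first format (vehicle_id1, vehicle_id2, instant, x_coordinate, y_coordinate, probability, indicator_value); the function name is hypothetical:

```python
import csv

def read_indicator_file(filename):
    """Read a -collision-points.csv or -crossing-zones.csv result file.
    Returns the parameter comment (first line starting with '#', assumption)
    and the list of records, one per (pair of vehicles, instant, point)."""
    parameters = None
    records = []
    with open(filename) as f:
        for row in csv.reader(f):
            if not row:
                continue
            if row[0].startswith('#'):
                # reassemble the comment line and drop the leading '# '
                parameters = ','.join(row).lstrip('# ')
                continue
            veh1, veh2, instant = (int(v) for v in row[:3])
            x, y, prob, value = (float(v) for v in row[3:7])
            records.append((veh1, veh2, instant, x, y, prob, value))
    return parameters, records
```

The -probability-collision.csv files, which follow the second format, would need a slightly different column interpretation.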
Please do not hesitate to contact us if you have any questions.