This is the official repository of the paper INT2: Interactive Trajectory Prediction at Intersections.
Zhijie Yan, Pengfei Li, Zheng Fu, Shaocong Xu, Yongliang Shi, Xiaoxue Chen, Yuhang Zheng, Yang Li, Tianyu Liu, Chuxuan Li, Nairui Luo, Xu Gao, Yilun Chen, Zuoxu Wang, Yifeng Shi, Pengfei Huang, Zhengxiao Han, Jirui Yuan, Jiangtao Gong, Guyue Zhou, Hang Zhao, Hao Zhao
Motion forecasting is an important component of autonomous driving systems. One of the most challenging problems in motion forecasting is interactive trajectory prediction, whose goal is to jointly forecast the future trajectories of interacting agents. To this end, we present a large-scale interactive trajectory prediction dataset named INT2 for INTeractive trajectory prediction at INTersections. INT2 includes 612,000 scenes, each lasting 1 minute, totaling 10,200 hours of data. The agent trajectories are auto-labeled by a high-performance offline temporal detection and fusion algorithm, and their quality is further inspected by human judges. Vectorized semantic maps and traffic light information are also included in INT2. Additionally, the dataset poses an interesting domain mismatch challenge: for each intersection, we treat rush-hour and non-rush-hour segments as different domains. We benchmark the best open-sourced interactive trajectory prediction method on INT2 and the Waymo Open Motion Dataset, under both in-domain and cross-domain settings.
[coming soon]: INT2 Motion Prediction Challenge 2023 and INT2 Interactive Motion Prediction Challenge 2023 on the challenges page.
[2023-8-9]: The INT2 Dataset, Benchmark, Visualization toolbox, and Interaction filter toolbox are released on this code base page.
[2023-8-8]: The INT2 Dataset Website is open on this website page.
We process the data into a format similar to WOMD (the Waymo Open Motion Dataset).
INT2_Dataset/
├──hdmap
│ ├──LANE
│ │ ├──has_traffic_control # Whether the lane is controlled by traffic signal lights.
│ │ ├──lane_type # The type of the lane.
│ │ ├──turn_direction # The turn direction of the lane, if any.
│ │ ├──is_intersection # Whether the lane is part of an intersection.
│ │ ├──left_neighbor_id # The ID of the adjacent lane on the left side.
│ │ ├──right_neighbor_id # The ID of the adjacent lane on the right side.
│ │ ├──predecessors # The IDs of the lanes that lead into the current lane.
│ │ ├──successors # The IDs of the lanes reached after leaving the current lane.
│ │ ├──centerline # The centerline of the lane.
│ │ ├──left_boundary # The left boundary of the lane.
│ │ └──right_boundary # The right boundary of the lane.
│ ├──STOPLINE
│ │ └──centerline # The stop line.
│ ├──CROSSWALK
│ │ └──polygon # The outer boundary line of the crosswalk.
│ ├──JUNCTION
│ │ └──polygon # The outer boundary line of the junction.
│ └──MAP_RANGE # The extent of the intersection.
│ ├──x_start
│ ├──x_end
│ ├──y_start
│ └──y_end
└──interaction_scenario
├──SCENARIO_ID # The name of the scene, named as "start time - end time".
├──MAP_ID # The hdmap ID corresponding to the scene.
├──DATA_ACQUISITION_TIME
│ ├──begin # The start time of data segment collection, specified to the day, hour, minute, second, and weekday.
│ │ ├──day
│ │ ├──hour
│ │ ├──minute
│ │ ├──second
│ │ └──weekday
│ └──end # The end time of data segment collection, specified to the day, hour, minute, second, and weekday.
│ ├──day
│ ├──hour
│ ├──minute
│ ├──second
│ └──weekday
├──TIMESTAMP_SCENARIO # The timestamps covering the complete scenario.
├──AGENT_INFO
│ ├──object_id # An integer ID for each object.
│ ├──object_type # An integer type for each object (Vehicle, Pedestrian, or Cyclist).
│ ├──object_sub_type # An integer sub-type for each object (CYCLIST, MOTORCYCLIST, TRICYCLIST, etc.).
│ └──state
│ ├──position_x # The x coordinate of each object at each time step.
│ ├──position_y # The y coordinate of each object at each time step.
│ ├──position_z # The z coordinate of each object at each time step.
│ ├──theta # The heading angle (theta) of each object at each time step.
│ ├──velocity_x # The x component of the object velocity at each time step.
│ ├──velocity_y # The y component of the object velocity at each time step.
│ ├──length # The length of each object at each time step.
│ ├──width # The width of each object at each time step.
│ ├──height # The height of each object at each time step.
│ └──valid # A valid flag for all elements of features AGENT_INFO/state/XX. If set to 1, the element is populated with valid data, otherwise it is populated with -1.
├──TRAFFIC_LIGHTS_INFO
│ ├──tf_mapping_lane_id # The IDs of the lanes controlled by traffic signal lights.
│ ├──tf_state_valid # A valid flag for all elements of features TRAFFIC_LIGHTS_INFO/XX. If set to 1, the element is populated with valid data, otherwise it is populated with -1.
│ └──tf_state # The state of each traffic light at each time step.
└──INTERACTION_INFO
├──interested_agents # The IDs of the interested agents.
└──interaction_pair_info
├──influencer_id # The ID of influencer agent.
├──reactor_id # The ID of reactor agent.
├──influencer_type # The type of influencer agent.
├──reactor_type # The type of reactor agent.
├──coexistence_time # The time steps during which both the influencer and reactor agents are present in the scene.
└──interaction_time # The time-step indices during which the influencer and reactor agents are interacting.
For details, please refer to the dataset documentation.
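To get a feel for the format, here is a minimal loading sketch. It assumes each .pickle deserializes to a nested dict keyed as in the layout above; that access pattern is an assumption, so cross-check it against the dataset documentation.

```python
# Minimal loading sketch -- assumes the pickle holds a nested dict
# keyed as in the layout above (verify against the documentation).
import pickle
import numpy as np

with open('int2_dataset_example/scenario/8/012510365201-012510382601.pickle', 'rb') as f:
    scenario = pickle.load(f)

state = scenario['AGENT_INFO']['state']
valid = np.asarray(state['valid'])       # 1 = valid; invalid entries hold -1
pos_x = np.asarray(state['position_x'])
pos_y = np.asarray(state['position_y'])

# Keep only valid samples when computing statistics.
print('valid fraction:', (valid == 1).mean())
```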
We propose an algorithm that enables us to efficiently mine our vast dataset for interactions of research value.
INT2 includes vehicle-vehicle, vehicle-cyclist, and vehicle-pedestrian interactions:
Retrieve the interactions within a scenario:
python interaction_filter.py --scenario_path int2_dataset_example/scenario/8/012510365201-012510382601.pickle --output_dir int2_dataset_example/interaction_scenario/complete_scenario
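For intuition, the sketch below shows one simple proximity-based notion of interaction. It is illustrative only: both the criterion and the 5 m threshold are assumptions, and interaction_filter.py implements the actual mining algorithm.

```python
# Illustrative proximity-based interaction test (not the repository's
# algorithm): two agents "interact" if they come within a distance
# threshold while both are valid. The 5 m threshold is an assumption.
import numpy as np

def find_interaction_pairs(positions, valid, threshold=5.0):
    """positions: (N, T, 2) array; valid: (N, T) array with 1 = valid."""
    pairs = []
    n = positions.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            coexist = (valid[i] == 1) & (valid[j] == 1)  # coexistence mask
            if not coexist.any():
                continue
            dists = np.linalg.norm(positions[i, coexist] - positions[j, coexist], axis=-1)
            if dists.min() < threshold:
                pairs.append((i, j))
    return pairs
```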
Split the complete interactive scenario into 9.1-second interactive scenarios:
python split_interaction.py --interaction_scenario_path int2_dataset_example/interaction_scenario/complete_scenario/8/012510365201-012510382601.pickle --output_dir int2_dataset_example/interaction_scenario/split_scenario
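Conceptually, the split slides a fixed-length window over the scenario timeline. Below is a sketch under the assumption of 10 Hz sampling (so 9.1 s = 91 steps, as in WOMD); split_interaction.py is the authoritative implementation.

```python
# Illustrative 9.1 s window split; assumes 10 Hz sampling so each
# window spans 91 time steps (split_interaction.py is authoritative).
def split_into_windows(num_steps, window=91, stride=91):
    """Return (start, end) index pairs of consecutive windows."""
    return [(s, s + window) for s in range(0, num_steps - window + 1, stride)]

# A 1-minute scenario at 10 Hz has 600 steps -> 6 non-overlapping windows.
print(split_into_windows(600))
```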
Visualize the complete interactive scenario:
python vis_interaction_scenario.py --scenario_path int2_dataset_example/interaction_scenario/complete_scenario/8/012510365201-012510382601.pickle
The results will be saved by default in the output/visualization folder, including an XML file in CommonRoad format, frame-by-frame visualization images, and a complete video.
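Since the exported XML follows the CommonRoad format, it should be loadable with the commonroad-io package. A hedged sketch (the file name below is illustrative):

```python
# Sketch of inspecting the exported XML with commonroad-io
# (pip install commonroad-io); the file name is illustrative.
import matplotlib.pyplot as plt
from commonroad.common.file_reader import CommonRoadFileReader
from commonroad.visualization.mp_renderer import MPRenderer

scenario, planning_problem_set = CommonRoadFileReader(
    'output/visualization/example.xml').open()

renderer = MPRenderer()
scenario.draw(renderer)
renderer.render()
plt.show()
```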
Visualize the interactive scenario segments split into 9.1-second lengths:
python vis_split_interaction_scenario.py --scenario_path int2_dataset_example/interaction_scenario/complete_scenario/8/012510365201-012510382601.pickle
Multiple XML-format files, visualization images, and 9.1-second videos will be saved by default in the output/visualization folder.
We report collision rates so that they can serve as baselines for potential trajectory generation (as opposed to trajectory forecasting) applications. Generated trajectories should be as collision-free as possible under the criteria described below. To calculate collisions:
python calculate_collision.py --scenario_path int2_dataset_example/scenario/0/010213250706-010213264206.pickle --hdmap_dir int2_dataset_example/hdmap
We rasterize both agents and road elements: agents are represented as rectangles, and road elements are decomposed into combinations of triangles. We use the IoU criterion to detect collisions between agents by computing the overlap between their corresponding rectangles, and we detect collisions between agents and road elements by checking whether the agent rectangles overlap with the road element triangles. The collision rate equals the number of collisions divided by the total number of agent-agent pairs or agent-boundary pairs; see the code for details.
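As a concrete illustration of the rectangle-overlap test, here is a sketch using shapely; it is not necessarily the repository's implementation, and the IoU threshold of 0 (flag any overlap) is an assumption.

```python
# Illustrative agent-agent collision test with shapely (not
# necessarily the repository's implementation): each agent is an
# oriented rectangle; overlap above the IoU threshold is a collision.
import numpy as np
from shapely.geometry import Polygon

def agent_box(x, y, theta, length, width):
    """Oriented bounding box of one agent at one time step."""
    half = np.array([[length, width], [length, -width],
                     [-length, -width], [-length, width]]) / 2.0
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return Polygon(half @ rot.T + np.array([x, y]))

def collides(box_a, box_b, iou_thresh=0.0):
    """IoU-based overlap test between two agent rectangles."""
    inter = box_a.intersection(box_b).area
    if inter == 0.0:
        return False
    return inter / box_a.union(box_b).area > iou_thresh

# Example: two overlapping 4 m x 2 m vehicles.
print(collides(agent_box(0, 0, 0.0, 4, 2), agent_box(1, 0, 0.3, 4, 2)))
```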
We used M2I and MTR as benchmarks for our dataset.
If you want to use them, please refer to their respective repositories, linked below.
Quantitative results of M2I on our INT2 dataset.
Quantitative results of MTR on our INT2 dataset.
Coming soon.
If you find this work useful in your research, please consider citing:
@inproceedings{yan2023int2,
title={INT2: Interactive Trajectory Prediction at Intersections},
author={Yan, Zhijie and Li, Pengfei and Fu, Zheng and Xu, Shaocong and Shi, Yongliang and Chen, Xiaoxue and Zheng, Yuhang and Li, Yang and Liu, Tianyu and Li, Chuxuan and Luo, Nairui and Gao, Xu and Chen, Yilun and Wang, Zuoxu and Shi, Yifeng and Huang, Pengfei and Han, Zhengxiao and Yuan, Jirui and Gong, Jiangtao and Zhou, Guyue and Zhao, Hang and Zhao, Hao},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year={2023}
}
- Waymo open motion dataset: https://github.com/waymo-research/waymo-open-dataset
- CommonRoad: https://commonroad.in.tum.de/getting-started
- M2I: https://github.com/Tsinghua-MARS-Lab/M2I
- MTR: https://github.com/sshaoshuai/MTR