Large Language Models as Traffic Signal Control Agents: Capacity and Opportunity


| 1 Introduction | 2 Requirements | 3 Usage | 4 Baselines | 5 Code structure | 6 Datasets | 7 Citation |

1 Introduction

Official code for article "Large Language Models as Traffic Signal Control Agents: Capacity and Opportunity".

Traffic signal control is crucial for optimizing the efficiency of road networks by regulating traffic light phases. Existing research predominantly focuses on heuristic or reinforcement learning (RL)-based methods, which often lack transferability across diverse traffic scenarios and suffer from poor interpretability. This paper introduces a novel approach, LLMLight, utilizing large language models (LLMs) for traffic signal control tasks. By leveraging LLMs' impressive generalization and zero-shot reasoning capabilities, LLMLight executes a human-like decision-making process for efficient traffic management. Specifically, the framework begins by composing task descriptions, current traffic conditions, and prior knowledge into a prompt. Subsequently, we utilize the LLM's chain-of-thought (CoT) reasoning ability to identify the next traffic signal phase, ensuring optimal efficiency in the road network. LLMLight achieves state-of-the-art (SOTA) or competitive results across five real-world traffic datasets. Notably, LLMLight showcases remarkable generalization, interpretability, and zero-shot reasoning abilities, even without any training for transportation management tasks.
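The prompt-composition step described above can be sketched as follows. The field names and wording here are illustrative assumptions, not the repository's actual templates (see the prompts directory for those):

```python
# Illustrative sketch of LLMLight's prompt composition: a task
# description, the current traffic observation, and prior knowledge
# are joined into one prompt, ending with a chain-of-thought cue.
def compose_prompt(task_description, observation, prior_knowledge):
    return "\n\n".join([
        task_description,
        "Current traffic conditions:\n" + observation,
        "Prior knowledge:\n" + prior_knowledge,
        "Let's think step by step, then choose the next signal phase.",
    ])

prompt = compose_prompt(
    "You control the traffic signal at a four-way intersection.",
    "Northbound queue: 12 vehicles; eastbound queue: 3 vehicles.",
    "Longer queues accumulate more waiting time.",
)
```

The resulting string is what gets sent to the LLM agent; the actual prompts shipped with the repository are richer, but follow this task/observation/knowledge structure.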

The code structure is based on Efficient_XLight.

Demo

Watch Our Demo Video Here:

Demo.mp4

2 Requirements

python=3.9, tensorflow=2.8, cityflow, pandas=1.5.0, numpy=1.26.2, wandb, transformers=4.37.0, accelerate=0.25.0, fastapi, uvicorn

cityflow requires a Linux environment; we ran the code on Ubuntu.

3 Usage

Default parameters are provided, so you can run the code directly.

  • For example, to run Advanced-MPLight:
python run_advanced_mplight.py --dataset jinan --traffic_file anon_4_4_hangzhou_real.json
  • To run OpenAI LLM agent, you need to set your key in ./models/chatgpt.py:
headers = {
    "Content-Type": "application/json",
    "Authorization": "YOUR_KEY_HERE"
}
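Rather than hard-coding the key, you may prefer to read it from an environment variable. A minimal sketch, assuming the key is exported as OPENAI_API_KEY (adjust to match how ./models/chatgpt.py actually loads its configuration); note that the OpenAI HTTP API expects the key with a "Bearer " prefix:

```python
import os

# Build the request headers without hard-coding the key.
# OPENAI_API_KEY is an assumed variable name; "YOUR_KEY_HERE" is the
# fallback placeholder from the snippet above.
api_key = os.environ.get("OPENAI_API_KEY", "YOUR_KEY_HERE")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}
```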

Then, run the OpenAI LLM traffic agent:

python run_chatgpt_commonsense.py --dataset jinan --traffic_file anon_4_4_hangzhou_real.json --gpt_version gpt-4
  • To run open-source LLMs, you can either launch a local API backend:
python open_llm_api.py --workers 2

Then, run the open LLM traffic agent:

python run_open_llm_commonsense.py --dataset jinan --traffic_file anon_4_4_hangzhou_real.json --llm_model llama_2_13b_chat_hf --llm_api_thread_num 2 --with_external_api false

Note:

  • You first need to download your LLM and put it under the ./llm_models directory.
  • Set the number of workers for the open LLM API backend (--workers) and the traffic agent (--llm_api_thread_num) to the same value.

Alternatively, you can use the Perplexity API by setting your key in ./models/open_sourced_llm_models.py:

ex_headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "Authorization": "YOUR_PERPLEXITY_KEY_HERE"
}

Then, run the open LLM traffic agent:

python run_open_llm_commonsense.py --dataset jinan --traffic_file anon_4_4_hangzhou_real.json --llm_model llama_2_13b_chat_hf --llm_api_thread_num 2 --with_external_api true

4 Baselines

  • Heuristic Methods:
    • Fixedtime, MaxPressure, EfficientMaxPressure
  • DNN-RL:
    • PressLight, MPLight, CoLight, AttendLight, EfficientMPLight, EfficientPressLight, EfficientColight
  • Adv-DNN-RL:
    • AdvancedMaxPressure, AdvancedMPLight, AdvancedColight
  • LLMs:
    • gpt-3.5-turbo-0613, gpt-4-0613, llama-2-13b-chat-hf, llama-2-70b-chat-hf

5 Code structure

  • models: contains all the models used in our article.
  • utils: contains all the methods to simulate and train the models.
  • frontend: contains visual replay files of different agents.
  • errors: contains error logs of ChatGPT agents.
  • {LLM_MODEL}_logs: contains dialog log files of an LLM.
  • prompts: contains base prompts of ChatGPT agents.

6 Datasets

Road networks   Intersections   Road network arg   Traffic files
Jinan           3 × 4           jinan              anon_3_4_jinan_real.json
                                                   anon_3_4_jinan_real_2000.json
                                                   anon_3_4_jinan_real_2500.json
                                                   anon_3_4_jinan_synthetic_4000_10min.json
Hangzhou        4 × 4           hangzhou           anon_4_4_hangzhou_real.json
                                                   anon_4_4_hangzhou_real_5816.json
                                                   anon_4_4_hangzhou_synthetic_4000_10min.json
New York-16X3   16 × 3          newyork_16x3       anon_16_3_newyork_real.json
New York-28X7   28 × 7          newyork_28x7       anon_28_7_newyork_real_double.json
                                                   anon_28_7_newyork_real_triple.json
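The traffic files above are CityFlow flow files: JSON lists where each item describes one vehicle flow (route, start/end time, etc.). A minimal sketch of inspecting one; the sample entries below are illustrative, so check the actual dataset files for the exact schema:

```python
import json

# Two made-up entries shaped like CityFlow flow records; in practice
# you would json.load() a file such as anon_3_4_jinan_real.json.
sample = json.loads("""
[
  {"route": ["road_0_1_0", "road_1_1_0"], "startTime": 0, "endTime": 0},
  {"route": ["road_2_1_2", "road_1_1_2"], "startTime": 5, "endTime": 5}
]
""")
print(f"{len(sample)} flow entries")
print("first route:", " -> ".join(sample[0]["route"]))
```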

7 Citation

@inproceedings{Lai2023LargeLM,
  title={Large Language Models as Traffic Signal Control Agents: Capacity and Opportunity},
  author={Siqi Lai and Zhao Xu and Weijia Zhang and Hao Liu and Hui Xiong},
  year={2023},
  url={https://api.semanticscholar.org/CorpusID:266551220}
}
