
[NeurIPS 2024] ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution

🥳 Welcome! This is a codebase that accompanies the paper ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution.

Give ReEvo 5 minutes, and get a state-of-the-art algorithm in return!

1. News 📰

  • Sep. 2024: ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution has been accepted at NeurIPS 2024 🥳
  • May 2024: We release a new paper version
  • Apr. 2024: Novel use cases for Neural Combinatorial Optimization (NCO) and Electronic Design Automation (EDA)
  • Feb. 2024: We are excited to release ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution 🚀

2. Introduction 🚀

Diagram of ReEvo

We introduce Language Hyper-Heuristics (LHHs), an emerging variant of Hyper-Heuristics (HHs) that leverages LLMs for heuristic generation, featuring minimal manual intervention and open-ended heuristic spaces.

To empower LHHs, we present Reflective Evolution (ReEvo), a generic search framework that emulates the reflective design approach of human experts while far surpassing human capabilities with its scalable LLM inference, Internet-scale domain knowledge, and powerful evolutionary search.

3. Exciting Highlights 🌟

We can improve the following types of algorithms:

  • Neural Combinatorial Optimization (NCO)
  • Genetic Algorithm (GA)
  • Ant Colony Optimization (ACO)
  • Guided Local Search (GLS)
  • Constructive Heuristics

on the following problems:

  • Traveling Salesman Problem (TSP)
  • Capacitated Vehicle Routing Problem (CVRP)
  • Orienteering Problem (OP)
  • Multiple Knapsack Problems (MKP)
  • Bin Packing Problem (BPP)
  • Decap Placement Problem (DPP)

under both black-box and white-box settings.

4. Usage 🔑

  • Set your LLM API key (OpenAI API, ZhiPu API, Llama API) as an environment variable, or pass it on the command line:
    $ python main.py llm_client=openai llm_client.api_key="<Your API key>"  # see more options in ./cfg/llm_client
  • Running logs and intermediate results are saved in ./outputs/main/ by default.
  • Datasets are generated on the fly.
  • Some test notebooks are provided in ./problems/*/test.ipynb.
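As an alternative to passing the key on the command line, you can set it in the environment before launching; a minimal sketch (the `OPENAI_API_KEY` variable name is the OpenAI client's usual default — check ./cfg/llm_client for the variable names ReEvo actually reads):

```python
import os

# Assumption: the OpenAI client reads OPENAI_API_KEY from the environment;
# check ./cfg/llm_client for the variable names ReEvo actually expects.
# setdefault keeps an already-exported key untouched.
os.environ.setdefault("OPENAI_API_KEY", "<Your API key>")
```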

4.1. Dependency

  • Python >= 3.11
  • openai >= 1.0.0
  • hydra-core
  • scipy

You may install the dependencies above via pip install -r requirements.txt.

Problem-specific dependencies:

  • tsp_aco(_black_box): pytorch, scikit-learn
  • cvrp_aco(_black_box) / mkp_aco(_black_box) / op_aco(_black_box) / NCO: pytorch
  • tsp_gls: numba==0.58

4.2. To run ReEvo

```shell
# e.g., for tsp_aco
#   problem       - problem name
#   init_pop_size - initial population size
#   pop_size      - population size
#   max_fe        - maximum number of heuristic evaluations
#   timeout       - allowed evaluation time for one generation
python main.py \
    problem=tsp_aco \
    init_pop_size=4 \
    pop_size=4 \
    max_fe=20 \
    timeout=20
```

Check out ./cfg/ for more options.

4.3. Available problems

  • Traveling Salesman Problem (TSP): tsp_aco, tsp_aco_black_box, tsp_constructive, tsp_gls, tsp_pomo, tsp_lehd
  • Capacitated Vehicle Routing Problem (CVRP): cvrp_aco, cvrp_aco_black_box, cvrp_pomo, cvrp_lehd
  • Bin Packing Problem (BPP): bpp_offline_aco, bpp_offline_aco_black_box, bpp_online
  • Multiple Knapsack Problems (MKP): mkp_aco, mkp_aco_black_box
  • Orienteering Problem (OP): op_aco, op_aco_black_box
  • Decap Placement Problem (DPP): dpp_ga

4.4. Simple steps to apply ReEvo to your problem

  • Define your problem in ./cfg/problem/.
  • Generate problem instances and implement the evaluation pipeline in ./problems/.
  • Add function_description, function_signature, and seed_function in ./prompts/.
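For concreteness, here is a hypothetical seed function in the style a tsp_aco-like problem might use: it scores each edge by inverse distance, so shorter edges look more attractive. The name and signature are illustrative only — copy the real function_signature and seed_function from the existing entries under ./prompts/.

```python
# Hypothetical seed heuristic for a TSP-style problem: score each edge by
# inverse distance. The name and signature are illustrative; take the real
# ones from ./prompts/ for your problem.
def heuristics(distance_matrix: list[list[float]]) -> list[list[float]]:
    n = len(distance_matrix)
    scores = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and distance_matrix[i][j] > 0:
                scores[i][j] = 1.0 / distance_matrix[i][j]
    return scores
```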

By default:

  • The LLM-generated heuristic is written to ./problems/YOUR_PROBLEM/gpt.py and imported by ./problems/YOUR_PROBLEM/eval.py (e.g. for TSP_ACO), which is called by reevo._run_code during ReEvo.
  • In training mode, ./problems/YOUR_PROBLEM/eval.py (e.g. for TSP_ACO) should print the meta-objective value as the last line of stdout, which is parsed by reevo.evaluate_population for heuristic evaluation.
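A minimal sketch of what such an evaluation script might look like, assuming a toy objective; the real eval.py scripts under ./problems/ define the actual instances and objectives:

```python
# Toy evaluation script in the eval.py style: score the heuristic on a few
# instances and print the meta-objective as the LAST line of stdout, since
# that final line is what reevo.evaluate_population parses.
# The instances and objective below are placeholders.

def objective(instance: list[float]) -> float:
    return sum(instance)  # placeholder cost

def main() -> float:
    instances = [[1.0, 2.0], [3.0, 4.0]]
    mean_obj = sum(objective(inst) for inst in instances) / len(instances)
    print(mean_obj)  # must be the final line of stdout
    return mean_obj

if __name__ == "__main__":
    main()
```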

4.5. Use Alternative LLMs

Use the CLI parameter llm_client to select an LLM API provider, and llm_client.model to choose the model. For example:

$ export LLAMA_API_KEY=xxxxxxxxxxxxxxxxxxxx
$ python main.py llm_client=llama_api llm_client.model=gemma2-9b

Supported LLM API providers and models (note that only chat models are supported) are listed in ./cfg/llm_client.

5. Citation 🤩

If you encounter any difficulty using our code, please do not hesitate to submit an issue or directly contact us!

We are also on Slack if you have any questions or would like to discuss ReEvo with us. We are open to collaborations and would love to hear from you 🚀

If you find our work helpful (or if you are so kind as to offer us some encouragement), please consider giving us a star and citing our paper.

```bibtex
@inproceedings{ye2024reevo,
    title={ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution},
    author={Haoran Ye and Jiarui Wang and Zhiguang Cao and Federico Berto and Chuanbo Hua and Haeyeon Kim and Jinkyoo Park and Guojie Song},
    booktitle={Advances in Neural Information Processing Systems},
    year={2024},
    note={\url{https://github.com/ai4co/reevo}}
}
```

6. Acknowledgments 🫡

We are very grateful to Yuan Jiang, Yining Ma, Yifan Yang, and the AI4CO community for valuable discussions and feedback.

Our work also builds upon the following projects, among others: