# Awesome LLM Bayesian Optimization

Curated papers and projects (2023 onward) that combine Large Language Models (LLMs) with Bayesian Optimization (BO).
Goal: help researchers quickly survey strategies, applications, and design patterns for LLM-based BO.


LLMs are increasingly used to warm-start BO, guide acquisition, embed complex search spaces, and even generate BO algorithms.
Each entry lists year, application domain, strategy, highlights, and a source link.


## Scope

We include papers, preprints, and project pages that either:

- apply BO to LLM-related problems (e.g., prompt tuning, finetuning hyperparameters), or
- use LLMs to guide or accelerate a BO procedure (e.g., LLM-guided acquisition, embeddings, warm-starting).

Coverage: 2023–2025 (and growing). Please contribute via PRs/issues (see How to Contribute).


## Table of Papers

Longer notes follow the table to keep things tidy; a short code sketch of the shared embed-then-regress pattern sits just below it.

| Year | Paper | Application / Domain | LLM + BO Strategy | Highlights | Source |
|------|-------|----------------------|-------------------|------------|--------|
| 2025 | **LLaMEA-BO: LLM-aided Evolutionary Algorithm for BO** | Meta-optimization | LLM + evolution to auto-generate BO algorithm code (init/surrogate/acquisition) | Outperforms several BO baselines on BBOB; no extra finetuning | https://arxiv.org/abs/2505.21034 |
| 2025 | **Reasoning BO** | General & chemistry | LLM reasoning + multi-agent setup + knowledge graph coupled to BO | Real-time sampling recommendations; improved yields in a direct arylation example | https://arxiv.org/abs/2505.12833 |
| 2025 | **Distilling and Exploiting Quantitative Insights from LLMs for Chemical Reaction BO** | Chemical reaction optimization | Prompt an LLM to elicit priors; train a utility function that guides BO | Learned utility correlates with yields; speeds up optimization | https://arxiv.org/abs/2504.08874 |
| 2025 | **GOLLuM: Gaussian Process Optimized LLMs** | Reaction optimization & finetuning | Treat finetuning as a GP marginal-likelihood objective; BO tunes hyperparameters with an LLM-based kernel | Higher discovery rate; +14% vs. domain-specific representations | https://arxiv.org/abs/2504.06265 |
| 2024 | **LLAMBO: Large Language Models to Enhance BO** | General black-box | Frame BO in natural language; LLM proposes candidates and warm-starts | Strong early-stage performance; modular components | https://openreview.net/forum?id=OOxotBmGol |
| 2024 | **Language Model Embeddings for BO (Embed-then-Regress)** | General black-box | Use an LLM to embed string inputs; regress in embedding space; apply BO | Enables BO over arbitrary strings; GP-competitive | https://arxiv.org/abs/2410.10190 |
| 2024 | **PEBOL: BO with LLM-Based Acquisition Functions for NL Preference Elicitation** | Conversational recommenders | NLI + LLM-guided Thompson sampling/UCB queries within BO | Better MRR after a few rounds vs. monolithic LLM baselines | https://arxiv.org/abs/2405.00981 |
| 2024 | **LLANA: LLM-Enhanced BO for Analog Layout Constraint Generation** | Analog circuit synthesis | LLM generates design-dependent constraints that guide BO | Faster exploration; SOTA-level performance | https://arxiv.org/abs/2406.05250 |
| 2023 | **Bayesian Approach for Prompt Optimization in Pre-trained LMs** | Prompt tuning | Relax discrete prompts to embeddings; run BO (BoTorch) | Finds hard prompts without LM changes; analyzes accuracy/time trade-offs | https://arxiv.org/abs/2312.00471 |
| 2023 | **BoChemian: LLM Embeddings for BO of Chemical Reactions** | Chemistry | Map text procedures via LLM embeddings; optimize with BO | Open-source LLMs yield effective reaction features for BO | https://neurips.cc/virtual/2023/78776 |

Feel free to open a PR to add: Bayesian Optimization for Instruction Generation (BOInG), Optimal RoPE Extension via BO, BOPRO, HOLLM, ADO-LLM, LEDRO, HbBoPs, Bilevel-BO-SWA, Multi-task BO with LLM inits, model fusion via BO, and more.
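
Several entries above (Embed-then-Regress, BoChemian) share one core pattern: embed free-form string inputs with an LLM, then run standard GP-based BO in the embedding space. The sketch below illustrates that pattern; `llm_embed` is a hypothetical placeholder for a real text-embedding model, and UCB over a fixed candidate pool is one deliberately simple acquisition choice, not the method of any specific paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def llm_embed(text: str) -> np.ndarray:
    """Hypothetical placeholder: in practice, call an LLM embedding model.
    Here we hash characters into a fixed-size vector just to stay runnable."""
    vec = np.zeros(32)
    for i, ch in enumerate(text):
        vec[i % 32] += ord(ch)
    return vec / (np.linalg.norm(vec) + 1e-9)

def embed_then_regress_bo(candidates, objective, n_init=3, n_iter=10, beta=2.0):
    """BO over arbitrary strings: GP surrogate on LLM embeddings + UCB."""
    X = np.stack([llm_embed(c) for c in candidates])
    rng = np.random.default_rng(0)
    evaluated = list(rng.choice(len(candidates), size=n_init, replace=False))
    ys = [objective(candidates[i]) for i in evaluated]
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X[evaluated], np.array(ys))
        mu, sigma = gp.predict(X, return_std=True)
        ucb = mu + beta * sigma
        ucb[evaluated] = -np.inf          # never re-query an evaluated string
        nxt = int(np.argmax(ucb))
        evaluated.append(nxt)
        ys.append(objective(candidates[nxt]))
    best = int(np.argmax(ys))
    return candidates[evaluated[best]], ys[best]

# Toy usage: text "procedures" scored by a synthetic objective.
procs = [f"heat to {t} C for {m} min"
         for t in (50, 65, 80, 95, 110) for m in (5, 10, 20, 30, 45)]
best, score = embed_then_regress_bo(procs, objective=lambda s: -abs(len(s) - 22))
print(best, score)
```

Swapping `llm_embed` for a real embedding endpoint and the fixed pool for a generative proposal step roughly recovers the setups described in those papers.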


## Additional Notes

- **Reasoning BO / LLM research assistants for BO.** Works like Reasoning BO integrate long-context reasoning, multi-agent setups, and knowledge graphs to steer sampling; similar ideas appear in LLM assistants that warm-start BO and supply interpretable commentary. See the sources for concrete gains and case studies.
- **Analog design (LLANA et al.).** Analog layout/sizing benefits from LLMs generating constraints or candidate points that focus BO, cutting exploration cost while hitting competitive figures of merit.
- **Prompt & instruction optimization.** Papers relax discrete prompts to embeddings or use stochastic mini-batch surrogates, allowing BO to search combinatorial prompt spaces without backprop access to the LM; a minimal sketch follows this list.
- **Chemical reaction optimization.** LLM embeddings and elicited priors provide strong features and priors for BO, improving yields and sample efficiency in reaction-optimization campaigns.
- **Meta-optimization & algorithm generation.** LLaMEA-BO shows LLMs can design BO algorithms themselves; other works learn to initialize or fuse models, creating a virtuous cycle in which LLMs and BO co-improve.
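
As a concrete illustration of the prompt-relaxation pattern, here is a minimal sketch under stated assumptions: the prompt lives as a continuous vector in a toy token-embedding space, a GP surrogate models the downstream score, and each continuous query is projected back to its nearest discrete tokens before evaluation. The vocabulary, scoring function, and random-candidate acquisition search are all stand-ins, not the method of any paper in the table.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
VOCAB = rng.normal(size=(500, 8))   # toy stand-in for an LM's token-embedding table
PROMPT_LEN, DIM = 4, 8
TARGET = rng.normal(size=DIM)       # hidden optimum defining the toy objective

def project(z: np.ndarray) -> np.ndarray:
    """Snap each relaxed token vector to its nearest real token (a hard prompt)."""
    toks = z.reshape(PROMPT_LEN, DIM)
    dists = ((toks[:, None, :] - VOCAB[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def score(token_ids: np.ndarray) -> float:
    """Toy stand-in for downstream task performance of the decoded prompt."""
    return -float(np.linalg.norm(VOCAB[token_ids].mean(axis=0) - TARGET))

# BO in the relaxed space: GP surrogate + UCB maximized over random continuous points.
Z = rng.normal(size=(5, PROMPT_LEN * DIM))            # initial designs
y = np.array([score(project(z)) for z in Z])
for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(Z, y)
    cand = rng.normal(size=(256, PROMPT_LEN * DIM))   # cheap acquisition search
    mu, sd = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(mu + 2.0 * sd)]
    Z = np.vstack([Z, z_next])
    y = np.append(y, score(project(z_next)))

print("best hard prompt (token ids):", project(Z[np.argmax(y)]), "score:", y.max())
```

The papers above replace the random acquisition search with proper acquisition maximization and the toy score with real LM evaluations; the relax-optimize-project loop is the part this sketch is meant to show.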


## How to Contribute

  1. Add a row to the table (keep it concise).
  2. Include a short 1–2 line summary and a public source link (arXiv/OpenReview/project page).
  3. PRs welcome for new categories (e.g., robotics, materials, compiler/hardware).

Template row:

```
| YEAR | **Paper title** | Domain | LLM + BO strategy | Key highlight | https://link |
```
