🌻 Homepage • 📖 Paper List • 📊 Meta-eval • 🌟 Arxiv • 🔗 Talk
This repo includes the papers discussed in our survey paper A Survey on LLM-as-a-Judge.
Feel free to cite it if you find our survey useful for your research:
@article{gu2024surveyllmasajudge,
	title   = {A Survey on LLM-as-a-Judge},
	author  = {Jiawei Gu and Xuhui Jiang and Zhichao Shi and Hexiang Tan and Xuehao Zhai and Chengjin Xu and Wei Li and Yinghan Shen and Shengjie Ma and Honghao Liu and Yuanzhuo Wang and Jian Guo},
	year    = {2024},
	journal = {arXiv preprint arXiv:2411.15594}
}
🔥 [2025-01-28] We added analysis on LLM-as-a-Judge and o1-like Reasoning Enhancement, as well as meta-evaluation results on o1-mini, Gemini-2.0-Flash-Thinking-1219, and DeepSeek-R1!
🌟 [2025-01-16] We shared and discussed the methodologies, applications (Finance, RAG, and Synthetic Data), and future research directions of LLM-as-a-Judge at BAAI Talk! 🤗 [Replay] [Methodology] [RAG & Synthetic Data]
🚀 [2024-11-23] We released A Survey on LLM-as-a-Judge, exploring LLMs as reliable, scalable evaluators and outlining key challenges and future directions!
- Reference
- Overview of LLM-as-a-Judge
- Evaluation Pipelines of LLM-as-a-Judge
- How to Build a Reliable LLM-as-a-Judge
- Table of Contents
- Paper List
- 1 What is LLM-as-a-Judge?
- 2 How to use LLM-as-a-Judge?
- 3 How to improve LLM-as-a-Judge?
- 4 How to evaluate LLM-as-a-Judge?
- 5 Application
- 6 Challenges
 
- A Multi-Aspect Framework for Counter Narrative Evaluation using Large Language Models. NAACL 2024. Jaylen Jones, Lingbo Mo, Eric Fosler-Lussier, and Huan Sun. [Paper]
- Generative judge for evaluating alignment. ArXiv preprint 2023. Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. [Paper]
- Judgelm: Fine-tuned large language models are scalable judges. ArXiv preprint 2023. Lianghui Zhu, Xinggang Wang, and Xinlong Wang. [Paper]
- Large Language Models are Better Reasoners with Self-Verification. EMNLP Findings 2023. Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Shengping Liu, Bin Sun, Kang Liu, and Jun Zhao. [Paper]
- Benchmarking Foundation Models with Language-Model-as-an-Examiner. NeurIPS 2023. Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, and Lei Hou. [Paper]
- Human-like summarization evaluation with chatgpt. ArXiv preprint 2023. Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. [Paper]
- Reflexion: language agents with verbal reinforcement learning. NeurIPS 2023. Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. [Paper]
- MacGyver: Are Large Language Models Creative Problem Solvers? NAACL 2024. Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas Griffiths, and Faeze Brahman. [Paper]
- Think-on-graph: Deep and responsible reasoning of large language model with knowledge graph. ArXiv preprint 2023. Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Heung-Yeung Shum, and Jian Guo. [Paper]
- Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting. NAACL Findings 2024. Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. [Paper]
- Aligning with human judgement: The role of pairwise preference in large language model evaluators. COLM 2024. Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulic, Anna Korhonen, and Nigel Collier. [Paper]
- LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models. EACL 2024. Adian Liusie, Potsawee Manakul, and Mark Gales. [Paper]
- Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]
- Rrhf: Rank responses to align language models with human feedback without tears. ArXiv preprint 2023. Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. [Paper]
- PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. [Paper]
- Human-like summarization evaluation with chatgpt. ArXiv preprint 2023. Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. [Paper]
- Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]
- AlpacaEval: An Automatic Evaluator of Instruction-following Models. 2023. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. [Code]
- PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. [Paper]
- Judgelm: Fine-tuned large language models are scalable judges. ArXiv preprint 2023. Lianghui Zhu, Xinggang Wang, and Xinlong Wang. [Paper]
- Generative judge for evaluating alignment. ArXiv preprint 2023. Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. [Paper]
- Prometheus: Inducing Fine-grained Evaluation Capability in Language Models. ArXiv preprint 2023. Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, et al. [Paper]
- xFinder: Robust and Pinpoint Answer Extraction for Large Language Models. ArXiv preprint 2024. Qingchen Yu, Zifan Zheng, Shichao Song, Zhiyu Li, Feiyu Xiong, Bo Tang, and Ding Chen. [Paper]
- MacGyver: Are Large Language Models Creative Problem Solvers? NAACL 2024. Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas Griffiths, and Faeze Brahman. [Paper]
- Guiding LLMs the right way: fast, non-invasive constrained generation. ICML 2024. Luca Beurer-Kellner, Marc Fischer, and Martin Vechev. [Paper]
- XGrammar: Flexible and Efficient Structured Generation Engine for Large Language Models. ArXiv preprint 2024. Yixin Dong, Charlie F. Ruan, Yaxing Cai, Ruihang Lai, Ziyi Xu, Yilong Zhao, and Tianqi Chen. [Paper]
- SGLang: Efficient Execution of Structured Language Model Programs. NeurIPS 2025. Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, and Ying Sheng. [Paper]
- Reasoning with Language Model is Planning with World Model. EMNLP 2023. Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. [Paper]
- Speculative rag: Enhancing retrieval augmented generation through drafting. ArXiv preprint 2024. Zilong Wang, Zifeng Wang, Long Le, Huaixiu Steven Zheng, Swaroop Mishra, Vincent Perot, Yuwei Zhang, Anush Mattapalli, Ankur Taly, Jingbo Shang, et al. [Paper]
- Agent-as-a-Judge: Evaluate Agents with Agents. ArXiv preprint 2024. Mingchen Zhuge, Changsheng Zhao, Dylan Ashley, Wenyi Wang, Dmitrii Khizbullin, Yunyang Xiong, Zechun Liu, Ernie Chang, Raghuraman Krishnamoorthi, Yuandong Tian, et al. [Paper]
- Reasoning with Language Model is Planning with World Model. EMNLP 2023. Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. [Paper]
- AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback. NeurIPS 2023. Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. [Paper]
- Large language models are not fair evaluators. ACL 2024. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. [Paper]
- Wider and deeper llm networks are fairer llm evaluators. ArXiv preprint 2023. Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. [Paper]
- Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]
- SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation. Blog 2023. Seonghyeon Ye, Yongrae Jo, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, and Minjoon Seo. [Blog]
- Shepherd: A Critic for Language Model Generation. ArXiv preprint 2023. Tianlu Wang, Ping Yu, Xiaoqing Ellen Tan, Sean O’Brien, Ramakanth Pasunuru, Jane Dwivedi-Yu, Olga Golovneva, Luke Zettlemoyer, Maryam Fazel-Zarandi, and Asli Celikyilmaz. [Paper]
- PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. [Paper]
- RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment. ArXiv preprint 2023. Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. [Paper]
- Rrhf: Rank responses to align language models with human feedback without tears. ArXiv preprint 2023. Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. [Paper]
- Stanford Alpaca: An Instruction-following LLaMA model. 2023. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. [Code]
- Languages are rewards: Hindsight finetuning using human feedback. ArXiv preprint 2023. Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. [Paper]
- The Wisdom of Hindsight Makes Language Models Better Instruction Followers. PMLR 2023. Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, and Joseph E. Gonzalez. [Paper]
- Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. NeurIPS 2023. Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David D. Cox, Yiming Yang, and Chuang Gan. [Paper]
- Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. ArXiv preprint 2023. Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. [Paper]
- Self-taught evaluators. ArXiv preprint 2024. Tianlu Wang, Ilia Kulikov, Olga Golovneva, Ping Yu, Weizhe Yuan, Jane Dwivedi-Yu, Richard Yuanzhe Pang, Maryam Fazel-Zarandi, Jason Weston, and Xian Li. [Paper]
- Holistic analysis of hallucination in gpt-4v(ision): Bias and interference challenges. ArXiv preprint 2023. Chenhang Cui, Yiyang Zhou, Xinyu Yang, Shirley Wu, Linjun Zhang, James Zou, and Huaxiu Yao. [Paper]
- Evaluating Object Hallucination in Large Vision-Language Models. EMNLP 2023. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Xin Zhao, and Ji-Rong Wen. [Paper]
- Evaluation and analysis of hallucination in large vision-language models. ArXiv preprint 2023. Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, et al. [Paper]
- Aligning large multimodal models with factually augmented rlhf. ArXiv preprint 2023. Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. [Paper]
- MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark. ICML 2024. Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, and Lichao Sun. [Paper]
- Agent-as-a-Judge: Evaluate Agents with Agents. ArXiv preprint 2024. Mingchen Zhuge, Changsheng Zhao, Dylan Ashley, Wenyi Wang, Dmitrii Khizbullin, Yunyang Xiong, Zechun Liu, Ernie Chang, Raghuraman Krishnamoorthi, Yuandong Tian, et al. [Paper]
- Reasoning with Language Model is Planning with World Model. EMNLP 2023. Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. [Paper]
- Reflexion: language agents with verbal reinforcement learning. NeurIPS 2023. Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. [Paper]
- Towards Reasoning in Large Language Models: A Survey. ACL Findings 2023. Jie Huang and Kevin Chen-Chuan Chang. [Paper]
- Let’s verify step by step. ICLR 2023. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. [Paper]
- FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. EMNLP 2023. Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. [Paper]
- SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models. ACL Findings 2024. Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, and Jing Shao. [Paper]
- GPTScore: Evaluate as You Desire. NAACL 2024. Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. [Paper]
- G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment. EMNLP 2023. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. [Paper]
- DHP Benchmark: Are LLMs Good NLG Evaluators? ArXiv preprint 2024. Yicheng Wang, Jiayi Yuan, Yu-Neng Chuang, Zhuoer Wang, Yingchi Liu, Mark Cusick, Param Kulkarni, Zhengping Ji, Yasser Ibrahim, and Xia Hu. [Paper]
- SocREval: Large Language Models with the Socratic Method for Reference-free Reasoning Evaluation. NAACL Findings 2024. Hangfeng He, Hongming Zhang, and Dan Roth. [Paper]
- Branch-Solve-Merge Improves Large Language Model Evaluation and Generation. NAACL 2024. Swarnadeep Saha, Omer Levy, Asli Celikyilmaz, Mohit Bansal, Jason Weston, and Xian Li. [Paper]
- HD-Eval: Aligning Large Language Model Evaluators Through Hierarchical Criteria Decomposition. ACL 2024. Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, and Qi Zhang. [Paper]
- Are LLM-based Evaluators Confusing NLG Quality Criteria? ACL 2024. Xinyu Hu, Mingqi Gao, Sen Hu, Yang Zhang, Yicheng Chen, Teng Xu, and Xiaojun Wan. [Paper]
- Large language models are not fair evaluators. ACL 2024. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. [Paper]
- Generative judge for evaluating alignment. ArXiv preprint 2023. Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. [Paper]
- Judgelm: Fine-tuned large language models are scalable judges. ArXiv preprint 2023. Lianghui Zhu, Xinggang Wang, and Xinlong Wang. [Paper]
- PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. [Paper]
- Aligning with human judgement: The role of pairwise preference in large language model evaluators. COLM 2024. Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulic, Anna Korhonen, and Nigel Collier. [Paper]
- G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment. EMNLP 2023. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. [Paper]
- DHP Benchmark: Are LLMs Good NLG Evaluators? ArXiv preprint 2024. Yicheng Wang, Jiayi Yuan, Yu-Neng Chuang, Zhuoer Wang, Yingchi Liu, Mark Cusick, Param Kulkarni, Zhengping Ji, Yasser Ibrahim, and Xia Hu. [Paper]
- LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models. NLP4ConvAI 2023. Yen-Ting Lin and Yun-Nung Chen. [Paper]
- CLAIR: Evaluating Image Captions with Large Language Models. EMNLP 2023. David Chan, Suzanne Petryk, Joseph Gonzalez, Trevor Darrell, and John Canny. [Paper]
- FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model. ACL 2024. Yebin Lee, Imseong Park, and Myungjoo Kang. [Paper]
- PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. ArXiv preprint 2023. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. [Paper]
- SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models. ACL Findings 2024. Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, and Jing Shao. [Paper]
- Offsetbias: Leveraging debiased data for tuning evaluators. ArXiv preprint 2024. Junsoo Park, Seungyeon Jwa, Meiying Ren, Daeyoung Kim, and Sanghyuk Choi. [Paper]
- Judgelm: Fine-tuned large language models are scalable judges. ArXiv preprint 2023. Lianghui Zhu, Xinggang Wang, and Xinlong Wang. [Paper]
- CritiqueLLM: Towards an Informative Critique Generation Model for Evaluation of Large Language Model Generation. ACL 2024. Pei Ke, Bosi Wen, Andrew Feng, Xiao Liu, Xuanyu Lei, Jiale Cheng, Shengyuan Wang, Aohan Zeng, Yuxiao Dong, Hongning Wang, et al. [Paper]
- INSTRUCTSCORE: Towards Explainable Text Generation Evaluation with Automatic Feedback. EMNLP 2023. Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Wang, and Lei Li. [Paper]
- Jade: A linguistics-based safety evaluation platform for llm. ArXiv preprint 2023. Mi Zhang, Xudong Pan, and Min Yang. [Paper]
- Evaluation Metrics in the Era of GPT-4: Reliably Evaluating Large Language Models on Sequence to Sequence Tasks. EMNLP 2023. Andrea Sottana, Bin Liang, Kai Zou, and Zheng Yuan. [Paper]
- On the humanity of conversational ai: Evaluating the psychological portrayal of llms. ICLR 2023. Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, and Michael Lyu. [Paper]
- Generative judge for evaluating alignment. ArXiv preprint 2023. Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. [Paper]
- Goal-Oriented Prompt Attack and Safety Evaluation for LLMs. ArXiv preprint 2023. Chengyuan Liu, Fubang Zhao, Lizhi Qing, Yangyang Kang, Changlong Sun, Kun Kuang, and Fei Wu. [Paper]
- Benchmarking Foundation Models with Language-Model-as-an-Examiner. NeurIPS 2023. Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, and Lei Hou. [Paper]
- EvalMORAAL: Interpretable Chain-of-Thought and LLM-as-Judge Evaluation for Moral Alignment in Large Language Models. ArXiv preprint 2025. Hadi Mohammadi, Anastasia Giachanou, and Ayoub Bagheri. [Paper]
- FLEUR: An Explainable Reference-Free Evaluation Metric for Image Captioning Using a Large Multimodal Model. ACL 2024. Yebin Lee, Imseong Park, and Myungjoo Kang. [Paper]
- G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment. EMNLP 2023. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. [Paper]
- DHP Benchmark: Are LLMs Good NLG Evaluators? ArXiv preprint 2024. Yicheng Wang, Jiayi Yuan, Yu-Neng Chuang, Zhuoer Wang, Yingchi Liu, Mark Cusick, Param Kulkarni, Zhengping Ji, Yasser Ibrahim, and Xia Hu. [Paper]
- TrustJudge: Inconsistencies of LLM-as-a-Judge and How to Alleviate Them. ArXiv preprint 2025. Yidong Wang, Yunze Song, Tingyuan Zhu, Xuanwang Zhang, Zhuohao Yu, Hao Chen, Chiyu Song, Qiufeng Wang, Cunxiang Wang, Zhen Wu, Xinyu Dai, Yue Zhang, Wei Ye, and Shikun Zhang. [Paper]
- TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models. EMNLP 2023. Zorik Gekhman, Jonathan Herzig, Roee Aharoni, Chen Elkind, and Idan Szpektor. [Paper]
- Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges. ArXiv preprint 2024. Aman Singh Thakur, Kartik Choudhary, Venkat Srinik Ramayapally, Sankaran Vaidyanathan, and Dieuwke Hupkes. [Paper]
- Benchmarking Foundation Models with Language-Model-as-an-Examiner. NeurIPS 2023. Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, and Lei Hou. [Paper]
- Aligning with human judgement: The role of pairwise preference in large language model evaluators. COLM 2024. Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulic, Anna Korhonen, and Nigel Collier. [Paper]
- MTBench & Chatbot Arena Conversations: Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]
- FairEval: Large language models are not fair evaluators. ACL 2024. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. [Paper]
- LLMBar: Evaluating Large Language Models at Evaluating Instruction Following. ArXiv preprint 2023. Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Goyal, and Danqi Chen. [Paper]
- MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark. ICML 2024. Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, and Lichao Sun. [Paper]
- CodeJudge-Eval: Can Large Language Models be Good Judges in Code Understanding? COLING 2025. Yuwei Zhao, Ziyang Luo, Yuchen Tian, Hongzhan Lin, Weixiang Yan, Annan Li, and Jing Ma. [Paper]
- KUDGE: LLM-as-a-Judge & Reward Model: What They Can and Cannot Do. ArXiv preprint 2024. Guijin Son, Hyunwoo Ko, Hoyoung Lee, Yewon Kim, and Seunghyeok Hong. [Paper]
- CALM: Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge. ArXiv preprint 2024. Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, et al. [Paper]
- LLMEval$^2$: Wider and deeper llm networks are fairer llm evaluators. ArXiv preprint 2023. Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. [Paper]
- Judging the Judges: A Systematic Investigation of Position Bias in Pairwise Comparative Assessments by LLMs. ArXiv preprint 2024. Lin Shi, Weicheng Ma, and Soroush Vosoughi. [Paper]
- Large language models are not fair evaluators. ACL 2024. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. [Paper]
- Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge. ArXiv preprint 2024. Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, et al. [Paper]
- An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Model is not a General Substitute for GPT-4. ArXiv preprint 2024. Hui Huang, Yingqi Qu, Xingyuan Bu, Hongli Zhou, Jing Liu, Muyun Yang, Bing Xu, and Tiejun Zhao. [Paper]
- Offsetbias: Leveraging debiased data for tuning evaluators. ArXiv preprint 2024. Junsoo Park, Seungyeon Jwa, Meiying Ren, Daeyoung Kim, and Sanghyuk Choi. [Paper]
- Verbosity Bias in Preference Labeling by Large Language Models. ArXiv preprint 2023. Keita Saito, Akifumi Wachi, Koki Wataoka, and Youhei Akimoto. [Paper]
- Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge. ArXiv preprint 2024. Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, et al. [Paper]
- Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. NeurIPS 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. [Paper]
- Humans or LLMs as the Judge? A Study on Judgement Bias. EMNLP 2024. Guiming Hardy Chen, Shunian Chen, Ziche Liu, Feng Jiang, and Benyou Wang. [Paper]
- Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models. ACL 2024. Abhishek Kumar, Sarfaroz Yunusov, and Ali Emami. [Paper]
- Examining Query Sentiment Bias Effects on Search Results in Large Language Models. ESSIR 2023. Alice Li and Luanne Sinnamon. [Paper]
- Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot LLM Assessment. EMNLP 2024. Vyas Raina, Adian Liusie, and Mark Gales. [Paper]
- Are LLM-Judges Robust to Expressions of Uncertainty? Investigating the effect of Epistemic Markers on LLM-based Evaluation. ArXiv preprint 2024. Dongryeol Lee, Yerin Hwang, Yongil Kim, Joonsuk Park, and Kyomin Jung. [Paper]
- Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates. ICLR 2025. Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Jing Jiang, and Min Lin. [Paper]
- Benchmarking Cognitive Biases in Large Language Models as Evaluators. ACL Findings 2024. Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, and Dongyeop Kang. [Paper]
- Baseline Defenses for Adversarial Attacks Against Aligned Language Models. ArXiv preprint 2023. Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, and Tom Goldstein. [Paper]
- Reference-Guided Verdict: LLMs-as-Judges in Automatic Evaluation of Free-Form Text. ArXiv preprint 2024. Sher Badshah and Hassan Sajjad. [Paper]
- Enhancing Annotated Bibliography Generation with LLM Ensembles. ArXiv preprint 2024. Sergio Bermejo. [Paper]
- Human-like summarization evaluation with chatgpt. ArXiv preprint 2023. Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. [Paper]
- Large Language Models are Diverse Role-Players for Summarization Evaluation. NLPCC 2023. Ning Wu, Ming Gong, Linjun Shou, Shining Liang, and Daxin Jiang. [Paper]
- Evaluating Hallucinations in Chinese Large Language Models. ArXiv preprint 2023. Qinyuan Cheng, Tianxiang Sun, Wenwei Zhang, Siyin Wang, Xiangyang Liu, Mozhi Zhang, Junliang He, Mianqiu Huang, Zhangyue Yin, Kai Chen, et al. [Paper]
- Balancing Speciality and Versatility: a Coarse to Fine Framework for Supervised Fine-tuning Large Language Model. ACL Findings 2024. Hengyuan Zhang, Yanru Wu, Dawei Li, Sak Yang, Rui Zhao, Yong Jiang, and Fei Tan. [Paper]
- Halu-J: Critique-Based Hallucination Judge. ArXiv preprint 2024. Binjie Wang, Steffi Chern, Ethan Chern, and Pengfei Liu. [Paper]
- MD-Judge & MCQ-Judge: SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models. ACL Findings 2024. Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, and Jing Shao. [Paper]
- SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal. ArXiv preprint 2024. Tinghao Xie, Xiangyu Qi, Yi Zeng, Yangsibo Huang, Udari Madhushani Sehwag, Kaixuan Huang, Luxi He, Boyi Wei, Dacheng Li, Ying Sheng, et al. [Paper]
- L-eval: Instituting standardized evaluation for long context language models. ACL 2024. Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. [Paper]
- LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks. ArXiv preprint 2024. Yushi Bai, Shangqing Tu, Jiajie Zhang, Hao Peng, Xiaozhi Wang, Xin Lv, Shulin Cao, Jiazheng Xu, Lei Hou, Yuxiao Dong, et al. [Paper]
- ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate. ICLR 2024. Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. [Paper]
- StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving. NeurIPS 2024. Chang Gao, Haiyun Jiang, Deng Cai, Shuming Shi, and Wai Lam. [Paper]
- Rationale-Aware Answer Verification by Pairwise Self-Evaluation. EMNLP 2024. Akira Kawabata and Saku Sugawara. [Paper]
- Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting. EMNLP 2023. Preethi Lahoti, Nicholas Blumm, Xiao Ma, Raghavendra Kotikalapudi, Sahitya Potluri, Qijun Tan, Hansa Srinivasan, Ben Packer, Ahmad Beirami, Alex Beutel, and Jilin Chen. [Paper]
- Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. EMNLP 2024. Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. [Paper]
- SMoA: Improving Multi-agent Large Language Models with Sparse Mixture-of-Agents. ArXiv preprint 2024. Dawei Li, Zhen Tan, Peijia Qian, Yifan Li, Kumar Satvik Chaudhary, Lijie Hu, and Jiayi Shen. [Paper]
- Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning. ICLR 2023. Antonia Creswell, Murray Shanahan, and Irina Higgins. [Paper]
- Improving Model Factuality with Fine-grained Critique-based Evaluator. ArXiv preprint 2024. Yiqing Xie, Wenxuan Zhou, Pradyot Prakash, Di Jin, Yuning Mao, Quintin Fettes, Arya Talebzadeh, Sinong Wang, Han Fang, Carolyn Rose, et al. [Paper]
- Let’s verify step by step. ICLR 2023. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. [Paper]
- Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning. ArXiv preprint 2024. Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. [Paper]
- Reasoning with Language Model is Planning with World Model. EMNLP 2023. Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. [Paper]
- Graph of Thoughts: Solving Elaborate Problems with Large Language Models. AAAI 2024. Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler. [Paper]
- Critique-out-loud reward models. ArXiv preprint 2024. Zachary Ankner, Mansheej Paul, Brandon Cui, Jonathan D Chang, and Prithviraj Ammanabrolu. [Paper]
- CriticEval: Evaluating Large-scale Language Model as Critic. NeurIPS 2024. Tian Lan, Wenwei Zhang, Chen Xu, Heyan Huang, Dahua Lin, Kai Chen, and Xian-Ling Mao. [Paper]
- MCQG-SRefine: Multiple Choice Question Generation and Evaluation with Iterative Self-Critique, Correction, and Comparison Feedback. ArXiv preprint 2024. Zonghai Yao, Aditya Parashar, Huixue Zhou, Won Seok Jang, Feiyun Ouyang, Zhichao Yang, and Hong Yu. [Paper]
- A Multi-AI Agent System for Autonomous Optimization of Agentic AI Solutions via Iterative Refinement and LLM-Driven Feedback Loops. ArXiv preprint 2024. Kamer Ali Yuksel and Hassan Sawaf. [Paper]
- ReAct: Synergizing Reasoning and Acting in Language Models. ICLR 2023. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. [Paper]
- Auto-GPT for Online Decision Making: Benchmarks and Additional Opinions. ArXiv preprint 2023. Hui Yang, Sifu Yue, and Yunzhong He. [Paper]
- LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving. ArXiv preprint 2023. Hao Sha, Yao Mu, Yuxuan Jiang, Li Chen, Chenfeng Xu, Ping Luo, Shengbo Eben Li, Masayoshi Tomizuka, Wei Zhan, and Mingyu Ding. [Paper]
- SELF-DISCOVER: Large Language Models Self-Compose Reasoning Structures. NeurIPS 2024. Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V Le, Ed H Chi, Denny Zhou, Swaroop Mishra, and Huaixiu Steven Zheng. [Paper]
- Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels. NAACL 2024. Honglei Zhuang, Zhen Qin, Kai Hui, Junru Wu, Le Yan, Xuanhui Wang, and Michael Bendersky. [Paper]
- Zero-Shot Listwise Document Reranking with a Large Language Model. ArXiv preprint 2023. Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. [Paper]
- A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models. SIGIR 2024. Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. [Paper]
- Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models. NAACL 2024. Raphael Tang, Crystina Zhang, Xueguang Ma, Jimmy Lin, and Ferhan Ture. [Paper]
- Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting. NAACL Findings 2024. Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Le Yan, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. [Paper]
- Self-Retrieval: Building an Information Retrieval System with One Large Language Model. ArXiv preprint 2024. Qiaoyu Tang, Jiawei Chen, Bowen Yu, Yaojie Lu, Cheng Fu, Haiyang Yu, Hongyu Lin, Fei Huang, Ben He, Xianpei Han, et al. [Paper]
- Evaluating RAG-Fusion with RAGElo: an Automated Elo-based Framework. LLM4Eval @ SIGIR 2024. Zackary Rackauckas, Arthur Câmara, and Jakub Zavrel. [Paper]
- Are Large Language Models Good at Utility Judgments? SIGIR 2024. Hengran Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, and Xueqi Cheng. [Paper]
- BioRAG: A RAG-LLM Framework for Biological Question Reasoning. ArXiv preprint 2024. Chengrui Wang, Qingqing Long, Xiao Meng, Xunxin Cai, Chengjun Wu, Zhen Meng, Xuezhi Wang, and Yuanchun Zhou. [Paper]
- DALK: Dynamic Co-Augmentation of LLMs and KG to answer Alzheimer’s Disease Questions with Scientific Literature. EMNLP Findings 2024. Dawei Li, Shu Yang, Zhen Tan, Jae Young Baik, Sunkwon Yun, Joseph Lee, Aaron Chacko, Bojian Hou, Duy Duong-Tran, Ying Ding, et al. [Paper]
- Improving medical reasoning through retrieval and self-reflection with retrieval-augmented large language models. Bioinformatics 2024. Minbyul Jeong, Jiwoong Sohn, Mujeen Sung, and Jaewoo Kang. [Paper]
- Academically intelligent LLMs are not necessarily socially intelligent. ArXiv preprint 2024. Ruoxi Xu, Hongyu Lin, Xianpei Han, Le Sun, and Yingfei Sun. [Paper]
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents. ICLR 2024. Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, and Maarten Sap. [Paper]
- MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark. ICML 2024. Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, and Lichao Sun. [Paper]
- AlignMMBench: Evaluating Chinese Multimodal Alignment in Large Vision-Language Models. ArXiv preprint 2024. Yuhang Wu, Wenmeng Yu, Yean Cheng, Yan Wang, Xiaohan Zhang, Jiazheng Xu, Ming Ding, and Yuxiao Dong. [Paper]
- Multi-Modal and Multi-Agent Systems Meet Rationality: A Survey. ICML Workshop on LLMs and Cognition 2024. Bowen Jiang, Yangxinyu Xie, Xiaomeng Wang, Weijie J Su, Camillo Jose Taylor, and Tanwi Mallick. [Paper]
- LLaVA-Critic: Learning to Evaluate Multimodal Models. ArXiv preprint 2024. Tianyi Xiong, Xiyao Wang, Dong Guo, Qinghao Ye, Haoqi Fan, Quanquan Gu, Heng Huang, and Chunyuan Li. [Paper]
- Automated Evaluation of Large Vision-Language Models on Self-driving Corner Cases. ArXiv preprint 2024. Kai Chen, Yanze Li, Wenhua Zhang, Yanxin Liu, Pengxiang Li, Ruiyuan Gao, Lanqing Hong, Meng Tian, Xinhai Zhao, Zhenguo Li, et al. [Paper]
- Revolutionizing Finance with LLMs: An Overview of Applications and Insights. ArXiv preprint 2024. Huaqin Zhao, Zhengliang Liu, Zihao Wu, Yiwei Li, Tianze Yang, Peng Shu, Shaochen Xu, Haixing Dai, Lin Zhao, Gengchen Mai, et al. [Paper]
- Mixing It Up: The Cocktail Effect of Multi-Task Fine-Tuning on LLM Performance -- A Case Study in Finance. ArXiv preprint 2024. Meni Brief, Oded Ovadia, Gil Shenderovitz, Noga Ben Yoash, Rachel Lemberg, and Eitam Sheetrit. [Paper]
- FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making. NeurIPS 2024. Yangyang Yu, Zhiyuan Yao, Haohang Li, Zhiyang Deng, Yupeng Cao, Zhi Chen, Jordan W Suchow, Rong Liu, Zhenyu Cui, Denghui Zhang, et al. [Paper]
- UCFE: A User-Centric Financial Expertise Benchmark for Large Language Models. ArXiv preprint 2024. Yuzhe Yang, Yifei Zhang, Yan Hu, Yilin Guo, Ruoli Gan, Yueru He, Mingcong Lei, Xiao Zhang, Haining Wang, Qianqian Xie, et al. [Paper]
- Cracking the Code: Multi-domain LLM Evaluation on Real-World Professional Exams in Indonesia. ArXiv preprint 2024. Fajri Koto. [Paper]
- Constructing Domain-Specific Evaluation Sets for LLM-as-a-judge. CustomNLP4U Workshop 2024. Ravi Raju, Swayambhoo Jain, Bo Li, Jonathan Li, and Urmish Thakkar. [Paper]
- QuantAgent: Seeking Holy Grail in Trading by Self-Improving Large Language Model. ArXiv preprint 2024. Saizhuo Wang, Hang Yuan, Lionel M. Ni, and Jian Guo. [Paper]
- GPT classifications, with application to credit lending. Machine Learning with Applications 2024. Golnoosh Babaei and Paolo Giudici. [Paper]
- Design and Implementation of an LLM system to Improve Response Time for SMEs Technology Credit Evaluation. IJASC 2023. Sungwook Yoon. [Paper]
- Leveraging Large Language Models for Relevance Judgments in Legal Case Retrieval. ArXiv preprint 2024. Shengjie Ma, Chong Chen, Qi Chu, and Jiaxin Mao. [Paper]
- (A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice. FAccT 2024. Inyoung Cheong, King Xia, KJ Kevin Feng, Quan Ze Chen, and Amy X Zhang. [Paper]
- Retrieval-based Evaluation for LLMs: A Case Study in Korean Legal QA. NLLP Workshop 2023. Cheol Ryu, Seolhwa Lee, Subeen Pang, Chanyeol Choi, Hojun Choi, Myeonggee Min, and Jy-Yong Sohn. [Paper]
- LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models. NeurIPS 2023. Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya K, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel N. Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John J. Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael A. Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, and Zehua Li. [Paper]
- LexEval: A Comprehensive Chinese Legal Benchmark for Evaluating Large Language Models. NeurIPS 2024. Haitao Li, You Chen, Qingyao Ai, Yueyue Wu, Ruizhe Zhang, and Yiqun Liu. [Paper]
- Evaluation Ethics of LLMs in Legal Domain. ArXiv preprint 2024. Ruizhe Zhang, Haitao Li, Yueyue Wu, Qingyao Ai, Yiqun Liu, Min Zhang, and Shaoping Ma. [Paper]
- LLMs in medicine: The need for advanced evaluation systems for disruptive technologies. The Innovation 2024. Yi-Da Tang, Er-Dan Dong, and Wen Gao. [Paper]
- Artificial intelligence for geoscience: Progress, challenges, and perspectives. The Innovation 2024. Tianjie Zhao, Sheng Wang, Chaojun Ouyang, Min Chen, Chenying Liu, Jin Zhang, Long Yu, Fei Wang, Yong Xie, Jun Li, et al. [Paper]
- Harnessing the power of artificial intelligence to combat infectious diseases: Progress, challenges, and future outlook. The Innovation Medicine 2024. Hang-Yu Zhou, Yaling Li, Jia-Ying Li, Jing Meng, and Aiping Wu. [Paper]
- Comparing Two Model Designs for Clinical Note Generation; Is an LLM a Useful Evaluator of Consistency? NAACL Findings 2024. Nathan Brake and Thomas Schaaf. [Paper]
- Towards Leveraging Large Language Models for Automated Medical Q&A Evaluation. ArXiv preprint 2024. Jack Krolik, Herprit Mahal, Feroz Ahmad, Gaurav Trivedi, and Bahador Saket. [Paper]
- MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts. ICLR 2024. Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. [Paper]
- Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. ArXiv preprint 2023. Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. [Paper]
- Solving Math Word Problems via Cooperative Reasoning induced Language Models. ACL 2023. Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yujiu Yang. [Paper]
- LLMs as Evaluators: A Novel Approach to Evaluate Bug Report Summarization. ArXiv preprint 2024. Abhishek Kumar, Sonia Haiduc, Partha Pratim Das, and Partha Pratim Chakrabarti. [Paper]
- Automated Essay Scoring and Revising Based on Open-Source Large Language Models. IEEE Transactions on Learning Technologies 2024. Yishen Song, Qianta Zhu, Huaibo Wang, and Qinhua Zheng. [Paper]
- LLM-Mod: Can Large Language Models Assist Content Moderation? CHI EA 2024. Mahi Kolla, Siddharth Salunkhe, Eshwar Chandrasekharan, and Koustuv Saha. [Paper]
- Can LLM be a Personalized Judge? EMNLP Findings 2024. Yijiang River Dong, Tiancheng Hu, and Nigel Collier. [Paper]
- Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback. EMNLP 2023. Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher Manning. [Paper]
- Prompt Packer: Deceiving LLMs through Compositional Instruction with Hidden Attacks. ArXiv preprint 2023. Shuyu Jiang, Xingshu Chen, and Rui Tang. [Paper]
- "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. CCS 2024. Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang. [Paper]
- Universal and Transferable Adversarial Attacks on Aligned Language Models. ArXiv preprint 2023. Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson. [Paper]



