
Commit 13408c2

Update AITG-APP-10_Testing_for_Content_Bias.md

1 parent 68fdc4b

1 file changed: +2 −2 lines

Document/content/tests/AITG-APP-10_Testing_for_Content_Bias.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -83,8 +83,8 @@ AI-generated outputs must:
 ### References
 - OWASP Top 10 for LLM Applications 2025. "LLM09:2025 Misinformation." OWASP, 2025. [Link](https://genai.owasp.org/llmrisk/llm092025-misinformation/)
 - Cognitive Bias in Decision-Making with LLMs - Echterhoff, Jessica, Yao Liu, Abeer Alessa, Julian McAuley, and Zexue He - [arXiv preprint arXiv:2403.00811 (2024)](https://arxiv.org/abs/2403.00811)
-- Bias in Large Language Models: Origin, Evaluation, and Mitigation - Guo, Yufei, Muzhe Guo, Juntao Su, Zhou Yang, Mengqiu Zhu, Hongfei Li, Mengyang Qiu, and Shuo Shuo Liu. [arXiv preprint arXiv:2411.10915](https://arxiv.org/abs/2411.10915)
-- On Formalizing Fairness in Prediction with Machine Learning arXiv preprint - Gajane, Pratik, and Mykola Pechenizkiy [arXiv:1710.0318](https://arxiv.org/abs/1710.03184)
+- Bias in Large Language Models: Origin, Evaluation, and Mitigation - [arXiv preprint arXiv:2411.10915](https://arxiv.org/abs/2411.10915)
+- On Formalizing Fairness in Prediction with Machine Learning - [arXiv:1710.0318](https://arxiv.org/abs/1710.03184)
 - LLMs recognise bias but also reproduce harmful stereotypes: an analysis of bias in leading LLMs - [Giskard](https://www.giskard.ai/knowledge/llms-recognise-bias-but-also-reproduce-harmful-stereotypes)
 - HELM-Safety bias-related tests - Stanford University - [Link](https://crfm.stanford.edu/helm/safety/latest/)
 - BIG-Bench - bias-related tests - [Link](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/bias_from_probabilities)
```
