
Commit f0f9286

Update README.md
1 parent 9ca3f3a commit f0f9286

File tree

1 file changed: +1 -1 lines changed


README.md (+1 -1)
@@ -7,7 +7,7 @@ Domain-specific embeddings can significantly improve the quality of vector representations
 
 ## Contents
 
-- `sentence-transformer/`: This directory contains a Jupyter notebook demonstrating how to fine-tune a [sentence-transformer](https://www.sbert.net/) embedding model using the [Multiple Negatives Ranking Loss function](https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss), which is recommended when your training data contains only positive pairs, for example, pairs of paraphrases, pairs of duplicate questions, pairs of (query, response), or pairs of (source_language, target_language).
+- `sentence-transformer/multiple-negatives-ranking-loss/`: This directory contains a Jupyter notebook demonstrating how to fine-tune a [sentence-transformer](https://www.sbert.net/) embedding model using the [Multiple Negatives Ranking Loss function](https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss), which is recommended when your training data contains only positive pairs, for example, pairs of paraphrases, pairs of duplicate questions, pairs of (query, response), or pairs of (source_language, target_language).
 
 We are using the Multiple Negatives Ranking Loss function because we are utilizing [Bedrock FAQ](https://aws.amazon.com/bedrock/faqs/) as the training data, which consists of pairs of questions and answers.
 
 The code in this directory is used in the AWS blog post "Improve RAG accuracy with fine-tuned embedding models on SageMaker".
