Changes from 1 commit
2 changes: 1 addition & 1 deletion examples/Animated_Story_Video_Generation_gemini.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -280,7 +280,7 @@
"# Create a client for text generation using Gemini.\n",
"MODEL = \"models/gemini-2.5-flash-lite\"\n",
"# Create a client for image generation using Imagen.\n",
"IMAGE_MODEL_ID = \"imagen-3.0-generate-002\"\n"
"IMAGE_MODEL_ID = \"imagen-4.0-generate-001\"\n"
]
},
{
Expand Down
4 changes: 2 additions & 2 deletions examples/Anomaly_detection_with_embeddings.ipynb
Expand Up @@ -1214,7 +1214,7 @@
"source": [
"### API changes to Embeddings with model embedding-001\n",
"\n",
"For the embeddings model, `text-embedding-004`, there is a task type parameter and the optional title (only valid with task_type=`RETRIEVAL_DOCUMENT`).\n",
"For the embeddings model, `gemini-embedding-001`, there is a task type parameter and the optional title (only valid with task_type=`RETRIEVAL_DOCUMENT`).\n",
"\n",
"These parameters apply only to the embeddings models. The task types are:\n",
"\n",
Expand Down Expand Up @@ -1276,7 +1276,7 @@
"\n",
"\n",
"def create_embeddings(df):\n",
" MODEL_ID = \"text-embedding-004\" # @param [\"embedding-001\",\"text-embedding-004\"] {allow-input: true}\n",
" MODEL_ID = \"gemini-embedding-001\" # @param [\"embedding-001\",\"gemini-embedding-001\"] {allow-input: true}\n",
" model = f\"models/{MODEL_ID}\"\n",
" embed_fn = make_embed_text_fn(model)\n",
"\n",
Expand Down
2 changes: 1 addition & 1 deletion examples/Browser_as_a_tool.ipynb
Expand Up @@ -155,7 +155,7 @@
"\n",
"client = genai.Client(api_key=GOOGLE_API_KEY)\n",
"\n",
"LIVE_MODEL = 'gemini-2.5-flash-native-audio-preview-09-2025' # @param ['gemini-2.0-flash-live-001', 'gemini-live-2.5-flash-preview', 'gemini-2.5-flash-native-audio-preview-09-2025'] {allow-input: true, isTemplate: true}\n",
"LIVE_MODEL = 'gemini-2.5-flash-native-audio-preview-09-2025' # @param ['gemini-2.5-flash-native-audio-preview-12-2025', 'gemini-2.5-flash-native-audio-preview-12-2025', 'gemini-2.5-flash-native-audio-preview-09-2025'] {allow-input: true, isTemplate: true}\n",
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry for 'gemini-2.5-flash-native-audio-preview-12-2025'. Please remove the duplicate to avoid confusion and keep the list of selectable models clean.

LIVE_MODEL = 'gemini-2.5-flash-native-audio-preview-09-2025'  # @param ['gemini-2.5-flash-native-audio-preview-12-2025', 'gemini-2.5-flash-native-audio-preview-09-2025'] {allow-input: true, isTemplate: true}
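The duplicate-entry problem flagged in this comment recurs throughout the PR, and is easy to catch mechanically. A small, hypothetical sketch that dedupes a model list while preserving first-seen order (not part of the PR itself):

```python
def dedupe_models(models: list[str]) -> list[str]:
    """Remove duplicate model IDs, keeping the first occurrence of each."""
    seen: set[str] = set()
    result: list[str] = []
    for model in models:
        if model not in seen:
            seen.add(model)
            result.append(model)
    return result
```

Running this over each `@param` list before committing would have caught every duplicate the reviewer flags below.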

"MODEL = 'gemini-2.5-flash' # @param ['gemini-2.5-flash'] {allow-input: true, isTemplate: true}"
]
},
Expand Down
4 changes: 2 additions & 2 deletions examples/Classify_text_with_embeddings.ipynb
Expand Up @@ -163,7 +163,7 @@
"output_type": "stream",
"text": [
"models/embedding-001\n",
"models/text-embedding-004\n",
"models/gemini-embedding-001\n",
"models/gemini-embedding-exp-03-07\n",
"models/gemini-embedding-exp\n",
"models/gemini-embedding-001\n"
Expand Down Expand Up @@ -193,7 +193,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"text-embedding-004\"] {\"allow-input\":true, isTemplate: true}"
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"gemini-embedding-001\"] {\"allow-input\":true, isTemplate: true}"
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry: "gemini-embedding-001". This is redundant. Please remove the duplicate.

MODEL_ID = "gemini-embedding-001" # @param ["gemini-embedding-001"] {"allow-input":true, isTemplate: true}

]
},
{
Expand Down
8 changes: 4 additions & 4 deletions examples/Google_IO2025_Live_Coding.ipynb
Expand Up @@ -192,7 +192,7 @@
"source": [
"### Select the Imagen3 model to be used\n",
"\n",
"The `imagen-3.0-generate-002` model is specifically designed for high-quality image generation from textual prompts."
"The `imagen-4.0-generate-001` model is specifically designed for high-quality image generation from textual prompts."
]
},
{
Expand All @@ -203,7 +203,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"imagen-3.0-generate-002\" # @param {isTemplate: true}"
"MODEL_ID = \"imagen-4.0-generate-001\" # @param {isTemplate: true}"
]
},
{
Expand Down Expand Up @@ -301,7 +301,7 @@
"## Generating images with Gemini 2.0 Flash image out model (experimental)\n",
"\n",
"\n",
"The `gemini-2.0-flash-preview-image-generation model` extends Gemini's multimodal capabilities to include conversational image generation and editing. This model can generate images along with text responses, making it highly versatile for mixed-media content creation."
"The `gemini-2.5-flash-image` model extends Gemini's multimodal capabilities to include conversational image generation and editing. This model can generate images along with text responses, making it highly versatile for mixed-media content creation."
]
},
{
Expand All @@ -323,7 +323,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-2.0-flash-preview-image-generation\""
"MODEL_ID = \"gemini-2.5-flash-image\""
]
},
{
Expand Down
Expand Up @@ -470,7 +470,7 @@
"client = genai.Client()\n",
"\n",
"response = client.models.generate_content(\n",
" model=\"gemini-2.0-flash\",\n",
" model=\"gemini-2.5-flash\",\n",
" contents=\"Extract invoice details: Invoice #12345 dated 2024-01-15 from Acme Corp for $1,250.00\",\n",
" config=types.GenerateContentConfig(\n",
" response_mime_type=\"application/json\",\n",
Expand Down
6 changes: 3 additions & 3 deletions examples/Search_Wikipedia_using_ReAct.ipynb
Expand Up @@ -55,7 +55,7 @@
"id": "sdkuZY1IdRal"
},
"source": [
"This notebook is a minimal implementation of [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) with the Google `gemini-2.0-flash` model. You'll use ReAct prompting to configure a model to search Wikipedia to find the answer to a user's question.\n"
"This notebook is a minimal implementation of [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) with the Google `gemini-2.5-flash` model. You'll use ReAct prompting to configure a model to search Wikipedia to find the answer to a user's question.\n"
]
},
{
Expand Down Expand Up @@ -854,7 +854,7 @@
}
],
"source": [
"gemini_ReAct_chat = ReAct(model='gemini-2.0-flash', ReAct_prompt='model_instructions.txt')\n",
"gemini_ReAct_chat = ReAct(model='gemini-2.5-flash', ReAct_prompt='model_instructions.txt')\n",
"# Note: try different combinations of generation_config parameters for varied results\n",
"gemini_ReAct_chat(\"What are the total of ages of the main trio from the new Percy Jackson and the Olympians TV series in real life?\", temperature=0.2)"
]
Expand All @@ -865,7 +865,7 @@
"id": "ZIfeyyI6hoIE"
},
"source": [
"Now, try asking the same question to `gemini-2.0-flash` model without the ReAct prompt."
"Now, try asking the same question to `gemini-2.5-flash` model without the ReAct prompt."
]
},
{
Expand Down
4 changes: 2 additions & 2 deletions examples/Search_reranking_using_embeddings.ipynb
Expand Up @@ -313,7 +313,7 @@
" search_history.add(search_term) # add to search history\n",
"\n",
" try:\n",
" # extract the relevant data by using `gemini-2.0-flash` model\n",
" # extract the relevant data by using `gemini-2.5-flash` model\n",
" page = wikipedia.page(search_term, auto_suggest=False)\n",
" url = page.url\n",
" print(f\"Information Source: {url}\")\n",
Expand Down Expand Up @@ -1190,7 +1190,7 @@
},
"outputs": [],
"source": [
"EMBEDDINGS_MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"text-embedding-004\"] {\"allow-input\": true, \"isTemplate\": true}\n",
"EMBEDDINGS_MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"gemini-embedding-001\"] {\"allow-input\": true, \"isTemplate\": true}\n",
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry: "gemini-embedding-001". This is redundant. Please remove the duplicate.

EMBEDDINGS_MODEL_ID = "gemini-embedding-001"  # @param ["gemini-embedding-001"] {"allow-input": true, "isTemplate": true}

"\n",
"def get_embeddings(content: list[str]) -> np.ndarray:\n",
" embeddings = client.models.embed_content(\n",
Expand Down
4 changes: 2 additions & 2 deletions examples/Story_Writing_with_Prompt_Chaining.ipynb
Expand Up @@ -2133,7 +2133,7 @@
"source": [
"Language models like Gemini process text in units called tokens. For Gemini models, each token is equivalent to about 4 characters.\n",
"\n",
"`gemini-2.0-flash` has an output limit of 8192 tokens per generation call. This means that each individual prompt response cannot exceed this limit. By using iterative generation, you can create a story that is much longer than 8192 tokens by building it piece by piece.\n",
"`gemini-2.5-flash` has an output limit of 8192 tokens per generation call. This means that each individual prompt response cannot exceed this limit. By using iterative generation, you can create a story that is much longer than 8192 tokens by building it piece by piece.\n",
"\n",
"Let's see how many tokens the final story is. Is it longer than 8192 tokens?"
]
Expand All @@ -2155,7 +2155,7 @@
],
"source": [
"# Check the number of tokens in the final story\n",
"# gemini-2.0-flash output token limit is 8192\n",
"# gemini-2.5-flash output token limit is 8192\n",
"total_tokens=client.models.count_tokens(\n",
" model=MODEL_ID, \n",
" contents=final,\n",
Expand Down
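The rule of thumb cited in this hunk (roughly 4 characters per token, 8192-token output limit) supports a quick offline estimate before calling `count_tokens`. A rough sketch — this is an approximation only, not the API's actual tokenizer:

```python
OUTPUT_TOKEN_LIMIT = 8192  # per-call output limit cited in the notebook


def estimate_tokens(text: str) -> int:
    """Rough estimate using the ~4-characters-per-token rule of thumb."""
    return max(1, len(text) // 4)


def fits_in_one_call(text: str) -> bool:
    """Check whether text plausibly fits within one generation call's output."""
    return estimate_tokens(text) <= OUTPUT_TOKEN_LIMIT
```

For an exact count, the notebook's `client.models.count_tokens(...)` call remains the authoritative source; this estimate is only useful for deciding when iterative generation is likely needed.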
4 changes: 2 additions & 2 deletions examples/Tag_and_caption_images.ipynb
Expand Up @@ -230,7 +230,7 @@
"from PIL import Image as PILImage\n",
"import time\n",
"\n",
"MODEL_ID='gemini-2.0-flash' # @param [\"gemini-2.5-flash-lite\", \"gemini-2.5-flash\", \"gemini-2.5-pro\",\"gemini-3-pro-preview\"] {\"allow-input\":true, isTemplate: true}\n",
"MODEL_ID='gemini-2.5-flash' # @param [\"gemini-2.5-flash-lite\", \"gemini-2.5-flash\", \"gemini-2.5-pro\",\"gemini-3-pro-preview\"] {\"allow-input\":true, isTemplate: true}\n",
"\n",
"# a helper function for calling\n",
"\n",
Expand Down Expand Up @@ -443,7 +443,7 @@
"source": [
"import pandas as pd\n",
"\n",
"EMBEDDINGS_MODEL_ID = \"embedding-001\" # @param [\"embedding-001\", \"text-embedding-004\",\"gemini-embedding-exp-03-07\"] {\"allow-input\":true, isTemplate: true}\n",
"EMBEDDINGS_MODEL_ID = \"embedding-001\" # @param [\"embedding-001\", \"gemini-embedding-001\",\"gemini-embedding-exp-03-07\"] {\"allow-input\":true, isTemplate: true}\n",
"\n",
"def embed(text):\n",
" embedding = client.models.embed_content(\n",
Expand Down
2 changes: 1 addition & 1 deletion examples/Talk_to_documents_with_embeddings.ipynb
Expand Up @@ -176,7 +176,7 @@
" prototype with generative AI applications\n",
"\"\"\"\n",
"\n",
"EMBEDDING_MODEL_ID = MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"text-embedding-004\"] {\"allow-input\": true, \"isTemplate\": true}\n",
"EMBEDDING_MODEL_ID = MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"gemini-embedding-001\"] {\"allow-input\": true, \"isTemplate\": true}\n",
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry: "gemini-embedding-001". This is redundant. Please remove the duplicate.

EMBEDDING_MODEL_ID = MODEL_ID = "gemini-embedding-001"  # @param ["gemini-embedding-001"] {"allow-input": true, "isTemplate": true}

"embedding = client.models.embed_content(\n",
" model=EMBEDDING_MODEL_ID,\n",
" contents=sample_text,\n",
Expand Down
4 changes: 2 additions & 2 deletions examples/anomaly_detection.ipynb
Expand Up @@ -183,7 +183,7 @@
"output_type": "stream",
"text": [
"models/embedding-001\n",
"models/text-embedding-004\n",
"models/gemini-embedding-001\n",
"models/gemini-embedding-exp-03-07\n",
"models/gemini-embedding-exp\n",
"models/gemini-embedding-001\n"
Expand Down Expand Up @@ -213,7 +213,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"text-embedding-004\"] {\"allow-input\":true, isTemplate: true}"
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"gemini-embedding-001\"] {\"allow-input\":true, isTemplate: true}"
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry: "gemini-embedding-001". This is redundant. Please remove the duplicate.

MODEL_ID = "gemini-embedding-001" # @param ["gemini-embedding-001"] {"allow-input":true, isTemplate: true}

]
},
{
Expand Down
4 changes: 2 additions & 2 deletions examples/chromadb/Vectordb_with_chroma.ipynb
Expand Up @@ -195,7 +195,7 @@
"output_type": "stream",
"text": [
"models/embedding-001\n",
"models/text-embedding-004\n",
"models/gemini-embedding-001\n",
"models/gemini-embedding-exp-03-07\n",
"models/gemini-embedding-exp\n",
"models/gemini-embedding-001\n"
Expand Down Expand Up @@ -312,7 +312,7 @@
"\n",
"class GeminiEmbeddingFunction(EmbeddingFunction):\n",
" def __call__(self, input: Documents) -> Embeddings:\n",
" EMBEDDING_MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"text-embedding-004\"] {\"allow-input\": true, \"isTemplate\": true}\n",
" EMBEDDING_MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"gemini-embedding-001\"] {\"allow-input\": true, \"isTemplate\": true}\n",
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry: "gemini-embedding-001". This is redundant. Please remove the duplicate.

    EMBEDDING_MODEL_ID = "gemini-embedding-001"  # @param ["gemini-embedding-001"] {"allow-input": true, "isTemplate": true}

" title = \"Custom query\"\n",
" response = client.models.embed_content(\n",
" model=EMBEDDING_MODEL_ID,\n",
Expand Down
4 changes: 2 additions & 2 deletions examples/clustering_with_embeddings.ipynb
Expand Up @@ -187,7 +187,7 @@
"output_type": "stream",
"text": [
"models/embedding-001\n",
"models/text-embedding-004\n",
"models/gemini-embedding-001\n",
"models/gemini-embedding-exp-03-07\n",
"models/gemini-embedding-exp\n",
"models/gemini-embedding-001\n"
Expand Down Expand Up @@ -217,7 +217,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"text-embedding-004\"] {\"allow-input\":true, isTemplate: true}"
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"gemini-embedding-001\"] {\"allow-input\":true, isTemplate: true}"
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry: "gemini-embedding-001". This is redundant. Please remove the duplicate.

MODEL_ID = "gemini-embedding-001" # @param ["gemini-embedding-001"] {"allow-input":true, isTemplate: true}

]
},
{
Expand Down
4 changes: 2 additions & 2 deletions examples/document_search.ipynb
Expand Up @@ -179,7 +179,7 @@
"output_type": "stream",
"text": [
"models/embedding-001\n",
"models/text-embedding-004\n",
"models/gemini-embedding-001\n",
"models/gemini-embedding-exp-03-07\n",
"models/gemini-embedding-exp\n",
"models/gemini-embedding-001\n"
Expand Down Expand Up @@ -209,7 +209,7 @@
},
"outputs": [],
"source": [
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"text-embedding-004\"] {\"allow-input\":true, isTemplate: true}"
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"gemini-embedding-001\"] {\"allow-input\":true, isTemplate: true}"
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry: "gemini-embedding-001". This is redundant. Please remove the duplicate.

MODEL_ID = "gemini-embedding-001" # @param ["gemini-embedding-001"] {"allow-input":true, isTemplate: true}

]
},
{
Expand Down
4 changes: 2 additions & 2 deletions examples/langchain/Gemini_LangChain_QA_Chroma_WebLoad.ipynb
Expand Up @@ -409,7 +409,7 @@
"### Initialize Gemini\n",
"\n",
"You must import `ChatGoogleGenerativeAI` from LangChain to initialize your model.\n",
" In this example, you will use **gemini-2.0-flash**, as it supports text summarization. To know more about the text model, read Google AI's [language documentation](https://ai.google.dev/models/gemini).\n",
" In this example, you will use **gemini-2.5-flash**, as it supports text summarization. To know more about the text model, read Google AI's [language documentation](https://ai.google.dev/models/gemini).\n",
"\n",
"You can configure the model parameters such as ***temperature*** or ***top_p***, by passing the appropriate values when initializing the `ChatGoogleGenerativeAI` LLM. To learn more about the parameters and their uses, read Google AI's [concepts guide](https://ai.google.dev/docs/concepts#model_parameters)."
]
Expand Down Expand Up @@ -516,7 +516,7 @@
"# the chain.\n",
"# 3. The `context` and `question` are then passed to the prompt where they\n",
"# are populated in the respective variables.\n",
"# 4. This prompt is then passed to the LLM (`gemini-2.0-flash`).\n",
"# 4. This prompt is then passed to the LLM (`gemini-2.5-flash`).\n",
"# 5. Output from the LLM is passed through an output parser\n",
"# to structure the model's response.\n",
"rag_chain = (\n",
Expand Down
4 changes: 2 additions & 2 deletions examples/langchain/Gemini_LangChain_QA_Pinecone_WebLoad.ipynb
Expand Up @@ -484,7 +484,7 @@
"### Initialize Gemini\n",
"\n",
"You must import `ChatGoogleGenerativeAI` from LangChain to initialize your model.\n",
" In this example, you will use **gemini-2.0-flash**, as it supports text summarization. To know more about the text model, read Google AI's [language documentation](https://ai.google.dev/models/gemini).\n",
" In this example, you will use **gemini-2.5-flash**, as it supports text summarization. To know more about the text model, read Google AI's [language documentation](https://ai.google.dev/models/gemini).\n",
"\n",
"You can configure the model parameters such as ***temperature*** or ***top_p***, by passing the appropriate values when initializing the `ChatGoogleGenerativeAI` LLM. To learn more about the parameters and their uses, read Google AI's [concepts guide](https://ai.google.dev/docs/concepts#model_parameters)."
]
Expand Down Expand Up @@ -592,7 +592,7 @@
"# 2. Use the `RunnablePassthrough` option to provide question during invoke.\n",
"# 3. The `context` and `question` are then passed to the prompt and\n",
"# input variables in the prompt are populated.\n",
"# 4. The prompt is then passed to the LLM (`gemini-2.0-flash`).\n",
"# 4. The prompt is then passed to the LLM (`gemini-2.5-flash`).\n",
"# 5. Output from the LLM is passed through an output parser\n",
"# to structure the model response.\n",
"rag_chain = (\n",
Expand Down
Expand Up @@ -322,7 +322,7 @@
"### Initialize Gemini\n",
"\n",
"You must import `Gemini` from LlamaIndex to initialize your model.\n",
" In this example, you will use **gemini-2.0-flash**, as it supports text summarization. To know more about the text model, read Google AI's [model documentation](https://ai.google.dev/models/gemini).\n",
" In this example, you will use **gemini-2.5-flash**, as it supports text summarization. To know more about the text model, read Google AI's [model documentation](https://ai.google.dev/models/gemini).\n",
"\n",
"You can configure the model parameters such as ***temperature*** or ***top_p***, using the ***generation_config*** parameter when initializing the `Gemini` LLM. To learn more about the model parameters and their uses, read Google AI's [concepts guide](https://ai.google.dev/docs/concepts#model_parameters)."
]
Expand Down
4 changes: 2 additions & 2 deletions examples/prompting/Providing_base_cases.ipynb
Expand Up @@ -149,7 +149,7 @@
},
"outputs": [],
"source": [
"model = genai.GenerativeModel(model_name='gemini-2.0-flash', system_instruction=instructions)"
"model = genai.GenerativeModel(model_name='gemini-2.5-flash', system_instruction=instructions)"
]
},
{
Expand Down Expand Up @@ -213,7 +213,7 @@
},
"outputs": [],
"source": [
"model = genai.GenerativeModel(model_name='gemini-2.0-flash', system_instruction=instructions)"
"model = genai.GenerativeModel(model_name='gemini-2.5-flash', system_instruction=instructions)"
]
},
{
Expand Down
2 changes: 1 addition & 1 deletion examples/qdrant/Movie_Recommendation.ipynb
Expand Up @@ -661,7 +661,7 @@
"import time\n",
"from google.api_core import exceptions, retry\n",
"\n",
"MODEL_FOR_EMBEDDING = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"embedding-001\", \"text-embedding-004\"] {\"allow-input\":true, isTemplate: true}\n",
"MODEL_FOR_EMBEDDING = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"embedding-001\", \"gemini-embedding-001\"] {\"allow-input\":true, isTemplate: true}\n",
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry: "gemini-embedding-001". Please remove the duplicate to avoid confusion.

MODEL_FOR_EMBEDDING = "gemini-embedding-001" # @param ["gemini-embedding-001", "embedding-001"] {"allow-input":true, isTemplate: true}

"\n",
"BATCH_SIZE = 25\n",
"QDRANT_BATCH_SIZE = 3072\n",
Expand Down
2 changes: 1 addition & 1 deletion examples/qdrant/Qdrant_similarity_search.ipynb
Expand Up @@ -319,7 +319,7 @@
"from google.genai import types\n",
"\n",
"# Select embedding model\n",
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"text-embedding-004\"] {\"allow-input\": true, \"isTemplate\": true}\n",
"MODEL_ID = \"gemini-embedding-001\" # @param [\"gemini-embedding-001\", \"gemini-embedding-001\"] {\"allow-input\": true, \"isTemplate\": true}\n",
Contributor review comment (medium):

The list of models in the @param decorator contains a duplicate entry: "gemini-embedding-001". This is redundant. Please remove the duplicate.

MODEL_ID = "gemini-embedding-001"  # @param ["gemini-embedding-001"] {"allow-input": true, "isTemplate": true}

"\n",
"\n",
"# Function to convert text to embeddings\n",
Expand Down