### Bug Description

I'm fairly new to LlamaIndex, so forgive me if this isn't actually a bug, but the error message doesn't make sense: I disabled the LLM by passing llm=None, yet initializing the query engine still raises the very error whose message says that setting llm=None disables the LLM entirely.
### Version
0.14.4 (Latest)
### Steps to Reproduce
**Chunking**

```python
from llama_index.core.node_parser import MarkdownNodeParser

# Chunk and parse with MarkdownNodeParser
# (`documents` is assumed to have been loaded earlier, e.g. with SimpleDirectoryReader)
parser = MarkdownNodeParser(
    include_metadata=True,        # Attach section header path as metadata
    include_prev_next_rel=True,   # Good for context awareness
    header_path_separator=" > ",  # Separator for hierarchy in metadata
)
chunked_documents = parser.get_nodes_from_documents(documents)
```
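The chunking step itself behaves as expected; a quick sanity check (assuming at least one node is produced) shows the header metadata being attached:

```python
# Optional sanity check: confirm nodes were produced and inspect the
# section-header metadata attached by MarkdownNodeParser.
print(len(chunked_documents))
print(chunked_documents[0].metadata)
```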
**Local server**

```python
# Dev server
import chromadb  # this import was implied; HttpClient comes from the chromadb package

from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.core import StorageContext

client = chromadb.HttpClient(host="localhost", port=8000)
collection = client.get_or_create_collection("confidential")
local_vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=local_vector_store)
```
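For what it's worth, the Chroma connection is healthy, so the failure below seems unrelated to the vector store. A quick connectivity check, assuming the dev server is up on localhost:8000:

```python
# Quick connectivity check against the local Chroma server.
print(client.heartbeat())   # returns a nanosecond timestamp when reachable
print(collection.count())   # number of records currently in the collection
```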
**Indexing**

```python
from llama_index.core import VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding("all-MiniLM-L6-v2")
index = VectorStoreIndex(
    chunked_documents,
    storage_context=storage_context,
    embed_model=embed_model,
)
```
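Embedding and retrieval work fine without any LLM, which is why I expected llm=None to be sufficient. A retrieval-only sketch:

```python
# Retrieval needs only the embedding model, no LLM, and it works here,
# so the failure is confined to query-engine construction.
retriever = index.as_retriever(similarity_top_k=3)
results = retriever.retrieve("test query")
print([r.score for r in results])
```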
**Initialize query engine based on indexed vector store**

```python
# this is where the bug is
query_engine = index.as_query_engine(llm=None)
```
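The only workaround I've found so far is to pass an explicit no-op model instead of None; a sketch, assuming `MockLLM` from `llama_index.core.llms` is an acceptable stand-in:

```python
# Workaround sketch: pass a truthy stand-in LLM so as_query_engine never
# falls back to Settings.llm (which tries to build a default OpenAI client).
from llama_index.core.llms import MockLLM

query_engine = index.as_query_engine(llm=MockLLM())
```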
### Relevant Logs/Tracebacks
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File ~\AppData\Roaming\Python\Python313\site-packages\llama_index\core\llms\utils.py:42, in resolve_llm(llm, callback_manager)
     41     llm = OpenAI()
---> 42     validate_openai_api_key(llm.api_key)  # type: ignore
     43 except ImportError:

File ~\AppData\Roaming\Python\Python313\site-packages\llama_index\llms\openai\utils.py:821, in validate_openai_api_key(api_key)
    820 if not openai_api_key:
--> 821     raise ValueError(MISSING_API_KEY_ERROR_MESSAGE)

ValueError: No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Cell In[42], line 1
----> 1 query_engine = index.as_query_engine(llm=None)

File ~\AppData\Roaming\Python\Python313\site-packages\llama_index\core\indices\base.py:509, in BaseIndex.as_query_engine(self, llm, **kwargs)
    501 from llama_index.core.query_engine.retriever_query_engine import (
    502     RetrieverQueryEngine,
    503 )
    505 retriever = self.as_retriever(**kwargs)
    506 llm = (
    507     resolve_llm(llm, callback_manager=self._callback_manager)
    508     if llm
--> 509     else Settings.llm
    510 )
    512 return RetrieverQueryEngine.from_args(
    513     retriever,
    514     llm=llm,
    515     **kwargs,
    516 )

File ~\AppData\Roaming\Python\Python313\site-packages\llama_index\core\settings.py:36, in _Settings.llm(self)
     34 """Get the LLM."""
     35 if self._llm is None:
---> 36     self._llm = resolve_llm("default")
     38 if self._callback_manager is not None:
     39     self._llm.callback_manager = self._callback_manager

File ~\AppData\Roaming\Python\Python313\site-packages\llama_index\core\llms\utils.py:49, in resolve_llm(llm, callback_manager)
     44     raise ImportError(
     45         "`llama-index-llms-openai` package not found, "
     46         "please run `pip install llama-index-llms-openai`"
     47     )
     48 except ValueError as e:
---> 49     raise ValueError(
     50         "\n******\n"
     51         "Could not load OpenAI model. "
     52         "If you intended to use OpenAI, please check your OPENAI_API_KEY.\n"
     53         "Original error:\n"
     54         f"{e!s}"
     55         "\nTo disable the LLM entirely, set llm=None."
     56         "\n******"
     57     )
     59 if isinstance(llm, str):
     60     splits = llm.split(":", 1)

ValueError:
******
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
To disable the LLM entirely, set llm=None.
******
```
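From the traceback, the root cause appears to be the truthiness check in `BaseIndex.as_query_engine`: an explicit `llm=None` is falsy, so execution falls through to `Settings.llm`, which resolves the `"default"` OpenAI model anyway, and the `None` never reaches `resolve_llm` (which, if I'm reading `llms/utils.py` right, would map `None` to a `MockLLM`). A minimal sketch of the sentinel pattern I would have expected instead (hypothetical, not the actual llama_index source):

```python
# Hypothetical sketch, not the llama_index source: a sentinel default
# distinguishes "argument omitted" from an explicit None, which the
# current check `resolve_llm(llm) if llm else Settings.llm` cannot do.
_NOT_GIVEN = object()

def pick_llm(llm=_NOT_GIVEN):
    if llm is _NOT_GIVEN:
        return "Settings.llm"   # global default; may construct OpenAI
    if llm is None:
        return "MockLLM"        # honor the explicit opt-out
    return llm                  # use the model the caller passed

print(pick_llm())      # -> Settings.llm
print(pick_llm(None))  # -> MockLLM
```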