This repository was archived by the owner on May 27, 2025. It is now read-only.

[BUG] Auto-Template Generation fails #285

@Ieremchuk

Describe the bug
I'm currently working through the Advanced_Getting_Started guide, and the deployment completed successfully.

However, auto-template generation fails. The API responds with "Error generating prompts for data in '{container_name}'. Please try a lower limit.", but the trace log in Application Insights indicates that the real problem is resolving the LLM model (see the repro sketch after the traceback below).

Traceback (most recent call last):
File "/backend/graphrag_app/api/prompt_tuning.py", line 60, in generate_prompts
prompts: tuple[str, str, str] = await api.generate_indexing_prompts(
File "/usr/local/lib/python3.10/site-packages/pydantic/_internal/_validate_call.py", line 34, in wrapper_function
return await wrapper(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/graphrag/api/prompt_tune.py", line 107, in generate_indexing_prompts
domain = await generate_domain(llm, doc_list)
File "/usr/local/lib/python3.10/site-packages/graphrag/prompt_tune/generator/domain.py", line 26, in generate_domain
response = await llm(domain_prompt)
File "/usr/local/lib/python3.10/site-packages/fnllm/openai/llm/chat.py", line 83, in call
return await self._text_chat_llm(prompt, **kwargs)
File "/usr/local/lib/python3.10/site-packages/fnllm/openai/llm/features/tools_parsing.py", line 120, in call
return await self._delegate(prompt, **kwargs)
File "/usr/local/lib/python3.10/site-packages/fnllm/base/base.py", line 112, in call
return await self._invoke(prompt, **kwargs)
File "/usr/local/lib/python3.10/site-packages/fnllm/base/base.py", line 128, in _invoke
return await self._decorated_target(prompt, **kwargs)
File "/usr/local/lib/python3.10/site-packages/fnllm/services/json.py", line 71, in invoke
return await delegate(prompt, **kwargs)
File "/usr/local/lib/python3.10/site-packages/fnllm/services/retryer.py", line 109, in invoke
result = await execute_with_retry()
File "/usr/local/lib/python3.10/site-packages/fnllm/services/retryer.py", line 93, in execute_with_retry
async for a in AsyncRetrying(
File "/usr/local/lib/python3.10/site-packages/tenacity/asyncio/init.py", line 166, in anext
do = await self.iter(retry_state=self._retry_state)
File "/usr/local/lib/python3.10/site-packages/tenacity/asyncio/init.py", line 153, in iter
result = await action(retry_state)
File "/usr/local/lib/python3.10/site-packages/tenacity/_utils.py", line 99, in inner
return call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/tenacity/init.py", line 400, in
self._add_action_func(lambda rs: rs.outcome.result())
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/site-packages/fnllm/services/retryer.py", line 101, in execute_with_retry
return await attempt()
File "/usr/local/lib/python3.10/site-packages/fnllm/services/retryer.py", line 78, in attempt
return await delegate(prompt, **kwargs)
File "/usr/local/lib/python3.10/site-packages/fnllm/services/rate_limiter.py", line 70, in invoke
result = await delegate(prompt, **args)
File "/usr/local/lib/python3.10/site-packages/fnllm/services/json.py", line 71, in invoke
return await delegate(prompt, **kwargs)
File "/usr/local/lib/python3.10/site-packages/fnllm/base/base.py", line 152, in _decorator_target
output = await self._execute_llm(prompt, **kwargs)
File "/usr/local/lib/python3.10/site-packages/fnllm/openai/llm/chat_text.py", line 155, in _execute_llm
completion = await self._call_completion_or_cache(
File "/usr/local/lib/python3.10/site-packages/fnllm/openai/llm/chat_text.py", line 127, in _call_completion_or_cache
return await self._cache.get_or_insert(
File "/usr/local/lib/python3.10/site-packages/fnllm/services/cache_interactor.py", line 41, in get_or_insert
return await func()
File "/usr/local/lib/python3.10/site-packages/openai/resources/chat/completions/completions.py", line 2000, in create
return await self._post(
File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1767, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1461, in request
return await self._request(
File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1562, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
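
For what it's worth, a 404 "Resource not found" at this point in the stack usually means the request path (endpoint + deployment name + api-version) does not match any deployment on the Azure OpenAI resource, not a data-size problem. A minimal sketch that should surface the same NotFoundError outside the accelerator, assuming a hypothetical endpoint, key, and api-version:

import asyncio
from openai import AsyncAzureOpenAI, NotFoundError

# Hypothetical endpoint/key/api-version for illustration; substitute real values.
client = AsyncAzureOpenAI(
    azure_endpoint="https://<your-aoai-resource>.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-02-15-preview",
)

async def main() -> None:
    try:
        # For Azure OpenAI, `model` must name an existing *deployment*;
        # a nonexistent name produces the same error seen in the trace:
        # openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
        await client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": "ping"}],
        )
    except NotFoundError as err:
        print(err)

asyncio.run(main())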

Additional context
Here is the deployment configuration:

{
  "LOCATION": "eastus2",
  "RESOURCE_GROUP": "AGraphRag",
  "GRAPHRAG_LLM_MODEL": "gpt-4",
  "GRAPHRAG_LLM_MODEL_VERSION": "turbo-2024-04-09",
  "GRAPHRAG_LLM_DEPLOYMENT_NAME": "gpt-4"
}
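
One way to rule out a deployment-name or model-version mismatch is to list what is actually deployed on the Azure OpenAI resource and compare it against GRAPHRAG_LLM_DEPLOYMENT_NAME. A sketch using azure-mgmt-cognitiveservices, with hypothetical subscription and account names:

from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

# Hypothetical subscription id and Azure OpenAI account name for illustration.
client = CognitiveServicesManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)
for dep in client.deployments.list("AGraphRag", "<aoai-account-name>"):
    # dep.name is what GRAPHRAG_LLM_DEPLOYMENT_NAME must match exactly;
    # the model name/version show what was actually deployed.
    print(dep.name, dep.properties.model.name, dep.properties.model.version)

If no deployment named "gpt-4" appears in that list, it would explain the 404 above.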

Labels: bug (Something isn't working)