403 Forbidden triggered by prompt content #1459

@sourav-saha1997

Description
What Happened?

Hi Portkey Team,

We are encountering a 403 Forbidden error when sending specific prompts through the Portkey Gateway. The same prompt works when we reprocess the file or run it offline.

For a single file, we have two different sets of fields, and we call two APIs with the same configuration. One call succeeds, while the other intermittently fails with a 403. When we reprocess the file, the previously failing request also succeeds. We did not see this issue with the same prompt before, but recently we have been receiving 403 errors continuously, even after adding a retry mechanism.

We would like to understand why this behavior is occurring, so that we can take appropriate recovery actions. This is critical for improving the robustness and reliability of our application.
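The retry mechanism mentioned above is roughly equivalent to the sketch below (the helper name is illustrative, and `PermissionError` stands in for the HTTP-client library's 403 error class); even exponential backoff has not been enough to recover:

```python
import time

def call_with_backoff(fn, retries=3, base_delay=1.0):
    """Retry fn with exponential backoff on 403-like failures.

    PermissionError is a stand-in for the SDK's HTTP 403 error class;
    real code would catch the client library's status error instead.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except PermissionError:
            if attempt == retries:
                raise  # out of retries: surface the 403 to the caller
            time.sleep(base_delay * (2 ** attempt))
```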

Incident Details:

  • Trace ID: e0eb6a07-c0fe-49fd-9d22-eb64948b94b4
  • Timestamp: 2025-11-28 05:21:05 UTC
  • Status: 403 Forbidden
  • Server Header: Cloudflare
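Given the `Server: Cloudflare` header on the failing response, the 403 appears to be returned by the Cloudflare layer in front of the gateway rather than by the model provider. A small check along these lines (a hypothetical helper, not from our codebase) could let a caller distinguish edge/WAF blocks from provider-side denials:

```python
def is_cloudflare_block(status_code, headers):
    """True when a 403 carries a Cloudflare Server header, i.e. the
    request was likely rejected at the edge/WAF before reaching the
    upstream provider."""
    server = headers.get("server") or headers.get("Server") or ""
    return status_code == 403 and server.lower() == "cloudflare"
```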

We have attached:

  • the failed request payload
  • the successful request payload
  • the corresponding error logs for the failed request
  • the original file
  • [failed_request_payload.ipynb](https://github.com/user-attachments/files/23816600/failed_request_payload.ipynb)
  • [file_403.pdf](https://github.com/user-attachments/files/23816601/file_403.pdf)
  • [success_request_payload.ipynb](https://github.com/user-attachments/files/23816599/success_request_payload.ipynb)

What Should Have Happened?

No response

Relevant Code Snippet

from portkey_ai import PORTKEY_GATEWAY_URL
from openai import OpenAI
from typing import Any
import instructor
import json
import os

def get_portkey_instructor_client_openai_flavour() -> Any:
    # OpenAI-flavoured client pointed at the Portkey gateway; the real
    # Portkey API key is passed per-request via x-portkey-api-key below
    openai_client = OpenAI(base_url=PORTKEY_GATEWAY_URL, api_key="dummy-key")
    return instructor.patch(openai_client)

llm_provider = "google"
llm_config_name_mapping = {
    "google": os.environ["PORTKEY_GOOGLE_CONFIG_ID"],
    "openai": os.environ["PORTKEY_OPENAI_CONFIG_ID"],
}
config_name = llm_config_name_mapping[llm_provider]
client = get_portkey_instructor_client_openai_flavour()
metadata = {"vdoc_trace_id": "8d11326e7cb10f705f9e655ff516f77a"}

# model, content_items and response_model are defined elsewhere in our code
response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": content_items}],
    max_retries=0,
    extra_headers={
        "x-portkey-config": config_name,
        "x-portkey-metadata": json.dumps(metadata),
        "x-portkey-api-key": os.environ["PORTKEY_API_KEY"],
    },
    temperature=0.0,
    seed=10,
    response_model=response_model,
)

Your Twitter/LinkedIn

No response

Labels

bug (Something isn't working), triage