LLM Ignores Instruction to not answer if information is not in context #5

Open
@HenrikGudmundsson

Description

The unit test Speculative Answer Generation Chain › Simple RAG › should refuse to answer if information is not in context in src/modules/agent/chains/speculative-answer-generation.chain.test.ts fails. During the test run, the language model used in the implementation (I'm using OpenAI) ignores the instruction to not answer the question when the answer is not provided in the context.

Steps to Reproduce

1. Run the unit tests using npm run test speculative-answer-generation.chain.test.ts
2. Observe that the test Speculative Answer Generation Chain › Simple RAG › should refuse to answer if information is not in context fails.

Expected Behavior

The test should pass by confirming that the language model refuses to answer questions when the required information is not present in the provided context.

Actual Behavior

The language model attempts to answer the question even when the necessary information is not included in the context, leading to the test failure.
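A common mitigation is to make the refusal instruction in the prompt more explicit: state the rule both before and after the context, and give the model an exact phrase to emit when the answer is missing. The sketch below is illustrative only, assuming the chain builds a prompt from a context string and a question; the names buildPrompt, isRefusal, and REFUSAL_PHRASE are hypothetical and not taken from the course repository, and the exact phrase the real test asserts on may differ.

```typescript
// Hypothetical sketch of a stricter RAG prompt plus a refusal check.
// None of these names come from the course code; they only illustrate
// the technique of anchoring the model to a fixed refusal phrase.

const REFUSAL_PHRASE = "I don't know";

function buildPrompt(context: string, question: string): string {
  // Repeating the rule after the context and pinning an exact output
  // phrase tends to make refusal behavior more reliable.
  return [
    "Answer the question using ONLY the context below.",
    `If the context does not contain the answer, reply exactly: "${REFUSAL_PHRASE}".`,
    "",
    `Context: ${context}`,
    `Question: ${question}`,
    "",
    `Remember: if the answer is not in the context, reply "${REFUSAL_PHRASE}".`,
  ].join("\n");
}

function isRefusal(answer: string): boolean {
  // A test could assert on this instead of an exact string match,
  // since models often add punctuation or extra words.
  return answer.toLowerCase().includes(REFUSAL_PHRASE.toLowerCase());
}

// Example usage with a question the context cannot answer:
const prompt = buildPrompt(
  "Neo4j is a graph database.",
  "Who directed the movie Inception?"
);
console.log(prompt.includes("ONLY the context")); // true
console.log(isRefusal("I don't know.")); // true
console.log(isRefusal("Christopher Nolan directed Inception.")); // false
```

With a pinned phrase like this, the unit test can check for the phrase's presence rather than relying on the model volunteering a refusal in its own words.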

Additional remark

I have just completed the Answer Generation Chain step in your GraphAcademy course. I'm filing this issue on the assumption that no later step in the course makes this test pass.
