| `--json_schema` | File path to a JSON file containing the schema for structured output | ❌ |
** One of either `--prompt` or `--prompt_text` must be selected.
## Scope
The program supports three scopes: code, text, or image. Depending on which is selected, the program supports different models and prompts tailored to that scope.
The user can also explicitly specify the submission type using the `--submission` argument.
Currently, Jupyter notebook, PDF, and Python assignments are supported.
## Prompts
The `--prompt` argument accepts either pre-defined prompt names or custom file paths:
### Pre-defined Prompts
To use pre-defined prompts, specify the prompt name (without extension). Pre-defined prompts are stored as markdown (.md) files in the `ai_feedback/data/prompts/user/` directory.
### Custom Prompt Files
To use custom prompt files, specify the file path to your custom prompt. The file should be a markdown (.md) file.
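As an illustration, the name-or-path resolution described above might be sketched as follows. This is a hypothetical helper, not the actual implementation: the lookup order, the `resolve_prompt` name, and the `code_table` prompt name used in the usage example are all assumptions.

```python
from pathlib import Path

# Directory of pre-defined prompts, per the README. The resolution logic
# below (try a pre-defined name first, then a file path) is an assumption.
PROMPT_DIR = Path("ai_feedback/data/prompts/user")

def resolve_prompt(value, prompt_dir=PROMPT_DIR):
    """Try `value` as a pre-defined prompt name, then as a custom file path."""
    predefined = Path(prompt_dir) / f"{value}.md"
    if predefined.exists():
        return predefined
    custom = Path(value)
    if custom.is_file():
        return custom
    raise FileNotFoundError(f"No prompt named or located at: {value}")
```

With this sketch, `--prompt code_table` would load `ai_feedback/data/prompts/user/code_table.md`, while `--prompt ./my_prompt.md` would load the custom file directly.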
Prompt files can contain template placeholders with the following structure:
```markdown
Consider this question:
{context}
```

Prompt Naming Conventions:
- Prompts to be used when `--scope image` is selected are prefixed with `image_{}.md`
- Prompts to be used when `--scope text` is selected are prefixed with `text_{}.md`
If the `--scope` argument is provided and its value does not match the prefix of the selected `--prompt`, an error message is displayed. This scope validation (prefix matching) only applies to pre-defined prompts; custom prompt files can be used with any scope.
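The prefix check can be sketched as below. This is assumed logic based on the naming conventions above, not the actual implementation; the function name and error message are illustrative.

```python
# Scopes supported by the program, per the README.
VALID_SCOPES = ("code", "text", "image")

def check_scope(scope, prompt_name, is_custom=False):
    """Raise if a pre-defined prompt's prefix does not match the scope (assumed logic)."""
    if is_custom:
        return  # custom prompt files are exempt from prefix matching
    if scope in VALID_SCOPES and not prompt_name.startswith(f"{scope}_"):
        raise ValueError(f"Prompt '{prompt_name}' does not match scope '{scope}'")
```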
All prompts are treated as templates that can contain special placeholder blocks. The following template placeholders are automatically replaced:
- `{context}` - Question context
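A minimal sketch of how such placeholder substitution might work is below. Plain string replacement is an assumption about the mechanism, and the `render_prompt` helper is hypothetical.

```python
def render_prompt(template, values):
    """Replace each {name} placeholder with its value (assumed mechanism)."""
    for key, val in values.items():
        template = template.replace("{" + key + "}", val)
    return template

# Example using the {context} placeholder documented above.
template = "Consider this question:\n{context}"
print(render_prompt(template, {"context": "What is 2 + 2?"}))
```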
## Prompt_text
Additionally, the user can pass a string through the `--prompt_text` argument. The string is either concatenated to the prompt if `--prompt` is used, or fed in as the only prompt if `--prompt` is not used.
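The combination behavior can be sketched as follows (assumed logic; the real concatenation order and separator may differ):

```python
def build_prompt(prompt_contents, prompt_text):
    """Join the --prompt contents and --prompt_text, skipping whichever is absent."""
    parts = [p for p in (prompt_contents, prompt_text) if p]
    return "\n".join(parts)
```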
## System Prompts
The `--system_prompt` argument accepts either pre-defined system prompt names or custom file paths:
### Pre-defined System Prompts
To use pre-defined system prompts, specify the system prompt name (without extension). Pre-defined system prompts are stored as markdown (.md) files in the `ai_feedback/data/prompts/system/` directory.
### Custom System Prompt Files
To use custom system prompt files, specify the file path to your custom system prompt. The file should be a markdown (.md) file.
System prompts define the AI model's behavior, tone, and approach to providing feedback. They are used to set the context and personality of the AI assistant.
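As an illustration, a system prompt and a user prompt are commonly combined into a chat-style message list before being sent to a model. The message format below is an assumption for illustration, not this project's confirmed internals:

```python
def build_messages(system_prompt, user_prompt):
    """Pair a system prompt (behavior/tone) with a user prompt (the task)."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
```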
## Models
The available models can be found in the `ai_feedback/models` folder.
- `python_tester_llm_pdf.py`: Runs the LLM on any PDF assignment (solution file and submission file) uploaded to the autotester. Creates general feedback about whether the student's written responses match the instructor's feedback. Displayed in test outputs and overall comments.
- `custom_tester_llm_code.sh`: Runs the LLM on assignments (solution file, submission file, test output file) uploaded to the custom autotester. Currently supports Jupyter notebook submissions. The prompt and model used can be specified in the script. Displays in overall comments and in test outputs. The annotations section can optionally be uncommented to display annotations; however, the annotations will display on the .txt version of the file uploaded by the student, not the .ipynb file.
#### Python AutoTester Usage
##### Code Scope
1. Ensure the student has submitted a submission file (`_submission` suffixed).
Also `pip install` any other packages that the submission or solution file uses.
NOTE: If the LLM Test Group appears blank or does not turn green, try increasing the timeout.
#### Custom Tester
- `custom_tester_llm_code.sh`: Runs the LLM on any assignment (solution file, submission file, test output file) uploaded to the autotester. The prompt and model used can be specified in the script. Displays in overall comments and in test outputs.