From c2ec8a9c60ac38d152ed48ba8c7c067c8d2c9859 Mon Sep 17 00:00:00 2001
From: Simon Willison
Date: Sat, 1 Jul 2023 08:50:39 -0700
Subject: [PATCH] First attempt at internal API docs, refs #65

---
 docs/index.md      |  1 +
 docs/python-api.md | 40 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)
 create mode 100644 docs/python-api.md

diff --git a/docs/index.md b/docs/index.md
index 10620c45..46a32b42 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -29,6 +29,7 @@ maxdepth: 3
 ---
 setup
 usage
+python-api
 templates
 logging
 plugins

diff --git a/docs/python-api.md b/docs/python-api.md
new file mode 100644
index 00000000..6ab88247
--- /dev/null
+++ b/docs/python-api.md
@@ -0,0 +1,40 @@
+# Python API
+
+LLM provides a Python API for executing prompts, in addition to the command-line interface.
+
+Understanding this API is also important for writing plugins.
+
+The API consists of the following key classes:
+
+- `Model` - represents a language model against which prompts can be executed
+- `Prompt` - a prompt that can be prepared and then executed against a model
+- `Response` - the response from executing a prompt against a model
+- `Template` - a reusable template for generating prompts
+
+## Prompt
+
+A prompt object represents all of the information that needs to be passed to the LLM. This could be a single prompt string, but it might also include a separate system prompt, various settings (for temperature etc.) or even a JSON array of previous messages.
+
+## Model
+
+The `Model` class is an abstract base class that needs to be subclassed to provide a concrete implementation. Different LLMs will use different implementations of this class.
+
+Model instances provide the following methods:
+
+- `prompt(prompt: str, ...options) -> Response` - a convenience wrapper which creates a `Prompt` instance and then executes it. This is the most common way to use LLM models.
+- `stream(prompt: str) -> Response` - a convenience wrapper for `.execute(..., stream=True)`.
+- `execute(prompt: Prompt, stream: bool) -> Response` - execute a prepared `Prompt` instance against the model and return a `Response`, streaming or not.
+
+Models usually return subclasses of `Response` that are specific to that model.
+
+## Response
+
+The response from an LLM. This could encapsulate a string of text, but for streaming APIs this class will be iterable, with each iteration yielding a short string of text as it is generated.
+
+Calling `.text()` will return the full text of the response, waiting for the stream to finish executing if necessary.
+
+The `.debug()` method, once the stream has finished, will return a dictionary of debug information about the response. This varies between different models.
+
+## Template
+
+Templates are reusable objects that can be used to generate prompts. They are used by the {ref}`prompt-templates` feature.
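The class relationships described in the new docs can be sketched as a minimal toy implementation. The class and method names mirror the documentation above, but this is a hypothetical illustration, not LLM's actual source: in particular `EchoModel` and its chunking logic are invented for the example.

```python
from abc import ABC, abstractmethod
from typing import Iterator, Optional


class Prompt:
    """Everything that needs to be passed to the LLM: the prompt string,
    an optional system prompt, and settings such as temperature."""

    def __init__(self, prompt: str, system: Optional[str] = None, **options):
        self.prompt = prompt
        self.system = system
        self.options = options


class Response:
    """Iterable response: streaming APIs yield short strings of text
    as they are generated."""

    def __init__(self, chunks: Iterator[str]):
        self._chunks = chunks
        self._done = []

    def __iter__(self) -> Iterator[str]:
        for chunk in self._chunks:
            self._done.append(chunk)
            yield chunk

    def text(self) -> str:
        # Wait for the stream to finish executing, then return the full text
        for _ in self:
            pass
        return "".join(self._done)

    def debug(self) -> dict:
        # Hypothetical debug info; real models would report more detail
        return {"chunks": len(self._done)}


class Model(ABC):
    """Abstract base class; each concrete LLM provides its own subclass."""

    @abstractmethod
    def execute(self, prompt: Prompt, stream: bool) -> Response:
        ...

    def prompt(self, prompt: str, **options) -> Response:
        # Convenience wrapper: build a Prompt instance, then execute it
        return self.execute(Prompt(prompt, **options), stream=False)

    def stream(self, prompt: str) -> Response:
        # Convenience wrapper for .execute(..., stream=True)
        return self.execute(Prompt(prompt), stream=True)


class EchoModel(Model):
    """Hypothetical toy model that upper-cases and echoes the prompt
    back in four-character chunks."""

    def execute(self, prompt: Prompt, stream: bool) -> Response:
        text = prompt.prompt.upper()
        return Response(text[i : i + 4] for i in range(0, len(text), 4))
```

The same `Response` object supports both styles of consumption: iterating over it yields chunks as they arrive, while `.text()` drains the stream and returns the complete string.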