diff --git a/examples/lemonade/notebooks/lemonade_model_validation.ipynb b/examples/lemonade/notebooks/lemonade_model_validation.ipynb new file mode 100644 index 00000000..ea4f4821 --- /dev/null +++ b/examples/lemonade/notebooks/lemonade_model_validation.ipynb @@ -0,0 +1,597 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "08b63c37", + "metadata": {}, + "source": [ + "# Lemonade Tools Tutorial\n", + "\n", + "This notebook is a tutorial on how to measure an LLM's performance, memory usage, accuracy, and subjective quality on Ryzen AI hardware using Lemonade (LLM-Aide) tools.\n", + "\n", + "The tutorial follows this flow:\n", + "1. Lemonade Overview.\n", + " 1. Command syntax.\n", + " 2. Installing Lemonade Tools and Ryzen AI SW support.\n", + " 3. Choosing your LLM-under-test.\n", + "2. Benchmarking.\n", + " 1. Benchmark the LLM's performance.\n", + " 2. Interpreting time to first token (TTFT), tokens/second (TPS), and memory usage data.\n", + "3. Subjective Quality Testing.\n", + " 1. Prompt the LLM using its chat template.\n", + " 2. How to assess the response as a human judge.\n", + " 3. How to automatically assess the response using an LLM judge.\n", + "4. Objective Quality Testing.\n", + " 1. Overview of the LM Evaluation Harness tool.\n", + " 2. Measuring log-probability accuracy with `MMLU`.\n", + " 3. Measuring generation accuracy with `GSM8K`.\n", + "\n", + "## Lemonade Overview\n", + "\n", + "Lemonade (LLM-Aide) is a software development kit (SDK) that expedites measurement, validation, and deployment of LLMs. 
It primarily supports OnnxRuntime-GenAI (OGA)-based LLMs but also provides support for llama.cpp and Hugging Face PyTorch LLMs as performance and accuracy baselines.\n", + "\n", + "This tutorial will focus on measurement and validation tasks, using the `lemonade` command line interface (CLI).\n", + "\n", + "### Command Syntax\n", + "\n", + "The `lemonade` CLI uses a unique command syntax that enables convenient interoperability between models, frameworks, devices, accuracy tests, and deployment options.\n", + "\n", + "Each unit of functionality (e.g., loading a model, running a test, deploying a server, etc.) is called a `Tool`, and a single call to `lemonade` can invoke any number of `Tools`. Each `Tool` will perform its functionality, then pass its state to the next `Tool` in the command.\n", + "\n", + "You can read each command out loud to understand what it is doing. For example, a command like this:\n", + "\n", + "```bash\n", + "lemonade -i microsoft/Phi-3-mini-4k-instruct oga-load --device igpu --dtype int4 llm-prompt -p \"Hello, my thoughts are\"\n", + "```\n", + "\n", + "Can be read like this:\n", + "\n", + "> Run `lemonade` on the input `(-i)` checkpoint `microsoft/Phi-3-mini-4k-instruct`. First, load it in the OnnxRuntime GenAI framework (`oga-load`), onto the integrated GPU device (`--device igpu`) in the int4 data type (`--dtype int4`). Then, pass the OGA model to the prompting tool (`llm-prompt`) with the prompt (`-p`) \"Hello, my thoughts are\" and print the response.\n", + "\n", + "The `lemonade -h` command will show you which options and Tools are available, and `lemonade TOOL -h` will tell you more about that specific Tool." + ] + }, + { + "cell_type": "markdown", + "id": "71563fbd", + "metadata": {}, + "source": [ + "### Installation\n", + "\n", + "Before running any cell in this notebook, the following setup steps are required:\n", + "1. 
Install Conda (we suggest the [Miniforge](https://github.com/conda-forge/miniforge) flavor of Conda):\n", + " 1. Download: https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Windows-x86_64.exe\n", + " 2. Double click to install. Make sure to install for `Just Me` (not `All Users`).\n", + " 3. Open the `Miniforge Prompt` app that was just installed, and use that to run the following commands.\n", + "1. Create and activate a Python 3.10 environment. For example:\n", + " ```bash\n", + " conda create -n hybrid python=3.10\n", + " conda activate hybrid\n", + " ```\n", + "1. Install Lemonade + Hybrid:\n", + "\n", + " ```bash\n", + " pip install turnkeyml[llm-oga-hybrid]\n", + " lemonade-install --ryzenai hybrid\n", + " ```\n", + "1. Configure this Jupyter notebook to use the `hybrid` environment as its Python kernel.\n", + "\n", + "Additional backends, such as CPU-only and DirectML-only, are also available: https://github.com/onnx/turnkeyml/blob/main/docs/lemonade/README.md#installing-from-pypi\n", + "\n", + "### Environment Configuration\n", + "\n", + "These commands customize the Lemonade environment for the experiments we will run in this notebook." 
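As a plain-Python illustration of what the configuration cell below accomplishes (no Lemonade install required to run this; resolving the path and creating the folder eagerly are illustrative extras, not something Lemonade needs you to do):

```python
import os

# LEMONADE_CACHE_DIR tells Lemonade where to write experiment data
os.environ["LEMONADE_CACHE_DIR"] = "./tutorial-cache"

# Resolving to an absolute path and creating the folder up front is
# purely illustrative -- it shows where tools like oga-bench and
# report will read and write their data
cache_dir = os.path.abspath(os.environ["LEMONADE_CACHE_DIR"])
os.makedirs(cache_dir, exist_ok=True)
print(cache_dir)
```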
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7ee3afa0", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "\n", + "# This improves the signal/noise ratio of Lemonade outputs in the notebook\n", + "os.environ[\"TURNKEY_BUILD_MONITOR\"] = \"False\"\n", + "# This places all output data in the same directory as this notebook\n", + "# It can be customized on a per-experiment basis:\n", + "# For example, this would put the data in a new folder called `my-experiment-100`:\n", + "# cache_dir = \"./my-experiment-100\"\n", + "cache_dir = \"./tutorial-cache\"\n", + "os.environ[\"LEMONADE_CACHE_DIR\"] = cache_dir" + ] + }, + { + "cell_type": "markdown", + "id": "07ac153c", + "metadata": {}, + "source": [ + "### Choosing an LLM-Under-Test\n", + "\n", + "Use the following code block to customize which LLM and device will be used for this tutorial. The table below links device names to Hugging Face model collections filled with compatible models. All of these models use the int4 data type.\n", + "\n", + "| Device | Collection |\n", + "| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- |\n", + "| `hybrid` | [Hybrid Collection](https://huggingface.co/collections/amd/ryzenai-14-llm-hybrid-models-67da31231bba0f733750a99c) |\n", + "| `npu` | [NPU Collection](https://huggingface.co/collections/amd/ryzenai-14-llm-npu-models-67da3494ec327bd3aa3c83d7) |\n", + "| `cpu` | [CPU Collection](https://huggingface.co/collections/amd/oga-cpu-llm-collection-6808280dc18d268d57353be8) |\n", + "| `igpu` | [GPU Collection](https://huggingface.co/collections/amd/ryzenai-oga-dml-models-67f940914eee51cbd794b95b) |\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "530e7393", + "metadata": {}, + "outputs": [], + "source": [ + "checkpoint = \"amd/Llama-3.2-1B-Instruct-awq-g128-int4-asym-fp16-onnx-hybrid\"\n", + "device = 
\"hybrid\"\n", + "DTYPE = \"int4\"" + ] + }, + { + "cell_type": "markdown", + "id": "b38b07fd", + "metadata": {}, + "source": [ + "## Benchmarking\n", + "\n", + "This section shows you how to benchmark your LLM. Our goal is to measure the following 3 properties of the LLM-under-test:\n", + "1. Time to first token (TTFT): the amount of time the user has to wait for the LLM to prefill the prompt, before returning the first response token.\n", + "2. Tokens per second (TPS): the number of tokens the LLM outputs to the user each second, after the first token.\n", + "3. Memory utilization (GB): the amount of RAM required to hold the LLM in memory and calculate the response to the prompt.\n", + "\n", + "This section will leverage the following Lemonade commands:\n", + "1. `oga-load`: load an OGA LLM into memory.\n", + "2. `oga-bench`: benchmark an OGA LLM.\n", + "3. `report`: print the outcome of the experiment to the screen and save it to disk in a CSV file.\n", + "\n", + "### Benchmark Command\n", + "\n", + "Benchmarking an OGA LLM requires the `oga-load` and `oga-bench` commands. Both of these commands are configurable.\n", + "\n", + "#### `oga-load` Configuration\n", + "The `oga-load` tool has settings to help you load your target model:\n", + "- The `-i` (input) argument from the `lemonade` command is passed into `oga-load`, and determines which LLM to load. We will pass the name of a Hugging Face checkpoint for a pre-optimized OGA LLM.\n", + "- `--device`: which device (e.g., `hybrid`, `cpu`) to load the LLM on (each OGA checkpoint is only compatible with one device).\n", + "- `--dtype`: the data type of the LLM's weights in memory (each OGA checkpoint only supports one data type).\n", + "\n", + "#### `oga-bench` Configuration\n", + "The `oga-bench` tool has settings to customize the benchmarking experiment:\n", + "- `input_sequence_lengths`: list of input sizes to the model. 
Performance data will be collected for each item in the list.\n", + "- `output_sequence_length`: number of output tokens to produce during generation.\n", + "- `iterations`: how many times to repeat the experiment. Higher values take longer to run, but produce a more accurate result.\n", + "- `warmup-iterations`: iterations that are not counted towards the experimental data. Used to warm up the system, such as the cache.\n", + "\n", + "> Note: we're setting warmup to 0 in this tutorial to save demonstration time, but a typical value would be 5-10." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5115c523", + "metadata": {}, + "outputs": [], + "source": [ + "input_sequence_lengths = \"256 512 1024 2048\"\n", + "output_sequence_length = 64\n", + "iterations = 5\n", + "warmup = 0\n", + "\n", + "!lemonade -i {checkpoint} oga-load \\\n", + " --device {device} \\\n", + " --dtype {DTYPE} \\\n", + " oga-bench \\\n", + " --prompts {input_sequence_lengths} \\\n", + " --output-tokens {output_sequence_length} \\\n", + " --iterations {iterations} \\\n", + " --warmup-iterations {warmup}\n", + " " + ] + }, + { + "cell_type": "markdown", + "id": "8e9c9a1d", + "metadata": {}, + "source": [ + "### Interpreting the Results\n", + "\n", + "The benchmark command outputs a lot of data to the terminal and also saves that data to the Lemonade Cache on disk.\n", + "\n", + "The `report` command can help us visualize the data in a table format. We are customizing the `report` tool with these settings:\n", + "- `-i`: which Lemonade Cache to report on. 
We will use the tutorial cache directory we set up at the start of the notebook.\n", + "- `--no-save`: tells the `report` tool to not save a CSV file of the data to disk (since we are not using it in this tutorial).\n", + "- `--perf`: include performance information in the table printed to the screen.\n", + "- `--lean`: include minimal other information in the table printed to the screen, for a cleaner presentation.\n", + "\n", + "> Note: a lot of additional data is saved to the cache, such as system information, that is not printed here." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "93099693", + "metadata": {}, + "outputs": [], + "source": [ + "!lemonade report -i {cache_dir} --no-save --perf --lean" + ] + }, + { + "cell_type": "markdown", + "id": "5be7f649", + "metadata": {}, + "source": [ + "## Subjective Quality Testing\n", + "\n", + "This section shows you how to prompt the LLM-under-test and get the response. It is complementary to objective quality testing (the next section of this tutorial) because it lets us see how the LLM will react to basic real-world scenarios.\n", + "\n", + "Subjective testing helps us quickly identify undesirable behaviors such as rambling responses, responses that erroneously include special tokens, etc.\n", + "\n", + "We will also cover how to use a local LLM judge to automatically assess whether the model's response is reasonable.\n", + "\n", + "### Prompt Command\n", + "\n", + "The `llm-prompt` command sends your prompt to the LLM under test and then prints the response to the screen.\n", + "\n", + "Options:\n", + "- Feel free to customize the `prompt` variable to anything you like.\n", + "- `--template` applies the model's chat template to the prompt, which improves output quality.\n", + "- `--max-new-tokens` limits the amount of output the LLM is allowed to produce.\n", + "\n", + "> Note: The output you see will include the full prompt the LLM sees, including all special template tokens. 
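To make the `--template` option more concrete, the sketch below shows what a Llama-3-style chat template does to a raw prompt. The special tokens here are illustrative of that model family only; the real template comes from the model's own tokenizer configuration, and Lemonade applies it for you:

```python
def apply_chat_template(user_prompt: str) -> str:
    # Illustrative Llama-3-style special tokens; other model families
    # use different markers, and the authoritative template ships with
    # the model itself
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

templated = apply_chat_template("What is the capital of France?")
print(templated)
```

This is why the printed output of `llm-prompt --template` contains more than just your prompt text.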
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "94c39884", + "metadata": {}, + "outputs": [], + "source": [ + "prompt = \"What is the capital of France?\"\n", + "\n", + "prompt_cmd_output = !lemonade -i {checkpoint} \\\n", + " oga-load --device {device} --dtype {DTYPE} \\\n", + " llm-prompt --template --max-new-tokens 64 -p \"{prompt}\"\n", + "\n", + "print(prompt_cmd_output.n)" + ] + }, + { + "cell_type": "markdown", + "id": "ece03214", + "metadata": {}, + "source": [ + "### Subjective Response Validation\n", + "\n", + "One of the easiest ways to validate an LLM is to make sure that it responds to simple questions in a clear and concise way.\n", + "\n", + "- In the previous code block, we prompted: \"What is the capital of France?\"\n", + "\n", + "- The response should be something like: \"The capital of France is Paris.\"\n", + "\n", + "If you got a clear and concise response like that, consider the response a success ✅. If the response is rambling or nonsensical, consider it a fail ❌.\n", + "\n", + "### Automatic Response Validation\n", + "\n", + "If you are validating LLMs at scale, you may want to automatically assess the response quality. The following code extracts the prompt and response from the previous cell and feeds it into an LLM judge, which assesses the response quality.\n", + "\n", + "#### Accessing the Lemonade Database\n", + "The `Stats` class is useful for extracting experimental data from a specific `lemonade` command. In this case, we will use it to programmatically access the `prompt` and `response` from the last command. Later, we will also use `Stats` to store our analysis back to the database.\n", + "\n", + "The `Stats` class requires the cache directory and build name of the command in question, which we will obtain by parsing the output of the last command. From there, we can load up the `Stats` from that command and access the `prompt` and `response` values." 
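The parsing step in the next cell can be previewed on a made-up example. The path below is hypothetical (the real one comes from the command output), and the `\builds\` separator assumes the Windows-style cache layout used in this tutorial:

```python
# Hypothetical line of the form the next cell searches for
sample_line = r"Build dir: C:\lemon\tutorial-cache\builds\prompt_build_0"

# Drop the "Build dir:" label, then split the path into
# the cache directory and the build name
path = sample_line.split("Build dir:")[1].strip()
full_cache_dir, build_name = path.rsplit("\\builds\\", 1)

print(full_cache_dir)
print(build_name)
```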
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "671670a3", + "metadata": {}, + "outputs": [], + "source": [ + "from turnkeyml.common.filesystem import Stats\n", + "\n", + "# Parse the output of the last cell to get the build directory\n", + "full_cache_dir, build_name = (\n", + " next(l for l in prompt_cmd_output if \"Build dir:\" in l) # find the line\n", + " .split(\"Build dir:\")[1].strip() # drop the label\n", + " .rsplit(\"\\\\builds\\\\\",1) # split off last segment\n", + ")\n", + "\n", + "# Load up the stats dictionary from the prompt command so that we can access them\n", + "stats_handle = Stats(cache_dir=full_cache_dir, build_name=build_name)\n", + "stats_dict = stats_handle.stats\n", + "\n", + "prompt = stats_dict[\"prompt\"]\n", + "response = stats_dict[\"response\"]\n", + "\n", + "# Print the values to make sure we captured them correctly\n", + "print(\"Prompt:\\n\", prompt)\n", + "print(\"Response:\\n\", response)\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "id": "12aa4ff7", + "metadata": {}, + "source": [ + "#### Starting Lemonade Server\n", + "\n", + "Lemonade Server is a tool that loads an LLM into a separate process and allows us to interact with it over a high-level OpenAI `Chat Completions` API.\n", + "\n", + "Let's break that down a little:\n", + "- The OpenAI API is an industry standard protocol for communicating with server processes.\n", + "- The `openai.OpenAI` class is a convenient Python client for formatting requests and parsing the responses.\n", + "- The [Chat Completions API](https://platform.openai.com/docs/guides/text?api-mode=chat) allows for back-and-forth messaging with the served LLM.\n", + "\n", + "We will use the server to load up our LLM judge, `Llama-3.1-8B-Instruct-Hybrid`, and interact with it.\n", + "\n", + "The next cell will start Lemonade Server as a subprocess that will run alongside this notebook, until we shut it down. 
Note that the last cell in the notebook runs `!lemonade-server-dev stop`, which shuts down this process. Make sure to run that cell when you are done with the notebook!\n", + "\n", + "> Note: OpenAI API was originally invented for communication with LLM servers in the datacenter, but it has since been adopted by the community for server processes that run right on the local PC as well." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "838fa680", + "metadata": {}, + "outputs": [], + "source": [ + "import subprocess\n", + "import time\n", + "from lemonade_server.cli import status\n", + "\n", + "# Start the lemonade-server-dev serve command in a non-blocking manner\n", + "subprocess.Popen(['lemonade-server-dev', 'serve'])\n", + "\n", + "# Wait until the server process is ready\n", + "while not status()[0]:\n", + " time.sleep(5)" + ] + }, + { + "cell_type": "markdown", + "id": "9c53028e", + "metadata": {}, + "source": [ + "#### Seeking Judgement\n", + "\n", + "Now that Lemonade Server is available, we can ask it to judge our LLM-under-test's response.\n", + "\n", + "We will use a `system prompt` to give specific instructions to the LLM judge, ensuring that it returns its response in the form of a JSON object. This approach will make it easy to parse the judgement data and save it back to our database later.\n", + "\n", + "Then, we will send our request for judgement as an OpenAI API chat completions request using the `OpenAI` Python library and parse the response.\n", + "\n", + "> Note: if you already have the `Llama-3.1-8B-Instruct-Hybrid` model downloaded on your system, this step should only take 5-10 seconds. However, if you don't already have the model this step will download it for you (~8GB), which can take a few minutes." 
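Small local models occasionally wrap their JSON in extra prose, so when validating at scale a defensive parser can make the judgement step more robust. This helper is an illustrative sketch, not part of Lemonade; it greedily grabs from the first `{` to the last `}`, which is fine when the reply contains a single JSON object:

```python
import json
import re

def extract_judgement(reply: str) -> dict:
    """Pull a {...} JSON object out of an LLM judge's reply."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in judge reply")
    return json.loads(match.group(0))

# Handles both a clean reply and one wrapped in commentary
print(extract_judgement('{"valid": true, "detail": "Concise and correct."}'))
print(extract_judgement('Sure! {"valid": false, "detail": "Rambles."} Hope that helps.'))
```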
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fa990cc2", + "metadata": {}, + "outputs": [], + "source": [ + "# Provide a system prompt that will help the LLM judge give us an easy-to-parse response\n", + "system_prompt = \"\"\"\n", + " You are a judge that evaluates prompt/response pairs from an LLM under test. Your job is to determine if the response is reasonable, concise, and appropriate for a small local LLM.\n", + "Specifically, check for:\n", + "\n", + "1. No insertion of special tokens or role markers (e.g., \"assistant\", \"<|start_header_id|>\", etc.) in the response.\n", + "2. The reply should not ramble or continue after giving a direct answer.\n", + "3. The LLM must not have a conversation with itself, repeat roles, or include multiple \"assistant\" tokens or role markers in the response.\n", + "4. The response should directly answer the prompt and not include unnecessary information.\n", + "\n", + "If any of these issues are present, mark the response as invalid and explain the reason.\n", + "\n", + "Return your answer as a JSON object: {\"valid\": bool, \"detail\": str} \n", + "\"\"\"\n", + "\n", + "# Provide the prompt/response pair to the judge\n", + "user_prompt = f\"\"\"\n", + "LLM's Prompt: {prompt}\n", + "\n", + "LLM's Response: {response}\n", + "\n", + "\"\"\"\n", + "\n", + "messages = [\n", + " {\"role\":\"system\", \"content\":system_prompt},\n", + " {\"role\":\"user\", \"content\":user_prompt},\n", + "]\n", + "\n", + "# Use the OpenAI API to send the messages to the LLM judge\n", + "from openai import OpenAI\n", + "\n", + "base_url = f\"http://localhost:8000/api/v0\"\n", + "\n", + "client = OpenAI(\n", + " base_url=base_url,\n", + " api_key=\"lemonade\", # required, but unused\n", + ")\n", + "\n", + "completion = client.chat.completions.create(\n", + " model=\"Llama-3.1-8B-Instruct-Hybrid\",\n", + " messages=messages,\n", + " max_completion_tokens=128,\n", + ")\n", + "\n", + "judgement = 
completion.choices[0].message.content\n", + "\n", + "# Print the response (which should be a JSON object)\n", + "print(judgement)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "829d4e72", + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "\n", + "# Extract the assistant's response\n", + "decoded_json = json.loads(judgement)\n", + "\n", + "# Print the parsed values\n", + "print(\"The local LLM judge says the response is reasonable:\", decoded_json[\"valid\"])\n", + "print(\"and offers this explanation:\", decoded_json[\"detail\"])\n", + "\n", + "# Save the results via the stats handle, so that they become part of the results database\n", + "stats_handle.save_stat(\"llm_judgement\", decoded_json[\"valid\"])\n", + "stats_handle.save_stat(\"llm_judgement_reason\", decoded_json[\"detail\"])" + ] + }, + { + "cell_type": "markdown", + "id": "c74c329d", + "metadata": {}, + "source": [ + "## Objective Quality Testing\n", + "\n", + "Our final set of experiments for our LLM-under-test is objective accuracy testing.\n", + "\n", + "We will use [LM-Evaluation-Harness](https://github.com/EleutherAI/lm-evaluation-harness) (often called `lm-eval`), an open-source framework for evaluating language models across a wide variety of tasks and benchmarks. Developed by EleutherAI, it has become a standard tool in the AI research community for consistent evaluation of language model capabilities.\n", + "\n", + "`lm-eval` works with OpenAI API-compatible servers, like the Lemonade Server we started in the last section.\n", + "\n", + "### Loading the LLM-Under-Test\n", + "\n", + "In this section, we'll load our LLM-under-test onto that running Lemonade Server process.\n", + "\n", + "To accomplish this, we'll use Lemonade Server's `load` endpoint, which we will access using the Python `requests` library. 
More documentation about Lemonade Server endpoints is available [here](https://github.com/onnx/turnkeyml/blob/main/docs/lemonade/server_spec.md).\n", + "\n", + "The `load` endpoint allows any checkpoint to be loaded into the server; we just have to provide Lemonade Server with a `recipe` that lets it know which framework and device to use. Lemonade Recipe documentation is available [here](https://github.com/onnx/turnkeyml/blob/main/docs/lemonade/lemonade_api.md)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d8823712", + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "import json\n", + "\n", + "# Note: change this to `hf-{device}` if you are using Hugging Face as your framework \n", + "recipe = f\"oga-{device}\"\n", + "payload = {\"checkpoint\": checkpoint, \"recipe\": recipe}\n", + "\n", + "response = requests.post(\n", + " \"http://localhost:8000/api/v0/load\",\n", + " headers={\"Content-Type\": \"application/json\"},\n", + " data=json.dumps(payload)\n", + ")\n", + "\n", + "# Make sure the correct model loaded\n", + "print(response.text)" + ] + }, + { + "cell_type": "markdown", + "id": "e31a11a2", + "metadata": {}, + "source": [ + "### Log Probability Testing with MMLU\n", + "\n", + "These tests evaluate a model's ability to assign probabilities to different possible answers. The model predicts which answer is most likely based on conditional probabilities.\n", + "\n", + "In MMLU (Massive Multitask Language Understanding), the model is given a multiple-choice question and must assign probabilities to each answer choice. 
The model's performance is measured by how often it assigns the highest probability to the correct answer.\n", + "\n", + "Options:\n", + "- `model`: `local-completions` means to use a local LLM server, like Lemonade Server.\n", + "- `model_args`: this points to our Lemonade Server process and tells `lm-eval` which model we have loaded up, and how to access it.\n", + "- `tasks`: these are the accuracy tests that will be run. Right now we just have one MMLU subject, `mmlu_abstract_algebra`.\n", + "- `limit`: run only the first N examples of each task.\n", + "\n", + "> Note: this test takes about 2 minutes to run with a limit of 5. In a real-world testing scenario, we would remove the `limit` argument, which would run all questions in the subject(s). We would also suggest running multiple MMLU subjects besides `mmlu_abstract_algebra` to gather more accuracy data.\n", + "\n", + "This command will print out a table of accuracy results. Since we are running a small number of questions (for the sake of demonstration time), we may not see a very high accuracy score." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "620b776f", + "metadata": {}, + "outputs": [], + "source": [ + "model_args = f\"model={checkpoint},base_url=http://localhost:8000/api/v0/completions,num_concurrent=1,max_retries=0,tokenized_requests=False\"\n", + "\n", + "!lm_eval \\\n", + " --model local-completions \\\n", + " --model_args {model_args} \\\n", + " --tasks mmlu_abstract_algebra \\\n", + " --limit 5" + ] + }, + { + "cell_type": "markdown", + "id": "97b4e877", + "metadata": {}, + "source": [ + "### Generation Testing with GSM8K\n", + "\n", + "These tests evaluate a model's ability to generate full responses to prompts. The model generates text that is then evaluated against reference answers or using specific metrics.\n", + "\n", + "In GSM8K (Grade School Math), the model is given a math problem and must generate a step-by-step solution. 
Performance is measured by whether the final answer is correct.\n", + "\n", + "> Note: this test takes about 2 minutes to run with a limit of 5. In a real-world testing scenario, we would remove the `limit` argument, which would run all questions in the test.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8946bcf6", + "metadata": {}, + "outputs": [], + "source": [ + "!lm_eval \\\n", + " --model local-completions \\\n", + " --model_args {model_args} \\\n", + " --tasks gsm8k \\\n", + " --limit 5" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2ede3b62", + "metadata": {}, + "outputs": [], + "source": [ + "# Stop the Lemonade Server process that we started earlier in the notebook\n", + "!lemonade-server-dev stop" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "hybrid", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.16" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/src/lemonade_install/server_models.json b/src/lemonade_install/server_models.json deleted file mode 100644 index b2c076c8..00000000 --- a/src/lemonade_install/server_models.json +++ /dev/null @@ -1,80 +0,0 @@ -{ - "Qwen2.5-0.5B-Instruct-CPU": { - "checkpoint": "amd/Qwen2.5-0.5B-Instruct-quantized_int4-float16-cpu-onnx", - "recipe": "oga-cpu", - "reasoning": false, - "suggested": true - }, - "Llama-3.2-1B-Instruct-Hybrid": { - "checkpoint": "amd/Llama-3.2-1B-Instruct-awq-g128-int4-asym-fp16-onnx-hybrid", - "recipe": "oga-hybrid", - "reasoning": false, - "max_prompt_length": 3000, - "suggested": true - }, - "Llama-3.2-3B-Instruct-Hybrid": { - "checkpoint": "amd/Llama-3.2-3B-Instruct-awq-g128-int4-asym-fp16-onnx-hybrid", - "recipe": "oga-hybrid", - "reasoning": false, - 
"max_prompt_length": 2000, - "suggested": true - }, - "Phi-3-Mini-Instruct-Hybrid": { - "checkpoint": "amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-fp16-onnx-hybrid", - "recipe": "oga-hybrid", - "reasoning": false, - "max_prompt_length": 2000, - "suggested": true - }, - "Phi-3.5-Mini-Instruct-Hybrid": { - "checkpoint": "amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-fp16-onnx-hybrid", - "recipe": "oga-hybrid", - "reasoning": false, - "suggested": false - }, - "Qwen-1.5-7B-Chat-Hybrid": { - "checkpoint": "amd/Qwen1.5-7B-Chat-awq-g128-int4-asym-fp16-onnx-hybrid", - "recipe": "oga-hybrid", - "reasoning": false, - "max_prompt_length": 3000, - "suggested": true - }, - "DeepSeek-R1-Distill-Llama-8B-Hybrid": { - "checkpoint": "amd/DeepSeek-R1-Distill-Llama-8B-awq-asym-uint4-g128-lmhead-onnx-hybrid", - "recipe": "oga-hybrid", - "reasoning": true, - "max_prompt_length": 2000, - "suggested": true - }, - "DeepSeek-R1-Distill-Qwen-7B-Hybrid": { - "checkpoint": "amd/DeepSeek-R1-Distill-Qwen-7B-awq-asym-uint4-g128-lmhead-onnx-hybrid", - "recipe": "oga-hybrid", - "reasoning": true, - "max_prompt_length": 2000, - "suggested": true - }, - "Llama-3.2-1B-Instruct-DirectML": { - "checkpoint": "amd/Llama-3.2-1B-Instruct-dml-int4-awq-block-128-directml", - "recipe": "oga-igpu", - "reasoning": false, - "suggested": false - }, - "Llama-3.2-3B-Instruct-DirectML": { - "checkpoint": "amd/Llama-3.2-3B-Instruct-dml-int4-awq-block-128-directml", - "recipe": "oga-igpu", - "reasoning": false, - "suggested": false - }, - "Phi-3.5-Mini-Instruct-DirectML": { - "checkpoint": "amd/phi3.5-mini-instruct-int4-awq-block-128-directml", - "recipe": "oga-igpu", - "reasoning": false, - "suggested": false - }, - "Qwen-1.5-7B-Chat-DirectML": { - "checkpoint": "amd/Qwen1.5-7B-Chat-dml-int4-awq-block-128-directml", - "recipe": "oga-igpu", - "reasoning": false, - "suggested": false - } -} \ No newline at end of file