
Commit d8f1e7b: Add backup playbooks
1 parent: 95e65db

19 files changed, 133 additions & 15 deletions
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+# Build Node Based AI Agents and RAG Workflows with Dify
+
+<!-- Playbook content goes here -->
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
+{
+  "id": "dify-ai-agents",
+  "title": "Build Node Based AI Agents and RAG Workflows with Dify",
+  "description": "Create AI agents and RAG workflows using Dify's visual node editor with llama.cpp on your STX Halo™",
+  "time": 60,
+  "platforms": ["linux", "windows"],
+  "difficulty": "intermediate",
+  "isNew": false,
+  "isFeatured": false,
+  "published": true,
+  "tags": ["dify", "agents", "rag", "llamacpp", "workflows"]
+}
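The playbook.json files added in this commit all share the same metadata shape. As a rough sketch (the field set and types below are inferred from these diffs, not from an official schema), a small validator could check new playbook metadata before it is committed:

```python
import json

# Metadata from the first playbook.json in this commit (copied from the diff).
PLAYBOOK_JSON = """
{
  "id": "dify-ai-agents",
  "title": "Build Node Based AI Agents and RAG Workflows with Dify",
  "description": "Create AI agents and RAG workflows using Dify's visual node editor with llama.cpp on your STX Halo™",
  "time": 60,
  "platforms": ["linux", "windows"],
  "difficulty": "intermediate",
  "isNew": false,
  "isFeatured": false,
  "published": true,
  "tags": ["dify", "agents", "rag", "llamacpp", "workflows"]
}
"""

# Expected type per field, inferred from the diffs above -- an assumption,
# not a schema published by the repository.
SCHEMA = {
    "id": str,
    "title": str,
    "description": str,
    "time": int,
    "platforms": list,
    "difficulty": str,
    "isNew": bool,
    "isFeatured": bool,
    "published": bool,
    "tags": list,
}

def validate_playbook(raw: str) -> dict:
    """Parse playbook.json text and verify every expected field is present
    with the expected type; raise ValueError on the first mismatch."""
    data = json.loads(raw)
    for field, expected in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        value = data[field]
        # bool is a subclass of int in Python, so reject booleans where
        # an integer (e.g. "time" in minutes) is expected.
        if expected is int and isinstance(value, bool):
            raise ValueError(f"{field} should be an integer, not a boolean")
        if not isinstance(value, expected):
            raise ValueError(f"{field} should be of type {expected.__name__}")
    return data

meta = validate_playbook(PLAYBOOK_JSON)
print(meta["id"])  # → dify-ai-agents
```

A check like this could run in CI over every `playbook.json` under `playbooks/` so that a missing or mistyped field fails the build rather than surfacing at render time.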

playbooks/backup/dreambooth-lora-finetuning/playbook.json

Lines changed: 0 additions & 13 deletions
This file was deleted.
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+# Quantize and Export Models to GGUF
+
+<!-- Playbook content goes here -->
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
+{
+  "id": "gguf-quantization-export",
+  "title": "Quantize and Export Models to GGUF",
+  "description": "Learn how to quantize and export models to GGUF format using llama.cpp on your STX Halo™",
+  "time": 60,
+  "platforms": ["linux", "windows"],
+  "difficulty": "intermediate",
+  "isNew": false,
+  "isFeatured": false,
+  "published": true,
+  "tags": ["gguf", "quantization", "llamacpp", "model-export"]
+}
Lines changed: 1 addition & 2 deletions
@@ -1,4 +1,3 @@
-# DreamBooth Fine-tuning with LoRA
+# Local Foundry
 
 <!-- Playbook content goes here -->
-
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
+{
+  "id": "local-foundry",
+  "title": "Local Foundry",
+  "description": "Set up and use Local Foundry on your STX Halo™",
+  "time": 60,
+  "platforms": ["linux", "windows"],
+  "difficulty": "intermediate",
+  "isNew": true,
+  "isFeatured": false,
+  "published": true,
+  "tags": ["foundry", "local", "llm"]
+}
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+# Building a Research Agent Using MCP
+
+<!-- Playbook content goes here -->
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
+{
+  "id": "mcp-research-agent",
+  "title": "Building a Research Agent Using MCP",
+  "description": "Build a research agent using the Model Context Protocol (MCP) on your STX Halo™",
+  "time": 60,
+  "platforms": ["linux", "windows"],
+  "difficulty": "intermediate",
+  "isNew": true,
+  "isFeatured": false,
+  "published": true,
+  "tags": ["mcp", "agents", "research", "llm"]
+}
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+# Getting Started with Ollama
+
+<!-- Playbook content goes here -->

0 commit comments
