# Smart note UI design #42

Base: `main`
@@ -0,0 +1 @@

```
notes/
```
@@ -0,0 +1,32 @@

# Smart Notes – Landing Page UI Design

## Overview
This folder contains the UI/UX design exploration for the Smart Notes landing page. The goal is to visually communicate the app's privacy-first, offline-by-default philosophy through a clean and focused interface.

## Scope
- Landing page UI design
- No functional implementation included
- Design-first contribution

## Screens Included
- Landing Page (Desktop)

## Design Goals
- Clear value proposition
- Calm, distraction-free layout
- Emphasis on privacy and offline usage
- Developer-friendly design for easy implementation

## Design Decisions
- Minimal color palette
- Bento-style feature cards
- Strong visual hierarchy
- Simple and focused navigation

## Assets
- `design/landing-page.png` – Landing page UI mockup

## Status
Initial design mock submitted for feedback and iteration.
@@ -0,0 +1,84 @@

# Smart Notes – Local Q&A (RAG MVP)

This is a minimal, local-first MVP that allows users to ask natural-language questions over their markdown notes.

## Features (Current MVP)

- Loads markdown files from a local `notes/` directory
- Supports natural-language questions (e.g., "what is AI", "where is AI used")
- Returns sentence-level answers from notes
- Shows the source note filename
- Interactive CLI loop (type `exit` to quit)

This is a starter implementation intended to be extended with embeddings and vector search in future iterations.

---

## How it works

1. Notes are loaded from the local `notes/` directory.
2. Question words (what, where, who, when, etc.) are filtered out.
3. Notes are split into sentences.
4. Relevant sentences are returned based on keyword matching.
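The four steps above can be sketched in a few lines (an illustration only, not the project's actual `qa_cli.py`; the stop-word set and note text are made-up stand-ins):

```python
import re

STOP_WORDS = {"what", "where", "who", "when", "is", "the", "a"}  # trimmed stand-in

def answer(query: str, notes: dict) -> list:
    # Step 2: drop question/stop words from the query
    keywords = [w.lower() for w in query.split() if w.lower() not in STOP_WORDS]
    hits = []
    for name, content in notes.items():                           # Step 1: notes already loaded
        for sentence in re.split(r"(?<=[.!?])\s+", content):      # Step 3: split into sentences
            if any(k in sentence.lower() for k in keywords):      # Step 4: keyword match
                hits.append((name, sentence.strip()))
    return hits

notes = {"test.md": "AI is the simulation of human intelligence. Paris is a city."}
print(answer("what is AI", notes))
# → [('test.md', 'AI is the simulation of human intelligence.')]
```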
|
|
---

## How to run

```bash
python smart-notes/rag_mvp/qa_cli.py
```

Example session:

```text
>> what is AI

[1] From test.md:
Artificial Intelligence (AI) is the simulation of human intelligence in machines.
```

Other queries to try:

- what is machine learning
- how is machine learning used
- difference between AI and ML
# Smart Notes – RAG MVP (Embeddings & FAISS)

This project is a simple **Retrieval-Augmented Generation (RAG)** pipeline for Smart Notes.
It allows users to store notes, convert them into embeddings, and search relevant notes using vector similarity.

---

## 🚀 Features

- Convert notes into embeddings using Sentence Transformers
- Store and search embeddings using FAISS (CPU)
- CLI tool to ask questions about your notes
- Simple chunking for text files
- Works fully offline after model download

---

## 🧠 Tech Stack

- Python 3.10+
- sentence-transformers
- FAISS (faiss-cpu)
- HuggingFace Transformers

---
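The Tech Stack above implies a requirements file along these lines (package names only; versions are not specified in this PR, so none are pinned here — `numpy` is added because the source files import it):

```text
sentence-transformers
faiss-cpu
transformers
numpy
```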
## 📁 Project Structure

```bash
smart-notes/
├── rag_mvp/
│   ├── embed.py          # Embedding logic
│   ├── index.py          # FAISS index creation
│   ├── qa_cli.py         # CLI for asking questions
│   └── utils.py          # Helper functions
├── notes/                # Put your .txt notes here
├── requirements.txt
└── README.md
```

> **Reviewer comment on lines +75 to +84 (Contributor):** Project structure doesn't match actual file names, and the code block is unclosed.
@@ -0,0 +1,31 @@

```python
"""
Chunking utilities for splitting long notes into overlapping chunks.
This helps embeddings capture local context.
"""

from typing import List


def chunk_text(text: str, max_length: int = 500, overlap: int = 50) -> List[str]:
    if not text:
        return []

    chunks = []
    start = 0
    text = text.strip()

    while start < len(text):
        end = start + max_length
        chunk = text[start:end].strip()

        if chunk:
            chunks.append(chunk)

        if end >= len(text):
            break

        start = end - overlap
        if start < 0:
            start = 0

    return chunks
```

> **Reviewer comment on lines +9 to +29 (Contributor):** Infinite loop when `overlap >= max_length`. If `overlap >= max_length`, then `start = end - overlap` never advances past its previous value, so the loop never terminates on long inputs. Proposed fix:
>
> ```diff
>  def chunk_text(text: str, max_length: int = 500, overlap: int = 50) -> List[str]:
>      if not text:
>          return []
> +    if overlap >= max_length:
> +        raise ValueError("overlap must be less than max_length")
>      chunks = []
> ```
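To make the overlap behavior concrete, here is a standalone sketch of the same sliding-window arithmetic (indices only, no text handling):

```python
# Standalone sketch of chunk_text's sliding window (same arithmetic as above),
# used to check the overlap on a 1200-character input.
def window_bounds(n: int, max_length: int = 500, overlap: int = 50) -> list:
    bounds = []
    start = 0
    while start < n:
        end = start + max_length
        bounds.append((start, min(end, n)))
        if end >= n:
            break
        start = end - overlap
    return bounds

# Each window after the first re-reads the last `overlap` characters
# of the previous one:
print(window_bounds(1200))  # → [(0, 500), (450, 950), (900, 1200)]
```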
@@ -0,0 +1,30 @@

```python
"""
Embedding wrapper for converting text chunks into vectors.
Supports pluggable embedding backends later (Ollama, OpenAI, SentenceTransformers).
"""

from typing import List

import numpy as np

try:
    from sentence_transformers import SentenceTransformer
except ImportError:
    SentenceTransformer = None


class Embedder:
    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
        if SentenceTransformer is None:
            raise ImportError(
                "sentence-transformers not installed. Run: pip install sentence-transformers"
            )

        self.model_name = model_name
        self.model = SentenceTransformer(model_name)

    def embed(self, texts: List[str]) -> np.ndarray:
        if not texts:
            return np.array([])

        embeddings = self.model.encode(texts, convert_to_numpy=True)
        return embeddings
```

> **Reviewer comment on lines +25 to +27 (Contributor):** Empty-input return shape is 1-D, but callers likely expect 2-D. Proposed fix:
>
> ```diff
>  def embed(self, texts: List[str]) -> np.ndarray:
>      if not texts:
> -        return np.array([])
> +        return np.empty((0, self.model.get_sentence_embedding_dimension()), dtype=np.float32)
>      embeddings = self.model.encode(texts, convert_to_numpy=True)
> ```
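The 1-D vs 2-D distinction the review raises is easy to demonstrate in plain NumPy (384 is the output dimension of `all-MiniLM-L6-v2`):

```python
import numpy as np

empty_1d = np.array([])                            # current empty-input return value
print(empty_1d.shape)                              # → (0,)

empty_2d = np.empty((0, 384), dtype=np.float32)    # shape-consistent alternative
print(empty_2d.shape)                              # → (0, 384)

# A (0, 384) array still stacks cleanly with real embedding batches;
# a (0,) array would raise a dimension-mismatch error here.
batch = np.ones((2, 384), dtype=np.float32)
print(np.vstack([empty_2d, batch]).shape)          # → (2, 384)
```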
@@ -0,0 +1,41 @@

```python
"""
Simple vector indexer using FAISS for similarity search.
"""

from typing import List

import numpy as np

try:
    import faiss
except ImportError:
    faiss = None


class VectorIndexer:
    def __init__(self, dim: int):
        if faiss is None:
            raise ImportError("faiss not installed. Run: pip install faiss-cpu")

        self.dim = dim
        self.index = faiss.IndexFlatL2(dim)
        self.texts: List[str] = []

    def add(self, embeddings: np.ndarray, chunks: List[str]):
        if len(embeddings) == 0:
            return

        self.index.add(embeddings)
        self.texts.extend(chunks)

    def search(self, query_embedding: np.ndarray, k: int = 3):
        if self.index.ntotal == 0:
            return []

        distances, indices = self.index.search(query_embedding.reshape(1, -1), k)
        results = []

        for idx in indices[0]:
            if idx < len(self.texts):
                results.append(self.texts[idx])

        return results
```

> **Reviewer comment on lines +37 to +39 (Contributor):** Bug: FAISS returns `-1` indices when fewer than `k` vectors are in the index. Since `-1 < len(self.texts)` is true, the guard passes and `self.texts[-1]` silently returns the wrong (last) chunk. Proposed fix:
>
> ```diff
>  for idx in indices[0]:
> -    if idx < len(self.texts):
> +    if 0 <= idx < len(self.texts):
>          results.append(self.texts[idx])
> ```
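For intuition about what `IndexFlatL2` computes, here is a NumPy-only sketch of the same exact search over squared L2 distances (illustrative only, not the FAISS API):

```python
import numpy as np

def l2_search(index_vectors: np.ndarray, query: np.ndarray, k: int = 3) -> np.ndarray:
    # Squared L2 distance from the query to every stored vector,
    # then the indices of the k smallest distances.
    dists = ((index_vectors - query) ** 2).sum(axis=1)
    return np.argsort(dists)[:k]

vecs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]], dtype=np.float32)
query = np.array([0.9, 0.1], dtype=np.float32)
print(l2_search(vecs, query, k=2))  # → [1 0]
```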
@@ -0,0 +1,47 @@

```python
# rag_mvp/pipelines/embedding_pipeline.py

from sentence_transformers import SentenceTransformer
import faiss
import numpy as np


class EmbeddingPipeline:
    def __init__(self, model_name="all-MiniLM-L6-v2"):
        self.model = SentenceTransformer(model_name, cache_folder="D:/models_cache")
        self.index = None
        self.chunks = []

    def chunk_text(self, text, max_length=300, overlap=50):
        chunks = []
        start = 0

        while start < len(text):
            end = start + max_length
            chunk = text[start:end]
            chunks.append(chunk)
            start = end - overlap

        return chunks

    def build_index(self, chunks):
        embeddings = self.model.encode(chunks)
        embeddings = np.array(embeddings).astype("float32")

        dim = embeddings.shape[1]
        self.index = faiss.IndexFlatL2(dim)
        self.index.add(embeddings)

        return embeddings

    def process_notes(self, text):
        self.chunks = self.chunk_text(text)
        embeddings = self.build_index(self.chunks)
        return self.chunks, embeddings

    def semantic_search(self, query, top_k=3):
        query_vec = self.model.encode([query])
        query_vec = np.array(query_vec).astype("float32")

        distances, indices = self.index.search(query_vec, top_k)
        results = [self.chunks[i] for i in indices[0]]

        return results
```

> **Reviewer comment (Contributor):** Hardcoded Windows-specific cache path will break on all other environments. Proposed fix:
>
> ```diff
> -        self.model = SentenceTransformer(model_name, cache_folder="D:/models_cache")
> +        self.model = SentenceTransformer(model_name)
> ```

> **Reviewer comment on lines +8 to +46 (Contributor, 🛠️ Refactor suggestion, 🟠 Major):** This class re-implements chunking, embedding, and indexing instead of composing the existing modules. Sketch of a composed pipeline:
>
> ```diff
> -from sentence_transformers import SentenceTransformer
> -import faiss
> -import numpy as np
> +from rag_mvp.embeddings.chunker import chunk_text
> +from rag_mvp.embeddings.embedder import Embedder
> +from rag_mvp.embeddings.indexer import VectorIndexer
>
>  class EmbeddingPipeline:
>      def __init__(self, model_name="all-MiniLM-L6-v2"):
> -        self.model = SentenceTransformer(model_name, cache_folder="D:/models_cache")
> -        self.index = None
> +        self.embedder = Embedder(model_name)
> +        self.indexer = None
>          self.chunks = []
>
> -    def chunk_text(self, text, max_length=300, overlap=50):
> -        ...
> -
>      def build_index(self, chunks):
> -        embeddings = self.model.encode(chunks)
> -        ...
> +        embeddings = self.embedder.embed(chunks)
> +        self.indexer = VectorIndexer(embeddings.shape[1])
> +        self.indexer.add(embeddings, chunks)
> +        return embeddings
>
>      def process_notes(self, text):
> -        self.chunks = self.chunk_text(text)
> +        self.chunks = chunk_text(text)
>          embeddings = self.build_index(self.chunks)
>          return self.chunks, embeddings
>
>      def semantic_search(self, query, top_k=3):
> -        query_vec = self.model.encode([query])
> -        ...
> +        query_vec = self.embedder.embed([query])
> +        return self.indexer.search(query_vec[0], k=top_k)
> ```

> **Reviewer comment on lines +44 to +46 (Contributor):** `distances` is unpacked but never used (Ruff RUF059 suggests prefixing it with an underscore), and there is no bounds check on the returned indices. Proposed fix:
>
> ```diff
> -        distances, indices = self.index.search(query_vec, top_k)
> -        results = [self.chunks[i] for i in indices[0]]
> +        _distances, indices = self.index.search(query_vec, top_k)
> +        results = [self.chunks[i] for i in indices[0] if 0 <= i < len(self.chunks)]
>          return results
> ```
@@ -0,0 +1,109 @@

```python
import os
import re

#-------------------emedding-pipeline-chunking concept
from rag_mvp.pipelines.embedding_pipeline import EmbeddingPipeline
```

> **Reviewer comment on lines +4 to +5 (Contributor):** Typo: "emedding" → "embedding".
>
> ```diff
> -#-------------------emedding-pipeline-chunking concept
> +#-------------------embedding-pipeline-chunking concept
> ```
```python
def demo_embeddings_pipeline():
    pipeline = EmbeddingPipeline()

    note_text = """
    Python is a programming language.
    It is widely used in AI and machine learning projects.
    Smart Notes helps users organize knowledge using embeddings.
    """

    chunks, embeddings = pipeline.process_notes(note_text)

    print("\n--- Chunks Created ---")
    for i, c in enumerate(chunks):
        print(f"[{i}] {c}")

    query = "What is Python used for?"
    results = pipeline.semantic_search(query)

    print("\n--- Search Results ---")
    for r in results:
        print("-", r)
#-------------------------------------------------


QUESTION_WORDS = {
    "what", "where", "who", "when", "which",
    "is", "are", "was", "were", "the", "a", "an",
    "of", "to", "in", "on", "for"
}

NOTES_DIR = "notes"


def load_notes():
    notes = []
    if not os.path.exists(NOTES_DIR):
        print(f"Notes directory '{NOTES_DIR}' not found.")
        return notes

    for file in os.listdir(NOTES_DIR):
        if file.endswith(".md"):
            path = os.path.join(NOTES_DIR, file)
            with open(path, "r", encoding="utf-8") as f:
                notes.append({
                    "filename": file,
                    "content": f.read()
                })
    return notes


def split_sentences(text):
    return re.split(r'(?<=[.!?])\s+', text)


def search_notes(query, notes):
    results = []

    query_words = [
        word.lower()
        for word in query.split()
        if word.lower() not in QUESTION_WORDS
    ]

    for note in notes:
        sentences = split_sentences(note["content"])
        for sentence in sentences:
            sentence_lower = sentence.lower()
            if any(word in sentence_lower for word in query_words):
                results.append({
                    "filename": note["filename"],
                    "sentence": sentence.strip()
                })

    return results
```
> **Reviewer comment on lines +63 to +82 (Contributor):** Substring matching produces false positives on partial words (for example, a query word "art" matches "cart"). Proposed fix using word boundaries:
>
> ```diff
>  def search_notes(query, notes):
>      results = []
>      query_words = [
>          word.lower()
>          for word in query.split()
>          if word.lower() not in QUESTION_WORDS
>      ]
>      for note in notes:
>          sentences = split_sentences(note["content"])
>          for sentence in sentences:
>              sentence_lower = sentence.lower()
> -            if any(word in sentence_lower for word in query_words):
> +            if any(re.search(r'\b' + re.escape(word) + r'\b', sentence_lower) for word in query_words):
>                  results.append({
>                      "filename": note["filename"],
>                      "sentence": sentence.strip()
>                  })
>      return results
> ```
```python
if __name__ == "__main__":

    demo_embeddings_pipeline()  # Temporary demo for embeddings pipeline
```

> **Reviewer comment on lines +85 to +87 (Contributor):** If `demo_embeddings_pipeline()` raises (for example, a missing dependency or a failed model download), the CLI below never starts. Proposed fix:
>
> ```diff
>  if __name__ == "__main__":
> -
> -    demo_embeddings_pipeline()  # Temporary demo for embeddings pipeline
> +    try:
> +        demo_embeddings_pipeline()  # Temporary demo for embeddings pipeline
> +    except Exception as e:
> +        print(f"Embedding demo skipped: {e}")
>      notes = load_notes()
> ```
```python
    notes = load_notes()

    print("Ask questions about your notes (type 'exit' to quit)\n")

    while True:
        query = input(">> ").strip()

        if query.lower() == "exit":
            print("Goodbye 👋")
            break

        matches = search_notes(query, notes)

        if not matches:
            print("No relevant notes found.\n")
        else:
            print("\n--- Answers ---\n")
            for i, m in enumerate(matches, 1):
                print(f"[{i}] From {m['filename']}:")
                print(m["sentence"])
                print()
```
> **Reviewer comment (README, line 28) (Contributor):** Unclosed code block causes the rest of the README to render as a code literal. The fenced code block opened at line 28 is never closed, so everything after line 29 (including the "How to run" examples and the second project section) renders as preformatted text. Add a closing fence after the example output.