
Commit f15aa97 ("clean up and reorganize"), parent 1020db1

3 files changed: +77 −1 lines

Lab_5_Foundry_Agents/README.md

Lines changed: 3 additions & 1 deletion
@@ -4,7 +4,9 @@ In this lab, you'll build your first intelligent agents using the **Microsoft Ag
## What is the Microsoft Agent Framework?

-The [Microsoft Agent Framework](https://github.com/microsoft/agent-framework) is a production-ready framework for building AI agents that can reason, use tools, and maintain context across conversations. It provides:
+The [Microsoft Agent Framework](https://github.com/microsoft/agent-framework) is a production-ready framework for building AI agents that can reason, use tools, and maintain context across conversations. At its core, it solves a fundamental problem: LLMs are stateless. Every time you send a message, the model has no memory of previous interactions and no access to your data. The agent framework bridges this gap by orchestrating a lifecycle around each LLM invocation — injecting relevant knowledge before the call (via context providers), giving the model the ability to take actions (via tools), and extracting useful information from the response afterward. This turns a bare LLM into an agent that can retrieve, reason, and act.
+
+The framework provides:

- **Agents** — AI systems that receive instructions, use tools, and generate responses via an LLM
- **Tools** — Python functions the agent can decide to call based on the user's query
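That lifecycle can be pictured with a minimal, self-contained sketch in plain Python. This is not the actual `agent-framework` API; `ToyAgent`, its trivial keyword-based tool router, and its prompt assembly are stand-ins that show how a stateless "LLM" only gains memory and tool use through what the agent loop injects around each call:

```python
# Conceptual sketch (NOT the real agent-framework API): an agent wraps a
# stateless LLM with injected context and callable tools.
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    instructions: str
    tools: dict = field(default_factory=dict)    # name -> Python function
    history: list = field(default_factory=list)  # the memory a bare LLM lacks

    def run(self, user_message: str) -> str:
        self.history.append(("user", user_message))
        # 1. Context injection: build the full prompt the LLM actually sees.
        prompt = [self.instructions] + [f"{role}: {text}" for role, text in self.history]
        # 2. Tool use: a trivial keyword router; a real LLM decides via function calling.
        for name, fn in self.tools.items():
            if name in user_message:
                result = fn()
                self.history.append(("tool", result))
                return f"Used {name}: {result}"
        return f"LLM saw {len(prompt)} lines of context"

agent = ToyAgent(instructions="You are a helpful assistant.",
                 tools={"weather": lambda: "72F and sunny"})
print(agent.run("what's the weather today?"))  # Used weather: 72F and sunny
print(agent.run("thanks!"))                    # LLM saw 4 lines of context
```

The second call shows the key point: the "LLM" sees a growing prompt only because the agent re-injects history on every invocation.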

Lab_6_Context_Providers/README.md

Lines changed: 40 additions & 0 deletions
@@ -2,6 +2,46 @@
In this lab, you'll learn how to use **context providers** with the Microsoft Agent Framework (MAF) to automatically inject knowledge graph context into agent conversations. Instead of defining tools that the agent must explicitly call, context providers run automatically before each agent invocation, enriching the LLM with relevant information from your Neo4j knowledge graph.

## What is Neo4jContextProvider?

The notebooks in this lab use `Neo4jContextProvider` from the [`agent-framework-neo4j`](https://github.com/neo4j-labs/agent-framework-neo4j) package. It's a MAF context provider that connects your agent to a Neo4j knowledge graph, automatically searching for relevant content and injecting it into the LLM's context window before every invocation.
### How It Works

When the agent receives a query, the provider's `before_run()` hook:

1. Takes the most recent messages from the conversation (configurable via `message_history_count`, default 10)
2. Concatenates the message text into a single search query
3. Executes a search against a Neo4j index (vector, fulltext, or hybrid)
4. Formats the results with scores and metadata (e.g., `[Score: 0.892] [company: Apple Inc]`)
5. Injects the formatted context into the agent's session via `context.extend_messages()`

The LLM then sees this context alongside the user's question and can ground its answer in real knowledge graph data.
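The five steps above can be sketched in plain Python. This is a toy stand-in, not the provider's real code: `fake_index` replaces the live Neo4j search, and the function returns the context string instead of calling `context.extend_messages()`:

```python
# Illustrative sketch of the before_run() pipeline described above.
def before_run(messages, search_index, message_history_count=10):
    # 1. Take the most recent messages (default 10).
    recent = messages[-message_history_count:]
    # 2. Concatenate the message text into a single search query.
    query = " ".join(m["text"] for m in recent)
    # 3. Search the index (stubbed; really a vector/fulltext/hybrid Neo4j search).
    hits = search_index(query)
    # 4. Format results with scores and metadata.
    lines = [f"[Score: {score:.3f}] [{k}: {v}] {text}"
             for score, text, meta in hits for k, v in meta.items()]
    # 5. This string would be injected via context.extend_messages().
    return "\n".join(lines)

def fake_index(query):
    # Pretend every query matches one chunk about Apple.
    return [(0.892, "Apple reported record iPhone revenue.", {"company": "Apple Inc"})]

print(before_run([{"text": "What drives Apple's revenue?"}], fake_index))
# [Score: 0.892] [company: Apple Inc] Apple reported record iPhone revenue.
```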
### Search Modes

The provider supports three search modes via the `index_type` parameter:

| Mode | How It Works | Best For |
|------|-------------|----------|
| **`vector`** | Converts query to an embedding, searches a vector index by cosine similarity | Finding conceptually related content even when keywords don't match |
| **`fulltext`** | Tokenizes query, searches a fulltext index using BM25 scoring | Finding content with specific terms and exact phrases |
| **`hybrid`** | Runs both vector and fulltext searches, combines scores | Comprehensive retrieval combining semantic understanding and keyword matching |
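One common way a hybrid mode combines the two score sets is min-max normalization of each followed by a weighted sum. The exact fusion used by the provider's underlying retrievers may differ; this toy shows the general idea:

```python
# Toy hybrid fusion: normalize cosine and BM25 scores to [0, 1], then
# blend them. An assumed scheme for illustration, not the provider's code.
def normalize(scores):
    lo, hi = min(scores.values()), max(scores.values())
    return {k: (v - lo) / (hi - lo) if hi > lo else 1.0 for k, v in scores.items()}

def hybrid(vector_hits, fulltext_hits, alpha=0.5):
    v, f = normalize(vector_hits), normalize(fulltext_hits)
    docs = set(v) | set(f)
    return sorted(((alpha * v.get(d, 0.0) + (1 - alpha) * f.get(d, 0.0), d)
                   for d in docs), reverse=True)

vector_hits   = {"chunk_a": 0.90, "chunk_b": 0.70, "chunk_c": 0.50}  # cosine similarity
fulltext_hits = {"chunk_b": 7.2, "chunk_c": 3.1}                     # BM25 scores
print(hybrid(vector_hits, fulltext_hits))
# chunk_b ranks first: strong on keywords, decent semantically
```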
### Graph Enrichment

The provider's most powerful feature is **graph enrichment** via the `retrieval_query` parameter. After the initial index search finds matching nodes, a custom Cypher query traverses the graph to pull in related entities — company names, products, risk factors, executives — giving the LLM much richer context than the matched text alone.

The provider automatically selects the right underlying retriever based on your configuration:

| index_type | retrieval_query | Retriever Used |
|------------|-----------------|----------------|
| `vector` | Not set | `VectorRetriever` |
| `vector` | Set | `VectorCypherRetriever` |
| `fulltext` | Any | `FulltextRetriever` |
| `hybrid` | Not set | `HybridRetriever` |
| `hybrid` | Set | `HybridCypherRetriever` |
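The dispatch table above can be read as a small piece of selection logic. The retriever names match the rows of the table; the function itself is illustrative, not the package's actual implementation:

```python
# The retriever-selection table, sketched as plain Python.
def pick_retriever(index_type, retrieval_query=None):
    if index_type == "fulltext":
        return "FulltextRetriever"  # retrieval_query is ignored for fulltext
    if index_type == "vector":
        return "VectorCypherRetriever" if retrieval_query else "VectorRetriever"
    if index_type == "hybrid":
        return "HybridCypherRetriever" if retrieval_query else "HybridRetriever"
    raise ValueError(f"unknown index_type: {index_type}")

print(pick_retriever("vector"))                                  # VectorRetriever
print(pick_retriever("hybrid", "MATCH (c)<-[:FROM]-(n) RETURN c"))  # HybridCypherRetriever
```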
## Prerequisites

Before starting, make sure you have:

Lab_7_Agent_Memory/README.md

Lines changed: 34 additions & 0 deletions
@@ -2,6 +2,40 @@
In this lab, you'll learn how to give agents **persistent memory** using the `neo4j-agent-memory` package with the Microsoft Agent Framework. Unlike the knowledge graph context providers in Lab 6 that retrieve from a static knowledge base, agent memory enables agents to remember conversations, learn user preferences, extract entities, and recall similar past interactions — all stored in Neo4j.

## What is Neo4j Agent Memory?

The notebooks in this lab use the [`neo4j-agent-memory`](https://github.com/neo4j-labs/agent-memory) package — a graph-native memory system that gives AI agents persistent, searchable memory stored in Neo4j. While the knowledge graph context providers in Lab 6 retrieve from a static knowledge base you built, agent memory is dynamic: it grows with every conversation as the agent learns user preferences, extracts entities, and records its own reasoning.
### Three Memory Types

**Short-Term Memory** stores conversation history as `Message` nodes with embeddings. This lets the agent semantically search past messages — not just replay them in order, but find the most relevant past exchanges for the current question. Messages are grouped into conversations by session, and entities are automatically extracted during ingestion.

**Long-Term Memory** stores structured knowledge as entities, facts, and preferences. Entities (people, organizations, locations, etc.) are deduplicated using configurable strategies (exact, fuzzy, semantic, or composite matching). Facts are stored as Subject-Predicate-Object triples (e.g., "Apple → manufactures → iPhone"). Preferences capture user-specific information with category and context.

**Reasoning Memory** captures traces of past agent behavior — what tasks were attempted, what tools were called, whether they succeeded or failed, and how long they took. When the agent encounters a similar task later, it can retrieve these traces to learn from its own experience.
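The Subject-Predicate-Object model is easy to picture with a toy in-memory triple store. In the real package these triples become Neo4j nodes and relationships, and deduplication can be fuzzy or semantic rather than the naive exact match used here:

```python
# Toy long-term fact memory: SPO triples with exact-match deduplication.
facts = []

def remember(subject, predicate, obj):
    triple = (subject, predicate, obj)
    if triple not in facts:  # naive "exact" dedup strategy
        facts.append(triple)

def recall(subject=None, predicate=None):
    return [f for f in facts
            if (subject is None or f[0] == subject)
            and (predicate is None or f[1] == predicate)]

remember("Apple", "manufactures", "iPhone")
remember("Apple", "manufactures", "iPhone")  # duplicate, silently dropped
remember("Apple", "headquartered_in", "Cupertino")
print(recall(subject="Apple"))  # both facts about Apple
```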
### Memory Context Provider

The package provides its own `Neo4jContextProvider` (distinct from the one in Lab 6) that integrates with MAF:

- **`before_run()`** retrieves context from all three memory types: recent conversation history, semantically relevant past messages, matching preferences, related entities, and similar reasoning traces
- **`after_run()`** stores the new messages and automatically extracts entities from the conversation
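A toy version of those two hooks shows the shape of the read/write cycle. Word overlap stands in for the semantic search, and capitalized tokens stand in for LLM-based entity extraction; the real provider does both against Neo4j:

```python
# Conceptual shape of the memory provider's hooks, NOT the real class.
class ToyMemoryProvider:
    def __init__(self):
        self.messages = []     # short-term: conversation history
        self.entities = set()  # long-term: extracted entities

    def before_run(self, query):
        # Retrieve relevant context: any stored message sharing a word with
        # the query (the real provider uses embedding similarity in Neo4j).
        words = set(query.lower().split())
        return [m for m in self.messages if words & set(m.lower().split())]

    def after_run(self, user_msg, agent_msg):
        # Store the new turn, then "extract" entities: here just capitalized
        # tokens; the real package uses an LLM extractor.
        self.messages += [user_msg, agent_msg]
        tokens = (user_msg + " " + agent_msg).replace(".", " ").replace(",", " ").split()
        self.entities |= {t for t in tokens if t[0].isupper() and len(t) > 1}

mem = ToyMemoryProvider()
mem.after_run("I prefer Neo4j for graph work", "noted, you like Neo4j")
print(mem.before_run("what do I prefer?"))  # finds the earlier message
print(mem.entities)                         # {'Neo4j'}
```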
### Memory Tools

Beyond automatic context injection, the package provides six callable tools via `create_memory_tools()` that give the agent explicit control over its memory:

| Tool | Purpose |
|------|---------|
| `search_memory` | Search across all memory types (messages, entities, preferences) |
| `remember_preference` | Save a user preference with category and context |
| `recall_preferences` | Retrieve saved preferences by topic |
| `search_knowledge` | Query the knowledge graph for entities by type |
| `remember_fact` | Store a factual relationship as a Subject-Predicate-Object triple |
| `find_similar_tasks` | Retrieve similar past reasoning traces to learn from experience |
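To make the tool idea concrete, here is a hypothetical sketch of three of the six tools backed by a plain dict. The names come from the table above, but the signatures and return values are illustrative, not the package's actual API:

```python
# Hypothetical sketch: memory operations exposed as callable tools the
# agent can invoke via function calling. Signatures are illustrative.
def create_memory_tools(store):
    def remember_preference(topic, value):
        store.setdefault("preferences", {})[topic] = value
        return f"saved: {topic} = {value}"

    def recall_preferences(topic):
        return store.get("preferences", {}).get(topic, "no preference found")

    def remember_fact(subject, predicate, obj):
        store.setdefault("facts", []).append((subject, predicate, obj))
        return f"stored: {subject} -{predicate}-> {obj}"

    return {"remember_preference": remember_preference,
            "recall_preferences": recall_preferences,
            "remember_fact": remember_fact}

store = {}
tools = create_memory_tools(store)
tools["remember_preference"]("database", "Neo4j")
print(tools["recall_preferences"]("database"))  # Neo4j
```

The point of the pattern: unlike `before_run()`, which injects context whether the agent wants it or not, tools let the model decide when to read from or write to memory.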
In Notebook 01, you'll use the **context provider** for automatic memory. In Notebook 02, you'll combine context providers with **memory tools** so the agent can both passively recall and actively manage its memory.

## Prerequisites

Before starting, make sure you have:
