Commit dc382a2

Enhance documentation formatting and structure for clarity
- Added detailed question formatting guidelines to AGENTS.md for main section and FAQ questions.
- Updated FAQs section headers in multiple talk documents to ensure consistent markdown usage.
- Improved overall organization and readability of the documentation, aligning with established formatting standards.
1 parent 947aee5 commit dc382a2

File tree: 3 files changed (+29, −2 lines)

docs/talks/AGENTS.md

Lines changed: 27 additions & 0 deletions
@@ -40,6 +40,31 @@ All talk titles follow a **catchy, conversational format** designed to grab attention
 - **Code examples**: Where applicable, include implementation details
 - **Performance metrics**: Specific numbers and improvements mentioned
 
+## Question Formatting Guidelines
+**Main Section Questions**: Use proper markdown headers for navigable sections:
+```markdown
+## Why is accurate document parsing so critical for AI applications?
+## How should you evaluate document parsing performance?
+## What are the most challenging document elements to parse correctly?
+```
+
+**FAQ Section Questions**: Use bold emphasis within FAQ content:
+```markdown
+## FAQs
+
+**What is document ingestion in the context of AI applications?**
+
+Document ingestion refers to the process of extracting...
+
+**Why is accurate document parsing so important for AI applications?**
+
+Accurate parsing is critical because...
+```
+
+**Key Distinction**:
+- `## Question?` = Main section headers (navigable, structured content)
+- `**Question?**` = FAQ emphasis (within content sections only)
+
 ## Key Topics Covered
 - **"Why I Stopped Using RAG for Coding Agents (And You Should Too)"** - Nik Pash (Cline)
 - **"The RAG Mistakes That Are Killing Your AI (Lessons from Google & LinkedIn)"** - Skylar Payne
@@ -63,6 +88,8 @@ All talk titles follow a **catchy, conversational format** designed to grab attention
 ## Formatting Standards
 - **Consistent H1 titles**: Match YAML frontmatter exactly
 - **Proper markdown structure**: Use ## for main sections, ### for subsections
+- **Question headers**: Use `## Question?` format for main section questions (NOT `**Question?**`)
+- **FAQ sections**: Use `**Question?**` for emphasis within FAQ content sections
 - **Bold key takeaways**: `**Key Takeaway:**` format for main insights
 - **Blockquotes for quotes**: Use `>` for speaker quotes
 - **Bullet points**: Use `-` for lists with **bold** labels
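The question-formatting rule this commit adds is mechanical enough to lint. Below is a minimal sketch of such a check; the `check_question_formatting` helper, its regex, and its FAQ-section heuristic are hypothetical illustrations, not part of this commit or repository:

```python
import re

# A bold question line per the guideline: **Question text?**
BOLD_QUESTION = re.compile(r"^\*\*.+\?\*\*$")

def check_question_formatting(markdown: str) -> list[str]:
    """Warn when **Question?** emphasis appears outside a FAQs section.

    Per the guideline: `## Question?` for main sections, `**Question?**`
    only within FAQ content. (Hypothetical checker, not from the commit.)
    """
    warnings = []
    in_faq = False
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        stripped = line.strip()
        if stripped.startswith("#"):
            # Each heading starts a new section; FAQ mode holds only
            # while the current heading is a FAQs/FAQ heading.
            in_faq = stripped.lstrip("#").strip().rstrip(":") in ("FAQs", "FAQ")
        elif BOLD_QUESTION.match(stripped) and not in_faq:
            warnings.append(f"line {lineno}: bold question outside FAQs: {stripped}")
    return warnings

doc = """## Overview
**Why is parsing hard?**

## FAQs

**What is document ingestion?**
"""
print(check_question_formatting(doc))
# Flags only the bold question under ## Overview, not the one under ## FAQs.
```

A check along these lines could run in CI against `docs/talks/` to keep future documents aligned with the standard.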

docs/talks/extend-eli-document-workflows.md

Lines changed: 1 addition & 1 deletion
@@ -189,7 +189,7 @@ As AI capabilities continue to advance, the companies that will benefit most are
 
 ---
 
-# FAQs
+## FAQs
 
 ### What is document automation and why is it important?
 
docs/talks/fine-tuning-rerankers-embeddings-ayush-lancedb.md

Lines changed: 1 addition & 1 deletion
@@ -168,7 +168,7 @@ This is why having a range of options - from lightweight models like MiniLM to m
 
 **Key Takeaway:** Model selection should be driven by your specific constraints and requirements, not just raw performance numbers. Consider the entire system when making these decisions.
 
-## FAQs:
+## FAQs
 
 ## What are re-rankers and why should I use them?
 Re-rankers are models that improve retrieval quality by reordering documents after they've been retrieved from a database. They fit into your pipeline after retrieval and before the context is provided to an LLM, helping to ensure the most relevant documents appear at the top. Re-rankers are particularly valuable because they don't disrupt your existing pipeline—you don't need to re-ingest your entire dataset, making them a low-hanging fruit for improving retrieval performance.
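The FAQ answer above describes where a re-ranker sits, not how one is built. The sketch below only illustrates that pipeline position; `rerank` and `overlap_score` are hypothetical names, and the token-overlap scorer is a stand-in for the cross-encoder model a real system would use to score query-document pairs:

```python
from typing import Callable

def rerank(query: str, retrieved_docs: list[str],
           score: Callable[[str, str], float], top_k: int = 3) -> list[str]:
    """Reorder already-retrieved documents by a relevance score.

    The re-ranker slots in *after* retrieval and *before* the LLM prompt,
    so the existing index and ingestion pipeline stay untouched.
    """
    return sorted(retrieved_docs, key=lambda d: score(query, d), reverse=True)[:top_k]

def overlap_score(query: str, doc: str) -> float:
    # Stand-in scorer: fraction of query tokens appearing in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

docs = [
    "Embedding models map text to vectors.",
    "Re-rankers reorder retrieved documents by relevance.",
    "LanceDB stores vectors on disk.",
]
top = rerank("how do re-rankers reorder documents", docs, overlap_score, top_k=1)
print(top)
# → ['Re-rankers reorder retrieved documents by relevance.']
```

Because the re-ranker only consumes the retrieval output, swapping the stand-in scorer for a stronger model changes nothing else in the pipeline, which is the "low-hanging fruit" property the answer describes.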

0 commit comments

Comments
 (0)