<li><a href="#best-practices">6. Best Practices</a></li>
<li><a href="#glossary">7. Glossary</a></li>
</ul>
</nav>
</aside>

<!-- Main Content -->
<main class="doc-content">

<!-- 1. Introduction -->
<section id="introduction">
<h2>1. Introduction</h2>
<h3>What is Zentaxa?</h3>
<p>Zentaxa is a complete observability platform designed specifically for AI applications. Think of it as a "flight recorder" for your AI agents: it tracks every action, decision, conversation, and error in real time, giving you full visibility into how your AI systems are performing.</p>

<h3>Who is it for?</h3>
<p>Zentaxa is built for:</p>
<ul>
<li><strong>Startups</strong> building AI-native products who need to move fast without breaking things.</li>
<li><strong>Enterprises</strong> deploying AI agents who need compliance, cost control, and reliability.</li>
<li><strong>AI Engineering Teams</strong> who need to debug complex interactions between multiple AI agents.</li>
</ul>

<h3>The Problem It Solves</h3>
<p>AI agents are often "black boxes": it is hard to know why they failed, why they gave a specific answer, or why they cost so much. Zentaxa solves this by providing transparency. It answers questions like:</p>
<ul>
<li>"Why did the agent fail to complete the task?"</li>
<li>"How much did this specific conversation cost?"</li>
<li>"Is the new model faster or slower than the old one?"</li>
</ul>
</section>

<!-- 2. Key Features -->
<section id="features">
<h2>2. Key Features</h2>
<div class="feature-grid">
<div class="feature-card">
<h4>Run History</h4>
<p>A complete log of every session your agents have run. View past conversations and tasks to understand historical performance.</p>
</div>
<div class="feature-card">
<h4>Event Timeline</h4>
<p>A visual step-by-step timeline showing exactly what happened during an agent's execution, including tool use and reasoning.</p>
</div>
<div class="feature-card">
<h4>Token &amp; Cost Tracking</h4>
<p>Monitors usage in real time. See exactly how much you are spending on OpenAI, Anthropic, or other providers per request.</p>
</div>
<div class="feature-card">
<h4>Error Detection</h4>
<p>Automatically highlights failed requests, crashed agents, or API timeouts so you can fix them immediately.</p>
</div>
<div class="feature-card">
<h4>Latency Insights</h4>
<p>Shows how long each step takes. Identify which part of your AI workflow is slowing down the user experience.</p>
</div>
<div class="feature-card">
<h4>Multi-agent Observability</h4>
<p>Designed for complex systems where multiple agents talk to each other. Track the flow of information between them.</p>
</div>
</div>
</section>
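The per-request cost figures that Token &amp; Cost Tracking reports boil down to a simple calculation over token counts. The sketch below is illustrative only, not Zentaxa code: the model names and per-million-token prices are made-up placeholders, since real prices depend on your provider.

```python
# Illustrative sketch only -- not part of the Zentaxa SDK.
# Model names and prices are made-up placeholders; check your
# provider's current pricing before relying on numbers like these.
PRICE_PER_M_TOKENS = {
    # model: (input USD per 1M tokens, output USD per 1M tokens)
    "example-small-model": (0.50, 1.50),
    "example-large-model": (5.00, 15.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single model request."""
    in_price, out_price = PRICE_PER_M_TOKENS[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A request that read 1,200 tokens and produced 300 tokens:
print(f"${estimate_cost('example-large-model', 1_200, 300):.4f}")  # $0.0105
```

Summing this estimate across every request in a session gives the per-conversation cost the dashboard surfaces.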

<!-- 3. Getting Started -->
<section id="getting-started">
<h2>3. Getting Started</h2>
<p>Follow these simple steps to start using the Zentaxa dashboard.</p>
<ul class="step-list">
<li>
<strong>Sign In / Open Dashboard</strong>
<p>Open your web browser and navigate to your Zentaxa instance URL (e.g., <code>http://localhost:5173</code>). No complex setup is required for the viewer.</p>
</li>
<li>
<strong>Connect Agents</strong>
<p>Your engineering team will integrate the Zentaxa SDK into your AI agents. Once connected, data starts appearing in the dashboard automatically.</p>
</li>
<li>
<strong>View Incoming Logs</strong>
<p>Click the <strong>"Pipeline Explorer"</strong> tab in the sidebar. You should see a list of recent activities populating in real time.</p>
</li>
<li>
<strong>Read Basic Metrics</strong>
<p>On the main <strong>Dashboard</strong> page, look at the top cards. <strong>Success Rate</strong> should ideally be high (green). <strong>Total Cost</strong> shows your daily spend.</p>
</li>
</ul>
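Once agents are connected (step 2), they emit structured events that the dashboard renders. This guide does not show the SDK's actual API, so the snippet below is only a hypothetical sketch of what one such event might contain; every field name here is an assumption, not Zentaxa's schema.

```python
# Hypothetical sketch of a structured agent event -- NOT the actual
# Zentaxa SDK API. All field names are illustrative assumptions.
import json
import time
import uuid

def make_event(agent: str, step: str, status: str, **details) -> dict:
    """Build one observability event for a single agent step."""
    return {
        "event_id": str(uuid.uuid4()),   # unique ID for this event
        "timestamp": time.time(),        # Unix time the step finished
        "agent": agent,                  # which agent produced it
        "step": step,                    # e.g. "llm_call", "tool_use"
        "status": status,                # "success" or "error"
        "details": details,             # token counts, latency, etc.
    }

event = make_event("research-agent", "llm_call", "success",
                   input_tokens=850, output_tokens=120, latency_ms=640)
print(json.dumps(event, indent=2))
```

Events shaped like this are what the Pipeline Explorer table (step 3) and the Event Timeline would display row by row.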
</section>
<!-- 4. How to Use Each Module -->
<section id="modules">
<h2>4. How to Use Each Module</h2>

<h3>Dashboard (The Command Center)</h3>
<p><strong>Purpose:</strong> High-level health check of your entire AI system.</p>
<p><strong>What you see:</strong> Graphs showing cost trends, success vs. failure rates, and a list of currently active agents.</p>
<p><strong>Typical Use Case:</strong> Check this page first thing in the morning to ensure no critical failures occurred overnight and that costs are within budget.</p>

<h3>Pipeline Explorer (The Detective Tool)</h3>
<p><strong>Purpose:</strong> Deep dive into specific actions and requests.</p>
<p><strong>What you see:</strong> A detailed table of every single interaction with an AI model. You can filter by date, agent name, or status (Success/Error).</p>
<p><strong>Typical Use Case:</strong> A user reports a "weird answer" from the chatbot. You search here for the specific conversation to see exactly what the AI was asked and how it responded.</p>

<h3>Agent Runs (The Storyteller)</h3>
<p><strong>Purpose:</strong> Understand full workflows and multi-step tasks.</p>
<p><strong>What you see:</strong> Grouped actions that belong to a single task. For example, a "Research Task" might involve 5 separate AI calls. This view groups them together.</p>
<p>Check the time filter in the top right corner. It might be set to a time range with no activity. Try selecting "Last 24 Hours". Also, confirm with your IT team that the backend server is running.</p>
<p>Ensure the specific model pricing is configured in the settings. If you are using a brand new model, the system might need a manual price entry update.</p>
<p>This usually happens if a run started but didn't complete properly or send intermediate steps due to a network error. Check the "Errors" tab for connection issues.</p>
</div>
</section>
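The Dashboard's headline numbers described above are straightforward aggregates over run records. As a rough sketch, assuming each record carries a status and a latency (the fields here are illustrative, not Zentaxa's actual data model), they could be computed like this:

```python
# Rough sketch of the Dashboard's aggregate metrics. The run records
# and their fields are illustrative assumptions, not Zentaxa's schema.
runs = [
    {"status": "success", "latency_ms": 420},
    {"status": "success", "latency_ms": 610},
    {"status": "error",   "latency_ms": 1500},
    {"status": "success", "latency_ms": 380},
]

def success_rate(runs: list) -> float:
    """Fraction of runs that completed successfully."""
    return sum(r["status"] == "success" for r in runs) / len(runs)

def avg_latency_ms(runs: list) -> float:
    """Mean end-to-end latency across runs."""
    return sum(r["latency_ms"] for r in runs) / len(runs)

print(f"Success rate: {success_rate(runs):.0%}")  # Success rate: 75%
print(f"Avg latency: {avg_latency_ms(runs):.1f} ms")
```

A success rate falling below the 95% target mentioned under Best Practices would show up immediately in a check like this.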
<!-- 6. Best Practices -->
<section id="best-practices">
<h2>6. Best Practices</h2>
<ul>
<li><strong>Analyze Behavior:</strong> Regularly review "long-running" agents. If an agent takes 5 minutes for a simple task, it might be getting stuck or looping inefficiently.</li>
<li><strong>Manage Cost:</strong> Check the "Cost" widget daily. If you see a sudden spike, investigate the "Pipeline Explorer" to see which agent is consuming the most tokens.</li>
<li><strong>Compare Prompts:</strong> When your team changes an agent's instructions (prompts), note the time. Compare the "Success Rate" before and after the change to see if it improved performance.</li>
<li><strong>Monitor Reliability:</strong> Aim for a success rate above 95%. If it drops, use the "Error Detection" features to identify the root cause (e.g., API timeouts or bad inputs).</li>