We're exploring an idea that could significantly expand the role of Fumadocs beyond documentation publishing and into the domain of measurable knowledge consumption.
The aim is to solve a growing problem in AI-native engineering teams: we can publish infinite documentation, but we can't tell if anyone actually reads, understands, or applies it.
This discussion is to explore whether Fumadocs could evolve to support these capabilities, either natively or via extensions.
## Why This Matters
In modern dev environments, especially those where AI-generated content is exploding, teams face a subtle but substantial problem:
- Reading is assumed, not verified
- Comprehension is unmeasured
- Code examples may be executed, but no signals indicate by whom or with what result
- Feedback loops are fragmented or nonexistent
This means organizations cannot answer foundational questions such as:
- "Who actually read the instructions?"
- "Did they understand them?"
- "Where are people confused?"
- "Which examples were executed, and which were ignored?"
## Core Idea
Could Fumadocs support lightweight interaction and tracking elements that signal comprehension and reading progress?
Here are initial interaction concepts we are exploring:
### Reading Tracking

| Feature | Description |
| --- | --- |
| Page view tracking | Record when users open a page |
| Time on page | Track how long users spend reading (per section, not just page-level) |
| Scroll depth | Measure how far users scroll through content |
| Reading status | Simple state model: unread → in-progress → completed |
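To make the scroll-depth and time-on-page rows above concrete, here is a minimal client-side sketch. It assumes a React client component and a hypothetical `/api/reading-events` endpoint; neither is part of Fumadocs today, and the event shape is purely illustrative.

```tsx
'use client';

import { useEffect, useRef } from 'react';

// Hypothetical event shape for discussion; not an existing Fumadocs API.
interface ReadingEvent {
  page: string;
  maxScrollDepth: number; // 0..1
  activeMs: number;
}

export function useReadingTracker(page: string) {
  const depth = useRef(0);
  const start = useRef(Date.now());

  useEffect(() => {
    const onScroll = () => {
      const scrollable =
        document.documentElement.scrollHeight - window.innerHeight;
      const ratio = scrollable > 0 ? window.scrollY / scrollable : 1;
      depth.current = Math.max(depth.current, Math.min(ratio, 1));
    };

    const flush = () => {
      const event: ReadingEvent = {
        page,
        maxScrollDepth: depth.current,
        activeMs: Date.now() - start.current,
      };
      // sendBeacon survives page unload more reliably than fetch.
      navigator.sendBeacon('/api/reading-events', JSON.stringify(event));
    };

    window.addEventListener('scroll', onScroll, { passive: true });
    window.addEventListener('pagehide', flush);
    return () => {
      window.removeEventListener('scroll', onScroll);
      window.removeEventListener('pagehide', flush);
    };
  }, [page]);
}
```

A reading status (unread → in-progress → completed) could then be derived server-side from these events, e.g. "completed" once scroll depth and active time pass configurable thresholds.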
### Comprehension Signals

| Feature | Description |
| --- | --- |
| Emoji reactions | Lightweight feedback per section (👍 👎 ✅ ❓) |
| Micro-prompts | "Was this clear?" / "Need more examples?" one-click buttons |
| Inline questions | Allow readers to ask questions directly next to content |
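As a strawman for the micro-prompt row, the signal could be an MDX component authors drop under a section. The endpoint, signal values, and component name below are assumptions, not a proposed final API.

```tsx
'use client';

import { useState } from 'react';

// Hypothetical signal values and endpoint; purely illustrative.
type Signal = 'clear' | 'unclear' | 'needs-examples';

export function SectionFeedback({ sectionId }: { sectionId: string }) {
  const [sent, setSent] = useState(false);

  async function send(signal: Signal) {
    await fetch('/api/section-feedback', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ sectionId, signal }),
    });
    setSent(true);
  }

  if (sent) return <p>Thanks for the feedback!</p>;

  return (
    <div role="group" aria-label="Section feedback">
      <button onClick={() => send('clear')}>Clear 👍</button>
      <button onClick={() => send('unclear')}>Confusing ❓</button>
      <button onClick={() => send('needs-examples')}>Need more examples</button>
    </div>
  );
}
```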
### Code Execution Tracking

| Feature | Description |
| --- | --- |
| Example interaction | Track which code blocks users clicked/copied |
| Execution logs | Record whether code examples were run (with optional runner integration) |
| Success tracking | Log whether executed examples succeeded or failed |
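The lightest-weight version of example-interaction tracking is listening for copy events on rendered code blocks. A rough sketch follows; the `data-block-id` attribute and `/api/code-events` endpoint are assumptions (something a build step could stamp onto each block), not existing Fumadocs behavior.

```ts
// Listen for copy events inside rendered code blocks and report which
// block was copied. Selector, attribute, and endpoint are assumptions.
export function trackCodeCopies() {
  document.addEventListener('copy', () => {
    const selection = window.getSelection();
    if (!selection || selection.rangeCount === 0) return;

    const block = selection.anchorNode?.parentElement?.closest('pre');
    if (!block) return;

    navigator.sendBeacon(
      '/api/code-events',
      JSON.stringify({
        page: window.location.pathname,
        blockId: block.dataset.blockId ?? null,
        kind: 'copy',
      }),
    );
  });
}
```

Execution and success tracking would need a runner integration on top of this, which is a much bigger lift and probably plugin territory.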
### Analytics Dashboard

| Feature | Description |
| --- | --- |
| Individual progress | What has each user read and understood? |
| Team analytics | Who is up to date? Who is behind? Where are knowledge gaps? |
| Content insights | Which pages have low completion? Which sections confuse readers? |
| Reading assignments | Allow admins to assign reading bundles to users/teams |
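For discussion purposes, here is one possible shape for the events a dashboard would aggregate and for a reading assignment. Field names are illustrative assumptions, not a proposed schema.

```ts
// Illustrative event and assignment shapes; assumptions for discussion only.
type DocEvent =
  | { kind: 'page-view'; userId: string; page: string; at: string }
  | { kind: 'reading-progress'; userId: string; page: string; scrollDepth: number; activeMs: number; at: string }
  | { kind: 'section-feedback'; userId: string; sectionId: string; signal: 'clear' | 'unclear' | 'needs-examples'; at: string }
  | { kind: 'code-copy'; userId: string; page: string; blockId: string | null; at: string };

interface ReadingAssignment {
  id: string;
  assignedTo: string[]; // user or team ids
  pages: string[];      // doc slugs to read
  dueDate?: string;
  // Per-user status, derived from DocEvent streams.
  status: Record<string, 'unread' | 'in-progress' | 'completed'>;
}
```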
## Questions for the Community

We would love thoughts on:

- Is this direction aligned with the vision for Fumadocs, or out of scope?
- Should this evolve as:
  - a core enhancement,
  - a first-party extension, or
  - a plugin ecosystem?
- Are there existing architectural hooks for event tracking or UI extensions?