Let's discuss the examples in the README. In the SQLFS example, the `logs` table is represented as a directory, while executable actions are represented as files within that directory. The example includes `schema` and `query` files, and there may be others such as `insert`.
If I were to map a REST API to a file system (for example, a CRM) and wanted to query a lead, I would run `cat /crm/leads/20251211-openai-inquiry/info`, where `20251211-openai-inquiry` is the name of the lead. Similarly, valid actions would need to be defined as files under that directory, such as `info`, `edit`, etc.
What about adding a note to a lead? Would it be `cat note.json > /crm/leads/20251211-openai-inquiry/notes/create`? (Note that `echo note.json > ...` would write the literal string `note.json`, not the file's contents.)
Mapping actions to files feels unintuitive. I haven't yet figured out how to map only entities to files while mapping all actions to standard shell tools.
Gemini gave me the suggestion below, but we need a general implementation that works for any REST API: a RESTFS.
A quick thought on your design dilemma
You have hit on a classic problem in Virtual File System (VFS) design: The Semantic Gap.
Standard shell tools only have a few "verbs" (read, write, create via `touch`/`mkdir`, delete via `rm`). APIs usually have many more verbs (POST, PUT, PATCH, custom RPCs).
If you want to map actions purely to shell tools without "action files," you usually have to rely on context-aware writes:
- Create: `mkdir /crm/leads/new-lead-name` (triggers a POST)
- Update: `echo "new content" > /crm/leads/lead-id/description` (triggers a PATCH)
- Delete: `rm -rf /crm/leads/lead-id` (triggers a DELETE)
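The context-aware mapping above can be sketched as a pure path-to-request translator. This is a hypothetical sketch, not an existing RESTFS implementation; the `API_BASE` URL and the `/crm/leads` layout are assumptions for illustration:

```python
# Hypothetical sketch: translate VFS operations under /crm into REST
# requests. API_BASE and the path layout are assumed, not real endpoints.
API_BASE = "https://api.example.com"

def vfs_to_request(op, path, data=None):
    """Map a file-system operation to an (HTTP method, URL, body) triple."""
    url = API_BASE + path.removeprefix("/crm")
    if op == "mkdir":  # mkdir /crm/leads/new-lead-name -> POST /leads
        parent, name = url.rsplit("/", 1)
        return ("POST", parent, {"name": name})
    if op == "write":  # echo "..." > /crm/leads/id/field -> PATCH /leads/id
        resource, field = url.rsplit("/", 1)
        return ("PATCH", resource, {field: data})
    if op == "rm":     # rm -rf /crm/leads/id -> DELETE /leads/id
        return ("DELETE", url, None)
    if op == "read":   # cat /crm/leads/id/info -> GET /leads/id/info
        return ("GET", url, None)
    raise ValueError(f"unsupported operation: {op}")
```

In a real RESTFS this translator would sit behind a FUSE layer, so the shell never sees HTTP at all; only the path grammar and the small set of file-system verbs carry the intent.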
The difficulty arises with complex actions (like "sending an email" or "archiving"). For those, developers often resort to the `magic_file` approach (writing to a specialized file) because standard `cp` or `mv` commands don't carry enough semantic meaning to trigger specific API business logic.
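One hedged way to keep entities as files while still supporting those complex verbs is a single well-known control file per resource, rather than one action file per verb. The sketch below is an assumption-laden illustration: the `_actions` filename, the verb names, and the Google-style `resource:verb` custom-method URL are all hypothetical choices, not part of any real API:

```python
import json

# Hypothetical magic-file dispatcher: a write to the "_actions" control
# file is parsed as a JSON command ({"verb": ..., ...}); any other write
# becomes a plain PATCH on the parent resource. All names are assumed.
CUSTOM_VERBS = {"archive", "send_email"}

def handle_write(path, payload):
    """Dispatch a VFS write to either a custom RPC or a PATCH."""
    parts = path.strip("/").split("/")
    if parts[-1] == "_actions":
        cmd = json.loads(payload)
        verb = cmd.pop("verb")
        if verb not in CUSTOM_VERBS:
            raise ValueError(f"unknown verb: {verb}")
        resource = "/" + "/".join(parts[:-1])
        # e.g. POST /crm/leads/lead-id:archive (Google-style custom method)
        return ("POST", f"{resource}:{verb}", cmd)
    # ordinary field write -> PATCH on the parent resource
    resource, field = "/" + "/".join(parts[:-1]), parts[-1]
    return ("PATCH", resource, {field: payload})
```

Under this scheme, archiving a lead would look like `echo '{"verb": "archive"}' > /crm/leads/lead-id/_actions`, so the shell still only needs its standard write verb while the file system supplies the semantics.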