This repository was archived by the owner on Nov 7, 2025. It is now read-only.

Commit 241b935

Move docs to the main repository (#906)
I've archived the old repo.
1 parent bc20672 commit 241b935


46 files changed: +5695 −1 lines

.gitignore

Lines changed: 3 additions & 0 deletions

```diff
@@ -25,3 +25,6 @@ quesma/config.yaml
 quesma/.installation_id
 examples/kibana-sample-data/quesma/logs/*
 bin/.running-docker-compose
+docs/public/node_modules
+docs/public/docs/.vitepress/cache/deps
+docs/public/docs/.vitepress/dist
```

README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -72,7 +72,7 @@ Once it's running, you can access:
 
 ### Development
 
-Developer documentation is available in the [docs](docs/DEVELOPMENT.MD) directory.
+Developer documentation is available in the [docs](docs/dev/DEVELOPMENT.MD) directory.
 
 ### License
 [Elastic License 2.0](https://github.com/QuesmaOrg/quesma/blob/main/LICENSE.MD)
```

File renamed without changes.

docs/dev/README.md

Lines changed: 4 additions & 0 deletions

Quesma development documentation
================================

This folder contains loosely organized development documentation and notes which - while public on GitHub - are not intended for publishing to the main documentation site.
Binary file not shown (−561 KB).

docs/public/README.md

Lines changed: 37 additions & 0 deletions

# Quesma EAP documentation

This folder contains our EAP documentation available at https://eap.quesma.com.
These docs are static files generated with [Vitepress](https://vitepress.dev) and published via Cloudflare Pages.

### Contribute

Install Vitepress first:
```shell
npm add -D vitepress
```

Preview the docs locally while editing:
```shell
npm run docs:dev
```

## Build locally & publish

Build the docs:
```shell
npm run docs:build
```
This builds all the HTML assets into `docs/.vitepress/dist`.

You can preview what you've built with:
```shell
npm run docs:preview
```

And submit the PR :muscle:
Cloudflare Pages will pick up the PR and build a preview version of your changes.

Once merged, the changes will be automatically deployed to Cloudflare Pages (there's an integration set up which deploys from the `main` branch automatically).
Lines changed: 94 additions & 0 deletions

```ts
import { defineConfig } from 'vitepress'
import { withMermaid } from "vitepress-plugin-mermaid";

// https://vitepress.dev/reference/site-config
export default defineConfig({
  title: "Quesma EAP",
  description: "Quesma Database Gateway Early Access Program",
  head: [['link', { rel: 'icon', href: 'favicon.ico' }]],
  themeConfig: {
    // https://vitepress.dev/reference/default-theme-config
    logo: {
      light: '/logo/quesma-logo-black-full-svg.svg',
      dark: '/logo/quesma-logo-white-full-svg.svg'
    },
    siteTitle: 'EAP',
    nav: [
      { text: 'Home', link: '/' },
      { text: 'Getting started', link: '/eap-docs' },
      { text: 'Back to home page', link: 'https://quesma.com' }
    ],

    sidebar: [
      {
        items: [
          { text: 'Getting started', link: '/eap-docs',
            items: [
              { text: 'What is Quesma?', link: '/eap-docs' },
              { text: 'Quick start demo', link: '/quick-start' },
            ],
          },
          { text: 'Installation guide', link: '/installation',
            items: [
              { text: 'Transparent Elasticsearch proxy', link: '/example-1' },
              { text: 'Adding ClickHouse tables to existing Kibana/Elasticsearch ecosystem', link: '/example-2-1',
                items: [
                  { text: 'Adding Hydrolix tables to existing Kibana/Elasticsearch ecosystem', link: '/example-2-1-hydro-specific' }
                ] },
              { text: 'Query ClickHouse tables as Elasticsearch indices', link: '/example-2-0-clickhouse-specific',
                items: [
                  { text: 'Query Hydrolix tables as Elasticsearch indices', link: '/example-2-0' }
                ]
              },
              //{ text: 'Scenario I', link: '/scenario-1' },
              //{ text: 'Reference Docker compose configurations', link: '/reference-conf' }
            ],
          },
          { text: 'Advanced configuration',
            items: [
              { text: 'Configuration primer', link: '/config-primer' },
              { text: 'Ingest', link: '/ingest' },
            ],
          },
          { text: 'Known limitations or unsupported functionalities', link: '/limitations' },
          { text: 'Miscellaneous', link: '/misc',
            items: [
              { text: 'Creating Kibana Data Views', link: '/adding-kibana-dataviews' }
            ]
          }
        ]
      }
    ],

    socialLinks: [
      { icon: 'github', link: 'https://github.com/QuesmaOrg' },
      { icon: 'youtube', link: 'https://www.youtube.com/@QuesmaOrg' }
    ],

    search: {
      provider: 'local'
    }
  },
  ignoreDeadLinks: [
    // ignore exact urls
    '/2024-04-24/reference.tgz',
    '/2024-04-25/reference.tgz',
    '/2024-05-10/reference.tgz',
    '/2024-06-05/reference.tgz',
    '/2024-07-05/reference.tgz',
    // ignore all localhost links
    /^https?:\/\/localhost/
  ],
  // Integrate Mermaid plugin configuration
  ...withMermaid({
    mermaid: {
      // Mermaid configuration options
      // Refer https://mermaid.js.org/config/setup/modules/mermaidAPI.html#mermaidapi-configuration-defaults
    },
    mermaidPlugin: {
      class: "mermaid my-class", // Additional CSS classes for the parent container
    },
  }),
})
```
Lines changed: 17 additions & 0 deletions

# Data Views creation guide

This guide will help you create Data Views for Hydrolix/ClickHouse tables in Kibana.

1. Open Kibana in your browser and navigate to the **Stack Management** section.
   ![An image](./public/kibana-dvs/dv1.jpg)
2. Select the **Data Views** section and click the **Create data view** button.
   ![An image](./public/kibana-dvs/dv2.jpg)
3. In the **Create data view** dialog, you should already see your tables represented as Elasticsearch Data streams.
   ![An image](./public/kibana-dvs/dv3.jpg)
4. In the **Create data view** form, make sure to fill in:
   * the Data view name
   * the Index pattern - in this case, for the `siem` table we'll use `si*`
   * a timestamp field chosen from the dropdown menu (if available, it enables the histogram and time picker in the Discover tab)
   ![An image](./public/kibana-dvs/dv4.jpg)
5. Navigate to the `Discover` tab, where you should be able to see and query your data.
   ![An image](./public/kibana-dvs/dv5.jpg)

docs/public/docs/config-primer.md

Lines changed: 201 additions & 0 deletions

# Configuration primer

## Configuration overview

### Pipelines

Conceptually, Quesma is built as a set of **pipelines** for processing incoming requests. A pipeline consists of:
* **Frontend connector** - responsible for receiving incoming requests and properly responding to them. Frontend connectors define the API which Quesma exposes - for example, the Elasticsearch REST API.
* **Processors** - responsible for processing incoming data, e.g. translating incoming Elasticsearch Query DSL to SQL.
* **Backend connector** - responsible for sending processed data to the backend. Specific types of backend connectors are required by the processors.

An example diagram of the Quesma architecture, where the query pipeline retrieves data from both Elasticsearch and ClickHouse while the ingest pipeline sends data to ClickHouse only, is shown below:
```mermaid
flowchart LR
  subgraph Quesma
    direction TB
    subgraph Query Pipeline
      direction LR
      subgraph Frontend Connector
        direction LR
        i1[Incoming traffic, e.g. Kibana]
      end
      subgraph query-procs[Processors]
        quesma-v1-processor-query
      end
      subgraph Backend Connectors
        elasticConn[Elasticsearch backend connector]
        chConn[ClickHouse backend connector]
      end
    end
    subgraph Ingest Pipeline
      direction LR
      subgraph Frontend Connector
        direction LR
        i2[Incoming traffic, e.g. Kibana]
      end
      subgraph ingest-procs[Processors]
        quesma-v1-processor-ingest
      end
      subgraph Backend Connectors
        clickHouseConn[ClickHouse backend connector]
      end
    end
  end
  Queries[Incoming queries,\ne.g. Kibana dashboard] --> i1 -->
  quesma-v1-processor-query --> elasticConn --> Elasticsearch[(Elasticsearch)]
  quesma-v1-processor-query --> chConn --> ClickHouse
  Ingest[Data ingestion,\ne.g. filebeat] --> i2 --> quesma-v1-processor-ingest --> clickHouseConn --> ClickHouse[(ClickHouse)]
```
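As an illustrative sketch only, the two pipelines in the diagram map onto a top-level YAML configuration roughly like the one below. All connector, processor and pipeline names here are placeholders, and the empty `config: {}` sections stand in for the options described in the following sections:

```yaml
# Illustrative two-pipeline skeleton mirroring the diagram above.
frontendConnectors:
  - name: elastic-query
    type: elasticsearch-fe-query
    config:
      listenPort: 8080
  - name: elastic-ingest
    type: elasticsearch-fe-ingest
    config:
      listenPort: 8080          # all frontend connectors must share one port
processors:
  - name: my-query-processor
    type: quesma-v1-processor-query
    config: {}                  # index routing - see "Processor configuration"
  - name: my-ingest-processor
    type: quesma-v1-processor-ingest
    config: {}
backendConnectors:
  - name: my-minimal-elasticsearch
    type: elasticsearch
    config: {}                  # credentials and URL - see "Backend connectors"
  - name: my-clickhouse-data-source
    type: clickhouse-os
    config: {}
pipelines:
  - name: query-pipeline
    frontendConnectors: [ elastic-query ]
    processors: [ my-query-processor ]
    backendConnectors: [ my-minimal-elasticsearch, my-clickhouse-data-source ]
  - name: ingest-pipeline
    frontendConnectors: [ elastic-ingest ]
    processors: [ my-ingest-processor ]
    backendConnectors: [ my-clickhouse-data-source ]
```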

### Connectors

#### Frontend connectors

A frontend connector must have `name`, `type` and `config` fields:
* `name` is a unique identifier for the connector
* `type` specifies the type of the connector.\
  At this moment, only two frontend connector types are allowed: `elasticsearch-fe-query` and `elasticsearch-fe-ingest`.
* `config` is a set of configuration options for the connector. Specifying `listenPort` is mandatory, as this is the port on which Quesma listens for incoming requests. **Due to current limitations, all frontend connectors have to listen on the same port.**
```yaml
frontendConnectors:
  - name: elastic-ingest
    type: elasticsearch-fe-ingest
    config:
      listenPort: 8080
  - name: elastic-query
    type: elasticsearch-fe-query
    config:
      listenPort: 8080
```

#### Backend connectors

A backend connector must have `name`, `type` and `config` fields:
* `name` is a unique identifier for the connector
* `type` specifies the type of the connector.\
  At this moment, only four backend connector types are allowed: `elasticsearch`, `clickhouse` (used for the ClickHouse Cloud SaaS service), `clickhouse-os` and `hydrolix`.
* `config` is a set of configuration options for the connector.
```yaml
backendConnectors:
  - name: my-minimal-elasticsearch
    type: elasticsearch
    config:
      user: "elastic"
      password: "change-me"
      url: "http://elasticsearch:9200"
  - name: my-clickhouse-data-source
    type: clickhouse-os
    config:
      user: "username"
      password: "username-is-password"
      database: "dbname"
      url: "clickhouse://clickhouse:9000"
```
**WARNING:** When connecting to ClickHouse or Hydrolix, only the native protocol connection (`clickhouse://`) is supported.
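For example (hostnames are illustrative; 9000 is ClickHouse's usual native-protocol port and 8123 its usual HTTP port):

```yaml
config:
  # Supported: native protocol
  url: "clickhouse://clickhouse:9000"
  # Not supported: HTTP/HTTPS interface
  # url: "http://clickhouse:8123"
```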

### Processors

At this moment there are three types of processors: `quesma-v1-processor-query`, `quesma-v1-processor-ingest` and `quesma-v1-processor-noop`.

```yaml
processors:
  - name: my-query-processor
    type: quesma-v1-processor-query
    config:
      indexes:
        kibana_sample_data_ecommerce:
          target: [ backend-clickhouse ]
          schemaOverrides:
            fields:
              "geoip.location":
                type: geo_point
              "products.product_name":
                type: text
        "*":
          target: [ backend-elastic ]
```

For more specific information on processor configuration, please refer to the [Processor configuration](#processor-configuration) section.

::: info Note
* There's a special kind of processor type - `quesma-v1-processor-noop`. It can be used in both query and ingest pipelines. In conjunction with an Elasticsearch frontend and an Elasticsearch backend connector, it simply routes the traffic through transparently, 'as is'.
:::
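For illustration, a transparent-proxy pipeline built around the noop processor might look like the sketch below. The connector names are placeholders and assume an Elasticsearch frontend connector and backend connector are defined as shown earlier:

```yaml
processors:
  - name: my-noop-processor
    type: quesma-v1-processor-noop
pipelines:
  - name: my-transparent-proxy
    frontendConnectors: [ elastic-query ]
    processors: [ my-noop-processor ]
    backendConnectors: [ my-minimal-elasticsearch ]
```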

## Pipeline configuration

A pipeline configuration is an entity linking frontend connectors, processors and backend connectors. Each pipeline must also have a unique name.
When referring to processors or connectors in the pipeline configuration, use their `name` field.
Currently, Quesma supports only configurations with either a single pipeline or two pipelines.

Example:
```yaml
pipelines:
  - name: my-pipeline-elasticsearch-query-clickhouse
    frontendConnectors: [ elastic-query ]
    processors: [ my-query-processor ]
    backendConnectors: [ my-minimal-elasticsearch, my-clickhouse-data-source ]
  - name: my-pipeline-elasticsearch-ingest-to-clickhouse
    frontendConnectors: [ elastic-ingest ]
    processors: [ my-ingest-processor ]
    backendConnectors: [ my-minimal-elasticsearch, my-clickhouse-data-source ]
```

## Processor configuration

Currently, both `quesma-v1-processor-query` and `quesma-v1-processor-ingest` processors have the same configuration options; `quesma-v1-processor-noop` doesn't have any configuration options.

For query and ingest processors you can configure specific indexes via the `indexes` dictionary:
```yaml
processors:
  - name: my-query-processor
    type: quesma-v1-processor-query
    config:
      indexes:
        kibana_sample_data_logs:
          target: [ backend-elasticsearch ]
        kibana_sample_data_ecommerce:
          target: [ backend-clickhouse ]
          schemaOverrides:
            fields:
              "geoip.location":
                type: geo_point
        "*": # Always required
          target: [ backend-elasticsearch ]
```

### Index configuration

The `indexes` configuration is a dictionary of configurations for specific indexes. In the example above, the configuration sets up Quesma's behavior for the `kibana_sample_data_logs` and `kibana_sample_data_ecommerce` indexes (visible as Elasticsearch indexes), as well as the mandatory default behavior for all other indexes (the `*` entry).

The configuration for an index consists of the following options:
- `target` (required): a list of backend connectors that will handle the request. For example, the following configuration in the ingest processor:
  ```yaml
  my_index:
    target: [ backend-elasticsearch, backend-clickhouse ]
  ```
  will dual-write ingest requests for `my_index` to both Elasticsearch and ClickHouse.
  Note that Elasticsearch/OpenSearch is the only supported backend for the `*` entry.
  If no targets are provided (example: `target: []`) in the configuration of an index in the ingest processor, ingest for that index will be disabled and incoming data will be dropped.
- `override` (optional): overrides the name of the table in Hydrolix/ClickHouse (by default Quesma uses the same table name as the index name)
- `useCommonTable` (optional): if enabled, Quesma will store data in a single Hydrolix/ClickHouse table named `quesma_common_table`. See the [ingest documentation](/ingest.md) for more details.
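Putting these options together, a hypothetical ingest-processor `indexes` section could look like the sketch below (all index and connector names are illustrative):

```yaml
indexes:
  my_index:
    target: [ backend-elasticsearch, backend-clickhouse ]  # dual write
  renamed_index:
    target: [ backend-clickhouse ]
    override: "my_custom_table_name"   # stored under a different table name
  dropped_index:
    target: []                         # ingest disabled; incoming data is dropped
  "*":
    target: [ backend-elasticsearch ]  # mandatory default entry
```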

## Optional configuration options

### Quesma licensing configuration

To use the `hydrolix` or `clickhouse` backend connectors, you need to supply a `licenseKey` in the configuration file. Contact us at [email protected] if you need one.
```yaml
licenseKey: ZXlKcGJuTjBZV3hzWVhScG...
```

### Quesma logging configuration
```yaml
logging:
  path: "/mnt/logs"
  level: "debug"
  disableFileLogging: false
```
By default, the Quesma container logs only to stdout. If you want to log to a file, set `disableFileLogging` to `false` and provide a path to the log file.
Make sure the path is writable by the container and is also volume-mounted.
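For instance, a hypothetical Docker Compose fragment mounting a host directory at the configured log path; the service and image names are illustrative assumptions, not taken from this commit:

```yaml
services:
  quesma:
    image: quesma/quesma:latest     # illustrative image name
    volumes:
      - ./quesma-logs:/mnt/logs     # host dir mounted at the "path" from the logging config
```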
