14 changes: 7 additions & 7 deletions README.md
@@ -44,8 +44,8 @@ Trino supports [reproducible builds](https://reproducible-builds.org) as of vers

* Mac OS X or Linux
* Note that some npm packages used to build the web UI are only available
for x86 architectures, so if you're building on Apple Silicon, you need
to have Rosetta 2 installed
for x86 architectures, so if you're building on Apple Silicon, you need
to have Rosetta 2 installed.
**Member** commented:
Arguably this sucks and ideally we get rid of this restriction .. not even sure which packages they are actually .. do you know if this is still true @koszti ?
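
For reference, Rosetta 2 can be installed on Apple Silicon with the standard macOS command:

```shell
# Install Rosetta 2 (macOS, Apple Silicon only)
softwareupdate --install-rosetta --agree-to-license
```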

* Java 25.0.1+, 64-bit
* Docker
* Turn SELinux or other systems disabling write access to the local checkout
@@ -74,11 +74,11 @@ locally for the areas of code that you change.
After building Trino for the first time, you can load the project into your IDE
and run the server. We recommend using
[IntelliJ IDEA](http://www.jetbrains.com/idea/). Because Trino is a standard
Maven project, you easily can import it into your IDE. In IntelliJ, choose
Maven project, you can easily import it into your IDE. In IntelliJ, choose
*Open Project* from the *Quick Start* box or choose *Open*
from the *File* menu and select the root `pom.xml` file.

After opening the project in IntelliJ, double check that the Java SDK is
After opening the project in IntelliJ, double-check that the Java SDK is
properly configured for the project:

* Open the File menu and select Project Structure
@@ -91,9 +91,9 @@ The simplest way to run Trino for development is to run the `TpchQueryRunner`
class. It will start a development version of the server that is configured with
the TPCH connector. You can then use the CLI to execute queries against this
server. Many other connectors have their own `*QueryRunner` class that you can
use when working on a specific connector. The generally required VM option
here is `--add-modules jdk.incubator.vector` but various `*QueryRunner` classes
might require additional options (if necessary, check the `air.test.jvm.additional-arguments`
use when working on a specific connector. The VM option generally required here
is `--add-modules jdk.incubator.vector`, but various `*QueryRunner` classes
might require additional options (if necessary, check the `air.test.jvm.additional-arguments`
property in the `pom.xml` file of the module from which the runner comes).
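
The VM option described above can be sketched as a launch command (the classpath placeholder is hypothetical; in IntelliJ, add the option under *Run* > *Edit Configurations* > *VM options* instead):

```shell
# Hypothetical launch; you normally run the class from the IDE
java --add-modules jdk.incubator.vector \
    -cp <test-classpath> TpchQueryRunner
```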

### Running tests from the IDE
6 changes: 3 additions & 3 deletions core/docker/README.md
@@ -43,7 +43,7 @@ docker exec -it trino trino --catalog tpch --schema sf1

## Configuration

Configuration is expected to be mounted `/etc/trino`. If it is not mounted
Configuration is expected to be mounted at `/etc/trino`. If it is not mounted
then the default single node configuration will be used.

### Specific Config Options
@@ -59,8 +59,8 @@ across all worker nodes if desired. Additionally this has the added benefit of
#### `node.data-dir`

The default configuration uses `/data/trino` as the default for
`node.data-dir`. Thus if using the default configuration and a mounted volume
is desired for the data directory it should be mounted to `/data/trino`.
`node.data-dir`. If you use the default configuration and want the data
directory on a mounted volume, mount it at `/data/trino`.
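
The two mount points described above can be combined in a single `docker run` sketch (host paths and image tag are assumptions):

```shell
# Hypothetical example: mount configuration and data directories from the host
docker run --name trino -d \
    -p 8080:8080 \
    -v /host/trino/etc:/etc/trino \
    -v /host/trino/data:/data/trino \
    trinodb/trino:latest
```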

## Building a custom Docker image

38 changes: 19 additions & 19 deletions docs/README.md
@@ -32,7 +32,7 @@ new documentation:
- [Present tense](https://developers.google.com/style/tense)

The Google guidelines include more material than listed here, and are used as a
guide that enable easy decision-making about proposed doc changes. Changes to
guide that enables easy decision-making about proposed doc changes. Changes to
existing documentation to follow these guidelines are underway.

As a specific style note, because different readers may perceive the phrases "a
@@ -50,7 +50,7 @@ Other useful resources:
## Tools

Documentation source files can be found in [Myst Markdown](https://mystmd.org/)
(`.md`) format in `src/main/sphinx` and sub-folders. Refer to the [Myst
(`.md`) format in `src/main/sphinx` and subdirectories. Refer to the [Myst
guide](https://mystmd.org/guide) and the existing documentation for more
information about how to write and format the documentation source.

@@ -146,8 +146,8 @@ re-run the ``build`` command and refresh the browser.

## Versioning

The version displayed in the resulting HTML is read by default from the top level Maven
`pom.xml` file `version` field.
The version displayed in the resulting HTML is read by default from the top-level Maven
`pom.xml` file's `version` field.

To deploy a specific documentation set (such as a SNAPSHOT version) as the release
version you must override the pom version with the `TRINO_VERSION`
@@ -157,7 +157,7 @@ environment variable.
TRINO_VERSION=355 docs/build
```

If you work on the docs for more than one invocation, you can export the
If you work on the docs across multiple builds, you can export the
variable and use it with Sphinx.

```bash
@@ -170,11 +170,11 @@ Maven pom has already moved to the next SNAPSHOT version.

## Style check

The project contains a configured setup for [Vale](https://vale.sh) and the
The project contains a configuration for [Vale](https://vale.sh) and the
Google developer documentation style. Vale is a command-line tool to check for
editorial style issues of a document or a set of documents.

Install vale with brew on macOS or follow the instructions on the website.
Install Vale with Homebrew on macOS or follow the instructions on the website.

```
brew install vale
@@ -184,28 +184,30 @@ The `docs` folder contains the necessary configuration to use vale for any
document in the repository:

* `.vale` directory with Google style setup
* `.vale/Vocab/Base/accept.txt` file for additional approved words and spelling
* `.vale.ini` configuration file configured for rst and md files
* `.vale/config/vocabularies/Base/accept.txt` file for additional approved
words and spelling
* `.vale.ini` configuration file configured for Markdown and reStructuredText
files

With this setup you can validate an individual file from the root by specifying
the path:
With this setup you can validate an individual file from the repository root by
specifying the path:

```
vale src/main/sphinx/overview/use-cases.md
vale docs/src/main/sphinx/overview/use-cases.md
```

**Member** commented: It was not using the docs folder since the readme is already in that folder but I guess this works too

You can also use directory paths and all files within.

Treat all output from vale as another help towards better docs. Fixing any
issues is not required, but can help with learning more about the [Google style
Treat all output from Vale as another aid for improving the docs. Fixing any
issues is not required, but it can help you learn more about the [Google style
guide](https://developers.google.com/style) that we try to follow.

## Contribution requirements


To contribute corrections or new explanations to the Trino documentation requires
only a willingness to help and submission of your [Contributor License
Agreement](https://github.com/trinodb/cla) (CLA).
Contributing corrections or new explanations to the Trino documentation
requires only a willingness to help and submission of your [Contributor
License Agreement](https://github.com/trinodb/cla) (CLA).

## Workflow

@@ -287,5 +289,3 @@ Example PRs:

* https://github.com/trinodb/trino/pull/17778
* https://github.com/trinodb/trino/pull/13225


4 changes: 2 additions & 2 deletions docs/src/main/sphinx/admin/graceful-shutdown.md
@@ -45,6 +45,6 @@ Once the API is called, the worker performs the following steps:
: - After this, the coordinator is aware of the shutdown and stops sending
tasks to the worker.
- Block until all active tasks are complete.
- Sleep for the grace period again in order to ensure the coordinator sees
all tasks are complete.
- Sleep for the grace period again to ensure that the coordinator sees that all
tasks are complete.
- Shutdown the application.
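
The shutdown sequence above is triggered by putting the `SHUTTING_DOWN` state to the worker API; a hedged sketch (host, port, and user name are assumptions, and authentication requirements depend on your deployment):

```shell
# Hypothetical example: request graceful shutdown of one worker
curl -X PUT \
    -H "Content-Type: application/json" \
    -H "X-Trino-User: admin" \
    -d '"SHUTTING_DOWN"' \
    http://worker-host:8081/v1/info/state
```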
6 changes: 3 additions & 3 deletions docs/src/main/sphinx/admin/properties-task.md
@@ -177,6 +177,6 @@ expression library.
- **Minimum value:** `1m`
- **Default value:** `2m`

The interval of Trino checks for splits that have processing time exceeding
`task.interrupt-stuck-split-tasks-timeout`. Only applies to threads that are blocked
by the third-party Joni regular expression library.
The interval at which Trino checks for splits whose processing time exceeds
`task.interrupt-stuck-split-tasks-timeout`. Only applies to threads that are
blocked by the third-party Joni regular expression library.
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/admin/spill.md
@@ -36,8 +36,8 @@ process it later.
In practice, when the cluster is idle, and all memory is available, a memory
intensive query may use all the memory in the cluster. On the other hand,
when the cluster does not have much free memory, the same query may be forced to
use disk as storage for intermediate data. A query, that is forced to spill to
disk, may have a longer execution time by orders of magnitude than a query that
use disk as storage for intermediate data. A query that is forced to spill to
disk may have a longer execution time by orders of magnitude than a query that
runs completely in memory.

Please note that enabling spill-to-disk does not guarantee execution of all
3 changes: 2 additions & 1 deletion docs/src/main/sphinx/client/jdbc.md
@@ -55,7 +55,8 @@ may need to manually register and configure the driver.
## Registering and configuring the driver

Drivers are commonly loaded automatically by applications once they are added to
its classpath. If your application does not, such as is the case for some
the application classpath. If your application does not, such as is the case
for some
GUI-based SQL editors, read this section. The steps to register the JDBC driver
in a UI or on the command line depend upon the specific application you are
using. Please check your application's documentation.
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/delta-lake.md
@@ -474,8 +474,8 @@ SELECT *
FROM example.testdb.customer_orders FOR TIMESTAMP AS OF TIMESTAMP '2022-03-23 09:59:29.803 America/Los_Angeles';
```

You can use a date to specify a point a time in the past for using a snapshot of a table in a query.
Assuming that the session time zone is `America/Los_Angeles` the following queries are equivalent:
You can use a date to specify a point in time in the past for querying a table snapshot.
Assuming that the session time zone is `America/Los_Angeles`, the following queries are equivalent:

```sql
SELECT *
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/hive.md
@@ -819,12 +819,12 @@ Newly added/renamed fields *must* have a default value in the Avro schema file.
The schema evolution behavior is as follows:

- Column added in new schema:
Data created with an older schema produces a *default* value when table is using the new schema.
Data created with an older schema produces a *default* value when the table is using the new schema.
- Column removed in new schema:
Data created with an older schema no longer outputs the data from the column that was removed.
- Column is renamed in the new schema:
This is equivalent to removing the column and adding a new one, and data created with an older schema
produces a *default* value when table is using the new schema.
produces a *default* value when the table is using the new schema.
- Changing type of column in the new schema:
If the type coercion is supported by Avro or the Hive connector, then the conversion happens.
An error is thrown for incompatible types.
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/iceberg.md
@@ -1957,8 +1957,8 @@ SELECT *
FROM example.testdb.customer_orders FOR TIMESTAMP AS OF TIMESTAMP '2022-03-23 09:59:29.803 Europe/Vienna';
```

You can use a date to specify a point a time in the past for using a snapshot of a table in a query.
Assuming that the session time zone is `Europe/Vienna` the following queries are equivalent:
You can use a date to specify a point in time in the past for querying a table snapshot.
Assuming that the session time zone is `Europe/Vienna`, the following queries are equivalent:

```sql
SELECT *
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/kafka.md
@@ -1305,7 +1305,7 @@ The schema evolution behavior is as follows:
Data created with an older schema no longer outputs the data from the column that was removed.
- Column is renamed in the new schema:
This is equivalent to removing the column and adding a new one, and data created with an older schema
produces a *default* value when table is using the new schema.
produces a *default* value when the table is using the new schema.
- Changing type of column in the new schema:
If the type coercion is supported by Avro, then the conversion happens. An
error is thrown for incompatible types.
@@ -1415,7 +1415,7 @@ The schema evolution behavior is as follows:
Data created with an older schema no longer outputs the data from the column that was removed.
- Column is renamed in the new schema:
This is equivalent to removing the column and adding a new one, and data created with an older schema
produces a *default* value when table is using the new schema.
produces a *default* value when the table is using the new schema.
- Changing type of column in the new schema:
If the type coercion is supported by Protobuf, then the conversion happens. An error is thrown for incompatible types.

2 changes: 1 addition & 1 deletion docs/src/main/sphinx/develop/client-protocol.md
@@ -2,7 +2,7 @@

The REST API allows clients to submit SQL queries to Trino and receive the
results. Clients include the CLI, the JDBC driver, and others provided by
the community. The preferred method to interact with Trino is using these
the community. The preferred method to interact with Trino is to use these
existing clients. This document provides details about the API for reference.
It can also be used to implement your own client, if necessary.

6 changes: 3 additions & 3 deletions docs/src/main/sphinx/develop/insert.md
@@ -9,9 +9,9 @@ To support `INSERT`, a connector must implement:

When executing an `INSERT` statement, the engine calls the `beginInsert()`
method in the connector, which receives a table handle and a list of columns.
It should return a `ConnectorInsertTableHandle`, that can carry any
connector specific information, and it's passed to the page sink provider.
The `PageSinkProvider` creates a page sink, that accepts `Page` objects.
It should return a `ConnectorInsertTableHandle` that can carry any
connector-specific information and is passed to the page sink provider.
The `PageSinkProvider` creates a page sink that accepts `Page` objects.

When all the pages for a specific split have been processed, Trino calls
`ConnectorPageSink.finish()`, which returns a `Collection<Slice>`
2 changes: 1 addition & 1 deletion docs/src/main/sphinx/installation/kubernetes.md
@@ -196,7 +196,7 @@ this by running the commands generated upon installation.
4. Once you are done with your exploration, enter the `quit` command in the
CLI.

5. Kill the tunnel to the coordinator pod. The is only available while the
5. Kill the tunnel to the coordinator pod. This is only available while the
`kubectl` process is running, so you can just kill the `kubectl` process
that's forwarding the port. In most cases that means pressing `CTRL` +
`C` in the terminal where the port-forward command is running.
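
The tunnel mentioned in the steps above is typically created with a command of this form (the service name is an assumption that depends on your Helm release):

```shell
# Hypothetical example: forward local port 8080 to the Trino coordinator service
kubectl port-forward svc/trino 8080:8080
```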
8 changes: 4 additions & 4 deletions docs/src/main/sphinx/object-storage/file-system-cache.md
@@ -152,14 +152,14 @@ The cache code uses [OpenTelemetry tracing](/admin/opentelemetry).
## Recommendations

The speed of the local cache storage is crucial to the performance of the cache.
The most common and cost-efficient approach is to attach high performance SSD
disk or equivalents. Fast cache performance can be also be achieved with a RAM
disk used as in-memory cache.
The most common and cost-efficient approach is to attach high-performance SSD
disks or equivalent storage. Fast cache performance can also be achieved with a
RAM disk used as an in-memory cache.

In all cases, avoid using the root partition and disk of the node. Instead
attach one or more dedicated storage devices for the cache on each node. Storage
should be local, dedicated on each node, and not shared.

Your deployment method for Trino decides how to attach storage and create the
directories for caching. Typically you need to connect a fast storage system,
like an SSD drive, and ensure that is it mounted on the configured path.
like an SSD drive, and ensure that it is mounted on the configured path.
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/object-storage/file-system-hdfs.md
@@ -119,8 +119,8 @@ executed as the OS user who runs the Trino process, regardless of which user
submits the query.

Before running any `CREATE TABLE` or `CREATE TABLE AS` statements for Hive
tables in Trino, you must check that the user Trino is using to access HDFS has
access to the Hive warehouse directory. The Hive warehouse directory is
tables in Trino, you must check that the user that Trino uses to access HDFS
has access to the Hive warehouse directory. The Hive warehouse directory is
specified by the configuration variable `hive.metastore.warehouse.dir` in
`hive-site.xml`, and the default value is `/user/hive/warehouse`.

2 changes: 1 addition & 1 deletion docs/src/main/sphinx/object-storage/file-system-local.md
@@ -30,7 +30,7 @@ support:

The following example displays the related section from a
`etc/catalog/example.properties` catalog configuration using the Hive connector.
The coordinator and all workers nodes have an external storage mounted at
The coordinator and all worker nodes have an external storage mounted at
`/storage/datalake`, resulting in the location `local:///storage/datalake`.

```properties
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/security/file-system-access-control.md
@@ -773,8 +773,8 @@ When these rules are present, the authorization is based on the first matching
rule, processed from top to bottom. If no rules match, the authorization is
denied.

Notice that in order to execute `ALTER` command on schema, table or view user requires `OWNERSHIP`
privilege.
To execute an `ALTER` command on a schema, table, or view, the user requires
the `OWNERSHIP` privilege.

Each authorization rule is composed of the following fields:

4 changes: 2 additions & 2 deletions docs/src/main/sphinx/security/oauth2.md
@@ -229,11 +229,11 @@ The following configuration properties are available:
maximum session time for an OAuth2-authenticated client with refresh tokens
enabled. For more details, see [](trino-oauth2-troubleshooting).
* - `http-server.authentication.oauth2.refresh-tokens.issued-token.issuer`
- Issuer representing the coordinator instance, that is referenced in the
- Issuer representing the coordinator instance that is referenced in the
issued token, defaults to `Trino_coordinator`. The current Trino version is
appended to the value. This is mainly used for debugging purposes.
* - `http-server.authentication.oauth2.refresh-tokens.issued-token.audience`
- Audience representing this coordinator instance, that is used in the
- Audience representing this coordinator instance that is used in the
issued token. Defaults to `Trino_coordinator`.
* - `http-server.authentication.oauth2.refresh-tokens.secret-key`
- Base64-encoded secret key used to encrypt the generated token. By default
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/security/salesforce.md
@@ -5,8 +5,8 @@ for clients, such as the CLI, or the JDBC and ODBC drivers. The username and
password (or password and [security token](#security-token) concatenation) are
validated by having the Trino coordinator perform a login to Salesforce.

This allows you to enable users to authenticate to Trino via their Salesforce
basic credentials. This can also be used to secure the {ref}`Web UI
This allows users to authenticate to Trino with their Salesforce credentials.
This can also be used to secure the {ref}`Web UI
<web-ui-authentication>`.

:::{note}
4 changes: 2 additions & 2 deletions plugin/trino-delta-lake/README.md
@@ -2,7 +2,7 @@

The Delta Lake connector can be used to interact with [Delta Lake](https://delta.io/) tables.

Trino has product tests in place for testing its compatibility with the
Trino has product tests in place for testing its compatibility with the
following Delta Lake implementations:

- Delta Lake OSS
@@ -23,7 +23,7 @@ testing/bin/ptl env up --environment singlenode-delta-lake-oss

At the time of this writing, Databricks Delta Lake and OSS Delta Lake differ in functionality provided.

In order to setup a Databricks testing environment there are several steps to be performed.
To set up a Databricks testing environment, perform the following steps.

### Delta Lake Databricks on AWS

4 changes: 2 additions & 2 deletions plugin/trino-http-server-event-listener/README.md
Original file line number Diff line number Diff line change
@@ -1,9 +1,9 @@
# Trino HTTP server event listener plugin

The HTTP server event listener plugin is optional and and therefore not included
The HTTP server event listener plugin is optional and therefore not included
in the default tarball and the default Docker image.

Follow the [plugin installation instructions](https://trino.io/docs/current/installation/plugins.html)
and optionally use the [trino-packages project](https://github.com/trinodb/trino-packages)
or manually [download the plugin archive](https://central.sonatype.com/artifact/io.trino/trino-http-server-event-listener)
for your installation and version.
for your installation and version.
**Contributor** commented:
😕
