From b1cbe9ded774f7d2a5b87b5e0e8ccfd96794bdac Mon Sep 17 00:00:00 2001 From: David Phillips Date: Mon, 6 Apr 2026 12:51:51 -0700 Subject: [PATCH] Improve documentation wording and fix typos --- README.md | 14 +++---- core/docker/README.md | 6 +-- docs/README.md | 38 +++++++++---------- .../main/sphinx/admin/graceful-shutdown.md | 4 +- docs/src/main/sphinx/admin/properties-task.md | 6 +-- docs/src/main/sphinx/admin/spill.md | 4 +- docs/src/main/sphinx/client/jdbc.md | 3 +- docs/src/main/sphinx/connector/delta-lake.md | 4 +- docs/src/main/sphinx/connector/hive.md | 4 +- docs/src/main/sphinx/connector/iceberg.md | 4 +- docs/src/main/sphinx/connector/kafka.md | 4 +- .../main/sphinx/develop/client-protocol.md | 2 +- docs/src/main/sphinx/develop/insert.md | 6 +-- .../main/sphinx/installation/kubernetes.md | 2 +- .../object-storage/file-system-cache.md | 8 ++-- .../sphinx/object-storage/file-system-hdfs.md | 4 +- .../object-storage/file-system-local.md | 2 +- .../security/file-system-access-control.md | 4 +- docs/src/main/sphinx/security/oauth2.md | 4 +- docs/src/main/sphinx/security/salesforce.md | 4 +- plugin/trino-delta-lake/README.md | 4 +- .../README.md | 4 +- testing/trino-product-tests/README.md | 8 ++-- 23 files changed, 73 insertions(+), 70 deletions(-) diff --git a/README.md b/README.md index 70bdb18431b2..81692cf1ae91 100644 --- a/README.md +++ b/README.md @@ -44,8 +44,8 @@ Trino supports [reproducible builds](https://reproducible-builds.org) as of vers * Mac OS X or Linux * Note that some npm packages used to build the web UI are only available - for x86 architectures, so if you're building on Apple Silicon, you need - to have Rosetta 2 installed + for x86 architectures, so if you're building on Apple Silicon, you need + to have Rosetta 2 installed. * Java 25.0.1+, 64-bit * Docker * Turn SELinux or other systems disabling write access to the local checkout @@ -74,11 +74,11 @@ locally for the areas of code that you change. After building Trino for the first time, you can load the project into your IDE and run the server. We recommend using [IntelliJ IDEA](http://www.jetbrains.com/idea/). Because Trino is a standard -Maven project, you easily can import it into your IDE. In IntelliJ, choose +Maven project, you can easily import it into your IDE. In IntelliJ, choose *Open Project* from the *Quick Start* box or choose *Open* from the *File* menu and select the root `pom.xml` file. -After opening the project in IntelliJ, double check that the Java SDK is +After opening the project in IntelliJ, double-check that the Java SDK is properly configured for the project: * Open the File menu and select Project Structure @@ -91,9 +91,9 @@ The simplest way to run Trino for development is to run the `TpchQueryRunner` class. It will start a development version of the server that is configured with the TPCH connector. You can then use the CLI to execute queries against this server. Many other connectors have their own `*QueryRunner` class that you can -use when working on a specific connector. The generally required VM option -here is `--add-modules jdk.incubator.vector` but various `*QueryRunner` classes -might require additional options (if necessary, check the `air.test.jvm.additional-arguments` +use when working on a specific connector. 
The VM option generally required here +is `--add-modules jdk.incubator.vector`, but various `*QueryRunner` classes +might require additional options (if necessary, check the `air.test.jvm.additional-arguments` property in the `pom.xml` file of the module from which the runner comes). ### Running tests from the IDE diff --git a/core/docker/README.md b/core/docker/README.md index eabf921d0b17..b01f9ce903af 100644 --- a/core/docker/README.md +++ b/core/docker/README.md @@ -43,7 +43,7 @@ docker exec -it trino trino --catalog tpch --schema sf1 ## Configuration -Configuration is expected to be mounted `/etc/trino`. If it is not mounted +Configuration is expected to be mounted at `/etc/trino`. If it is not mounted then the default single node configuration will be used. ### Specific Config Options @@ -59,8 +59,8 @@ across all worker nodes if desired. Additionally this has the added benefit of #### `node.data-dir` The default configuration uses `/data/trino` as the default for -`node.data-dir`. Thus if using the default configuration and a mounted volume -is desired for the data directory it should be mounted to `/data/trino`. +`node.data-dir`. If you use the default configuration and want the data +directory on a mounted volume, mount it at `/data/trino`. ## Building a custom Docker image diff --git a/docs/README.md b/docs/README.md index a832d708fc7d..aa8c90b7354a 100644 --- a/docs/README.md +++ b/docs/README.md @@ -32,7 +32,7 @@ new documentation: - [Present tense](https://developers.google.com/style/tense) The Google guidelines include more material than listed here, and are used as a -guide that enable easy decision-making about proposed doc changes. Changes to +guide that enables easy decision-making about proposed doc changes. Changes to existing documentation to follow these guidelines are underway. As a specific style note, because different readers may perceive the phrases "a @@ -50,7 +50,7 @@ Other useful resources: ## Tools Documentation source files can be found in [Myst Markdown](https://mystmd.org/) -(`.md`) format in `src/main/sphinx` and sub-folders. Refer to the [Myst +(`.md`) format in `src/main/sphinx` and subdirectories. Refer to the [Myst guide](https://mystmd.org/guide) and the existing documentation for more information about how to write and format the documentation source. @@ -146,8 +146,8 @@ re-run the ``build`` command and refresh the browser. ## Versioning -The version displayed in the resulting HTML is read by default from the top level Maven -`pom.xml` file `version` field. +The version displayed in the resulting HTML is read by default from the top-level Maven +`pom.xml` file's `version` field. To deploy a specific documentation set (such as a SNAPSHOT version) as the release version you must override the pom version with the `TRINO_VERSION` @@ -157,7 +157,7 @@ environment variable. TRINO_VERSION=355 docs/build ``` -If you work on the docs for more than one invocation, you can export the +If you work on the docs across multiple builds, you can export the variable and use it with Sphinx. ```bash @@ -170,11 +170,11 @@ Maven pom has already moved to the next SNAPSHOT version. ## Style check -The project contains a configured setup for [Vale](https://vale.sh) and the +The project contains a configuration for [Vale](https://vale.sh) and the Google developer documentation style. Vale is a command-line tool to check for editorial style issues of a document or a set of documents. -Install vale with brew on macOS or follow the instructions on the website. 
+Install Vale with Homebrew on macOS or follow the instructions on the website. ``` brew install vale @@ -184,28 +184,30 @@ The `docs` folder contains the necessary configuration to use vale for any document in the repository: * `.vale` directory with Google style setup -* `.vale/Vocab/Base/accept.txt` file for additional approved words and spelling -* `.vale.ini` configuration file configured for rst and md files +* `.vale/config/vocabularies/Base/accept.txt` file for additional approved + words and spelling +* `.vale.ini` configuration file configured for Markdown and reStructuredText + files -With this setup you can validate an individual file from the root by specifying -the path: +With this setup you can validate an individual file from the repository root by +specifying the path: ``` -vale src/main/sphinx/overview/use-cases.md +vale docs/src/main/sphinx/overview/use-cases.md ``` You can also use directory paths and all files within. -Treat all output from vale as another help towards better docs. Fixing any -issues is not required, but can help with learning more about the [Google style +Treat all output from Vale as another aid for improving the docs. Fixing any +issues is not required, but it can help you learn more about the [Google style guide](https://developers.google.com/style) that we try to follow. ## Contribution requirements -To contribute corrections or new explanations to the Trino documentation requires -only a willingness to help and submission of your [Contributor License -Agreement](https://github.com/trinodb/cla) (CLA). +Contributing corrections or new explanations to the Trino documentation +requires only a willingness to help and submission of your [Contributor +License Agreement](https://github.com/trinodb/cla) (CLA). ## Workflow @@ -287,5 +289,3 @@ Example PRs: * https://github.com/trinodb/trino/pull/17778 * https://github.com/trinodb/trino/pull/13225 - - diff --git a/docs/src/main/sphinx/admin/graceful-shutdown.md b/docs/src/main/sphinx/admin/graceful-shutdown.md index cd884199ed14..96d369c8a1ad 100644 --- a/docs/src/main/sphinx/admin/graceful-shutdown.md +++ b/docs/src/main/sphinx/admin/graceful-shutdown.md @@ -45,6 +45,6 @@ Once the API is called, the worker performs the following steps: : - After this, the coordinator is aware of the shutdown and stops sending tasks to the worker. - Block until all active tasks are complete. -- Sleep for the grace period again in order to ensure the coordinator sees - all tasks are complete. +- Sleep for the grace period again to ensure that the coordinator sees that all + tasks are complete. - Shutdown the application. diff --git a/docs/src/main/sphinx/admin/properties-task.md b/docs/src/main/sphinx/admin/properties-task.md index 40af54dfc766..efe915fda828 100644 --- a/docs/src/main/sphinx/admin/properties-task.md +++ b/docs/src/main/sphinx/admin/properties-task.md @@ -177,6 +177,6 @@ expression library. - **Minimum value:** `1m` - **Default value:** `2m` -The interval of Trino checks for splits that have processing time exceeding -`task.interrupt-stuck-split-tasks-timeout`. Only applies to threads that are blocked -by the third-party Joni regular expression library. +The interval at which Trino checks for splits whose processing time exceeds +`task.interrupt-stuck-split-tasks-timeout`. Only applies to threads that are +blocked by the third-party Joni regular expression library. 
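For orientation, the timeout named above and its detection interval are typically configured together in `config.properties`. The following sketch is illustrative only and is not part of this patch; the companion property names (`task.interrupt-stuck-split-tasks-enabled`, `task.interrupt-stuck-split-tasks-detection-interval`) and the values shown are assumptions based on the surrounding task-properties documentation, not defaults taken from this change:

```properties
# Illustrative sketch only -- property names other than
# task.interrupt-stuck-split-tasks-timeout are assumptions, and the values
# are examples rather than recommended settings.
task.interrupt-stuck-split-tasks-enabled=true
# Interrupt a split once it has been processing longer than this timeout.
task.interrupt-stuck-split-tasks-timeout=10m
# How often Trino checks for splits exceeding the timeout (minimum 1m).
task.interrupt-stuck-split-tasks-detection-interval=2m
```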
diff --git a/docs/src/main/sphinx/admin/spill.md b/docs/src/main/sphinx/admin/spill.md
index 2687ef602709..ba0d82fe7102 100644
--- a/docs/src/main/sphinx/admin/spill.md
+++ b/docs/src/main/sphinx/admin/spill.md
@@ -36,8 +36,8 @@ process it later.
 In practice, when the cluster is idle, and all memory is available, a memory
 intensive query may use all the memory in the cluster. On the other hand, when
 the cluster does not have much free memory, the same query may be forced to
-use disk as storage for intermediate data. A query, that is forced to spill to
-disk, may have a longer execution time by orders of magnitude than a query that
+use disk as storage for intermediate data. A query that is forced to spill to
+disk may have a longer execution time by orders of magnitude than a query that
 runs completely in memory.
 
 Please note that enabling spill-to-disk does not guarantee execution of all
diff --git a/docs/src/main/sphinx/client/jdbc.md b/docs/src/main/sphinx/client/jdbc.md
index a085a63c77cb..64f01665fc90 100644
--- a/docs/src/main/sphinx/client/jdbc.md
+++ b/docs/src/main/sphinx/client/jdbc.md
@@ -55,7 +55,8 @@ may need to manually register and configure the driver.
 ## Registering and configuring the driver
 
 Drivers are commonly loaded automatically by applications once they are added to
-its classpath. If your application does not, such as is the case for some
+the application classpath. If your application does not, as is the case
+for some
 GUI-based SQL editors, read this section. The steps to register the JDBC
 driver in a UI or on the command line depend upon the specific application you
 are using. Please check your application's documentation.
diff --git a/docs/src/main/sphinx/connector/delta-lake.md b/docs/src/main/sphinx/connector/delta-lake.md
index efaee7acfe08..67ab2168dd1e 100644
--- a/docs/src/main/sphinx/connector/delta-lake.md
+++ b/docs/src/main/sphinx/connector/delta-lake.md
@@ -474,8 +474,8 @@ SELECT * FROM example.testdb.customer_orders
 FOR TIMESTAMP AS OF TIMESTAMP '2022-03-23 09:59:29.803 America/Los_Angeles';
 ```
 
-You can use a date to specify a point a time in the past for using a snapshot of a table in a query.
-Assuming that the session time zone is `America/Los_Angeles` the following queries are equivalent:
+You can use a date to specify a point in time in the past for querying a table snapshot.
+Assuming that the session time zone is `America/Los_Angeles`, the following queries are equivalent:
 
 ```sql
 SELECT *
diff --git a/docs/src/main/sphinx/connector/hive.md b/docs/src/main/sphinx/connector/hive.md
index d86491f5692c..0aca26b265a7 100644
--- a/docs/src/main/sphinx/connector/hive.md
+++ b/docs/src/main/sphinx/connector/hive.md
@@ -819,12 +819,12 @@ Newly added/renamed fields *must* have a default value in the Avro schema file.
 The schema evolution behavior is as follows:
 
 - Column added in new schema:
-  Data created with an older schema produces a *default* value when table is using the new schema.
+  Data created with an older schema produces a *default* value when the table is using the new schema.
 - Column removed in new schema:
   Data created with an older schema no longer outputs the data from the column that was removed.
 - Column is renamed in the new schema:
   This is equivalent to removing the column and adding a new one, and data created with an older schema
-  produces a *default* value when table is using the new schema.
+  produces a *default* value when the table is using the new schema.
- Changing type of column in the new schema: If the type coercion is supported by Avro or the Hive connector, then the conversion happens. An error is thrown for incompatible types. diff --git a/docs/src/main/sphinx/connector/iceberg.md b/docs/src/main/sphinx/connector/iceberg.md index 736c45cc2066..fa28fb6254d4 100644 --- a/docs/src/main/sphinx/connector/iceberg.md +++ b/docs/src/main/sphinx/connector/iceberg.md @@ -1957,8 +1957,8 @@ SELECT * FROM example.testdb.customer_orders FOR TIMESTAMP AS OF TIMESTAMP '2022-03-23 09:59:29.803 Europe/Vienna'; ``` -You can use a date to specify a point a time in the past for using a snapshot of a table in a query. -Assuming that the session time zone is `Europe/Vienna` the following queries are equivalent: +You can use a date to specify a point in time in the past for querying a table snapshot. +Assuming that the session time zone is `Europe/Vienna`, the following queries are equivalent: ```sql SELECT * diff --git a/docs/src/main/sphinx/connector/kafka.md b/docs/src/main/sphinx/connector/kafka.md index 81e6a6abe4ac..df5651394dab 100644 --- a/docs/src/main/sphinx/connector/kafka.md +++ b/docs/src/main/sphinx/connector/kafka.md @@ -1305,7 +1305,7 @@ The schema evolution behavior is as follows: Data created with an older schema no longer outputs the data from the column that was removed. - Column is renamed in the new schema: This is equivalent to removing the column and adding a new one, and data created with an older schema - produces a *default* value when table is using the new schema. + produces a *default* value when the table is using the new schema. - Changing type of column in the new schema: If the type coercion is supported by Avro, then the conversion happens. An error is thrown for incompatible types. @@ -1415,7 +1415,7 @@ The schema evolution behavior is as follows: Data created with an older schema no longer outputs the data from the column that was removed. - Column is renamed in the new schema: This is equivalent to removing the column and adding a new one, and data created with an older schema - produces a *default* value when table is using the new schema. + produces a *default* value when the table is using the new schema. - Changing type of column in the new schema: If the type coercion is supported by Protobuf, then the conversion happens. An error is thrown for incompatible types. diff --git a/docs/src/main/sphinx/develop/client-protocol.md b/docs/src/main/sphinx/develop/client-protocol.md index d33fdf8e4b68..c1dd111082a6 100644 --- a/docs/src/main/sphinx/develop/client-protocol.md +++ b/docs/src/main/sphinx/develop/client-protocol.md @@ -2,7 +2,7 @@ The REST API allows clients to submit SQL queries to Trino and receive the results. Clients include the CLI, the JDBC driver, and others provided by -the community. The preferred method to interact with Trino is using these +the community. The preferred method to interact with Trino is to use these existing clients. This document provides details about the API for reference. It can also be used to implement your own client, if necessary. diff --git a/docs/src/main/sphinx/develop/insert.md b/docs/src/main/sphinx/develop/insert.md index cab389ee48a3..d398de0abcee 100644 --- a/docs/src/main/sphinx/develop/insert.md +++ b/docs/src/main/sphinx/develop/insert.md @@ -9,9 +9,9 @@ To support `INSERT`, a connector must implement: When executing an `INSERT` statement, the engine calls the `beginInsert()` method in the connector, which receives a table handle and a list of columns. 
-It should return a `ConnectorInsertTableHandle`, that can carry any -connector specific information, and it's passed to the page sink provider. -The `PageSinkProvider` creates a page sink, that accepts `Page` objects. +It should return a `ConnectorInsertTableHandle` that can carry any +connector-specific information and is passed to the page sink provider. +The `PageSinkProvider` creates a page sink that accepts `Page` objects. When all the pages for a specific split have been processed, Trino calls `ConnectorPageSink.finish()`, which returns a `Collection` diff --git a/docs/src/main/sphinx/installation/kubernetes.md b/docs/src/main/sphinx/installation/kubernetes.md index 1d7689b11fbb..4f26c0cbba20 100644 --- a/docs/src/main/sphinx/installation/kubernetes.md +++ b/docs/src/main/sphinx/installation/kubernetes.md @@ -196,7 +196,7 @@ this by running the commands generated upon installation. 4. Once you are done with your exploration, enter the `quit` command in the CLI. -5. Kill the tunnel to the coordinator pod. The is only available while the +5. Kill the tunnel to the coordinator pod. This is only available while the `kubectl` process is running, so you can just kill the `kubectl` process that's forwarding the port. In most cases that means pressing `CTRL` + `C` in the terminal where the port-forward command is running. diff --git a/docs/src/main/sphinx/object-storage/file-system-cache.md b/docs/src/main/sphinx/object-storage/file-system-cache.md index ddf76f9da479..533369c066f6 100644 --- a/docs/src/main/sphinx/object-storage/file-system-cache.md +++ b/docs/src/main/sphinx/object-storage/file-system-cache.md @@ -152,9 +152,9 @@ The cache code uses [OpenTelemetry tracing](/admin/opentelemetry). ## Recommendations The speed of the local cache storage is crucial to the performance of the cache. -The most common and cost-efficient approach is to attach high performance SSD -disk or equivalents. Fast cache performance can be also be achieved with a RAM -disk used as in-memory cache. +The most common and cost-efficient approach is to attach high-performance SSD +disks or equivalent storage. Fast cache performance can also be achieved with a +RAM disk used as an in-memory cache. In all cases, avoid using the root partition and disk of the node. Instead attach one or more dedicated storage devices for the cache on each node. Storage @@ -162,4 +162,4 @@ should be local, dedicated on each node, and not shared. Your deployment method for Trino decides how to attach storage and create the directories for caching. Typically you need to connect a fast storage system, -like an SSD drive, and ensure that is it mounted on the configured path. +like an SSD drive, and ensure that it is mounted on the configured path. diff --git a/docs/src/main/sphinx/object-storage/file-system-hdfs.md b/docs/src/main/sphinx/object-storage/file-system-hdfs.md index ba589628d490..e1f1a7c99663 100644 --- a/docs/src/main/sphinx/object-storage/file-system-hdfs.md +++ b/docs/src/main/sphinx/object-storage/file-system-hdfs.md @@ -119,8 +119,8 @@ executed as the OS user who runs the Trino process, regardless of which user submits the query. Before running any `CREATE TABLE` or `CREATE TABLE AS` statements for Hive -tables in Trino, you must check that the user Trino is using to access HDFS has -access to the Hive warehouse directory. The Hive warehouse directory is +tables in Trino, you must check that the user that Trino uses to access HDFS +has access to the Hive warehouse directory. 
The Hive warehouse directory is
 specified by the configuration variable `hive.metastore.warehouse.dir` in
 `hive-site.xml`, and the default value is `/user/hive/warehouse`.
diff --git a/docs/src/main/sphinx/object-storage/file-system-local.md b/docs/src/main/sphinx/object-storage/file-system-local.md
index 3ea2492925f7..fb88868ef596 100644
--- a/docs/src/main/sphinx/object-storage/file-system-local.md
+++ b/docs/src/main/sphinx/object-storage/file-system-local.md
@@ -30,7 +30,7 @@ support:
 The following example displays the related section from a
 `etc/catalog/example.properties` catalog configuration using the Hive connector.
-The coordinator and all workers nodes have an external storage mounted at
+The coordinator and all worker nodes have external storage mounted at
 `/storage/datalake`, resulting in the location `local:///storage/datalake`.
 
 ```properties
diff --git a/docs/src/main/sphinx/security/file-system-access-control.md b/docs/src/main/sphinx/security/file-system-access-control.md
index fce65d652422..6d3cf8290fe4 100644
--- a/docs/src/main/sphinx/security/file-system-access-control.md
+++ b/docs/src/main/sphinx/security/file-system-access-control.md
@@ -773,8 +773,8 @@ When these rules are present, the authorization is based on the first matching
 rule, processed from top to bottom. If no rules match, the authorization is
 denied.
 
-Notice that in order to execute `ALTER` command on schema, table or view user requires `OWNERSHIP`
-privilege.
+To execute an `ALTER` command on a schema, table, or view, the user requires
+the `OWNERSHIP` privilege.
 
 Each authorization rule is composed of the following fields:
diff --git a/docs/src/main/sphinx/security/oauth2.md b/docs/src/main/sphinx/security/oauth2.md
index 58fdd328448b..eb9a6e99f429 100644
--- a/docs/src/main/sphinx/security/oauth2.md
+++ b/docs/src/main/sphinx/security/oauth2.md
@@ -229,11 +229,11 @@ The following configuration properties are available:
     maximum session time for an OAuth2-authenticated client with refresh tokens
     enabled. For more details, see [](trino-oauth2-troubleshooting).
 * - `http-server.authentication.oauth2.refresh-tokens.issued-token.issuer`
-  - Issuer representing the coordinator instance, that is referenced in the
+  - Issuer representing the coordinator instance that is referenced in the
     issued token, defaults to `Trino_coordinator`. The current Trino version is
     appended to the value. This is mainly used for debugging purposes.
 * - `http-server.authentication.oauth2.refresh-tokens.issued-token.audience`
-  - Audience representing this coordinator instance, that is used in the
+  - Audience representing this coordinator instance that is used in the
     issued token. Defaults to `Trino_coordinator`.
 * - `http-server.authentication.oauth2.refresh-tokens.secret-key`
   - Base64-encoded secret key used to encrypt the generated token. By default
diff --git a/docs/src/main/sphinx/security/salesforce.md b/docs/src/main/sphinx/security/salesforce.md
index 156603f9c7c2..f76597739390 100644
--- a/docs/src/main/sphinx/security/salesforce.md
+++ b/docs/src/main/sphinx/security/salesforce.md
@@ -5,8 +5,8 @@
 for clients, such as the CLI, or the JDBC and ODBC drivers. The username and
 password (or password and [security token](#security-token) concatenation) are
 validated by having the Trino coordinator perform a login to Salesforce.
-This allows you to enable users to authenticate to Trino via their Salesforce
-basic credentials.
This can also be used to secure the {ref}`Web UI +This allows users to authenticate to Trino with their Salesforce credentials. +This can also be used to secure the {ref}`Web UI `. :::{note} diff --git a/plugin/trino-delta-lake/README.md b/plugin/trino-delta-lake/README.md index 561635493fce..b8ae12c4c195 100644 --- a/plugin/trino-delta-lake/README.md +++ b/plugin/trino-delta-lake/README.md @@ -2,7 +2,7 @@ The Delta Lake connector can be used to interact with [Delta Lake](https://delta.io/) tables. -Trino has product tests in place for testing its compatibility with the +Trino has product tests in place for testing its compatibility with the following Delta Lake implementations: - Delta Lake OSS @@ -23,7 +23,7 @@ testing/bin/ptl env up --environment singlenode-delta-lake-oss At the time of this writing, Databricks Delta Lake and OSS Delta Lake differ in functionality provided. -In order to setup a Databricks testing environment there are several steps to be performed. +To set up a Databricks testing environment, perform the following steps. ### Delta Lake Databricks on AWS diff --git a/plugin/trino-http-server-event-listener/README.md b/plugin/trino-http-server-event-listener/README.md index 70fdecbbfb1c..2b71e1697742 100644 --- a/plugin/trino-http-server-event-listener/README.md +++ b/plugin/trino-http-server-event-listener/README.md @@ -1,9 +1,9 @@ # Trino HTTP server event listener plugin -The HTTP server event listener plugin is optional and and therefore not included +The HTTP server event listener plugin is optional and therefore not included in the default tarball and the default Docker image. Follow the [plugin installation instructions](https://trino.io/docs/current/installation/plugins.html) and optionally use the [trino-packages project](https://github.com/trinodb/trino-packages) or manually [download the plugin archive](https://central.sonatype.com/artifact/io.trino/trino-http-server-event-listener) -for your installation and version. \ No newline at end of file +for your installation and version. diff --git a/testing/trino-product-tests/README.md b/testing/trino-product-tests/README.md index 03a219b44fe2..4572e0ebc623 100644 --- a/testing/trino-product-tests/README.md +++ b/testing/trino-product-tests/README.md @@ -15,7 +15,7 @@ tests are run using the [Tempto](https://github.com/trinodb/tempto) harness. **There is a helper script at `testing/bin/ptl` which calls `testing/trino-product-tests-launcher/bin/run-launcher` and helps you avoid -typing the full path to the launcher everytime. Rest of this document uses +typing the full path to the launcher every time. The rest of this document uses `testing/bin/ptl` to start the launcher but you can use the full path too.** Developers should consider writing product tests in addition to any unit tests @@ -91,7 +91,7 @@ testing/bin/ptl test run --environment \ single node Trino installation, and one with a pseudo-distributed Hadoop installation. - **two-kerberos-hives** - two pseudo-distributed Hadoop installations running on - a single Docker containers. Both Hadoop (Hive) installations are kerberized. + a single Docker container. Both Hadoop (Hive) installations are kerberized. A single node installation of kerberized Trino also running on a single Docker container. @@ -284,7 +284,9 @@ If you see an error similar to Failed on local exception: java.net.SocketException: Malformed reply from SOCKS server; Host Details : local host is [...] 
``` Make sure your `/etc/hosts` points to proper IP address (see [Debugging Java based tests](#debugging-java-based-tests), step 3). -Also it's worth confirming that your Hive properties file accounts for the socks proxy used in Hive container (steps 4-5 of [Debugging Java based tests](#debugging-java-based-tests)). +Also, it's worth confirming that your Hive properties file accounts for the +SOCKS proxy used in the Hive container (steps 4-5 of [Debugging Java based +tests](#debugging-java-based-tests)). If `/etc/hosts` entries have changed since the time when Docker containers were provisioned it's worth removing them and re-provisioning. To do so, use `docker rm` on each container used in product tests.
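As a minimal sketch of that cleanup step, assuming the product-test containers carry a `ptl` prefix in their names (the filter is an assumption; verify with `docker ps -a` and adjust it if your environment names containers differently):

```bash
# Inspect leftover product-test containers first; the "name=ptl" filter is an
# assumption about the naming convention -- adjust it to what docker ps shows.
docker ps -a --filter "name=ptl"

# Remove the stale containers so the next environment start re-provisions them
# with the current /etc/hosts entries.
docker rm -f $(docker ps -aq --filter "name=ptl")
```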