
Conversation

@asnare asnare commented Oct 7, 2024

Changes

This PR fixes the documentation for the make_query() fixture: the argument name was incorrect.

@asnare asnare added the documentation Improvements or additions to documentation label Oct 7, 2024
@asnare asnare self-assigned this Oct 7, 2024
@asnare asnare requested a review from nfx as a code owner October 7, 2024 13:32
@asnare asnare changed the title Fix incorrect parameter name in make_query() documentation. Documentation: fix make_query() parameter name Oct 7, 2024
github-actions bot commented Oct 7, 2024

This PR breaks backwards compatibility for databrickslabs/blueprint downstream. See build logs for more details.

Running from downstreams #58

github-actions bot commented Oct 7, 2024

This PR breaks backwards compatibility for databrickslabs/lsql downstream. See build logs for more details.

Running from downstreams #58

github-actions bot commented Oct 7, 2024

✅ 35/35 passed, 3 skipped, 2m20s total

Running from acceptance #99

@nfx nfx left a comment


lgtm

@nfx nfx merged commit 53d2a0f into main Oct 8, 2024
7 of 9 checks passed
@nfx nfx deleted the doc/make-query-fix branch October 8, 2024 08:46
nfx added a commit that referenced this pull request Oct 8, 2024
* Documentation: fix `make_query()` parameter name ([#61](#61)). The `make_query()` fixture's documentation incorrectly referred to its argument as `query`; the correct name is `sql_query`. The `sql_query` parameter specifies the SQL text stored in the fixture and defaults to "SELECT \* FROM <newly created random table>". Using the actual parameter name keeps the documentation consistent with the fixture's signature.
* Removed references to UCX ([#56](#56)). This release removes UCX references from fixture names and descriptions used in testing. The `create` function in `catalog.py` now appends a random string to `dummy_t`, `dummy_s`, or `dummy_c` for table, schema, and catalog names respectively, instead of `ucx_t`, `ucx_S`, and `ucx_C`. The `test_catalog_fixture` function now uses `dummy_c` and `dummy_s` for catalogs and schemas instead of `dummy`, the description of a test query in `redash.py` no longer mentions UCX, and the catalog unit-test fixture names use `dummy` instead of `ucx`. These changes remove technology-specific references from the test suite without affecting functionality.
* Store watchdog tags in storage credentials comment ([#57](#57)). The watchdog now retains properly tagged storage credentials instead of removing all credentials indiscriminately. The `make_storage_credential` fixture gains a `watchdog_remove_after` parameter that specifies when the watchdog may remove the credential; the `create` function accepts this parameter and records it as a comment on the storage credential, while the `remove` function is unchanged. The new `watchdog_remove_after` fixture is documented in the README's related-fixtures section. This change was co-authored by Eric Vergnaud; note that it has not yet been tested.
* [FEATURE] Extend `make_job` to run `SparkPythonTask` ([#60](#60)). The `make_job` fixture now supports running `SparkPythonTask` in addition to notebook tasks, and also handles SQL notebooks and workspace files. A new `make_workspace_file` fixture creates and manages Python files in the workspace. `make_job` gains a `task_type` parameter to select the type of task to run and an `instance_pool_id` parameter to reuse an instance pool for faster job execution during integration tests. The `make_notebook` fixture now accepts a `content` parameter for creating notebooks with custom content, and the `Language` enum from `databricks.sdk.service.workspace` specifies the language of a notebook or workspace file. Unit and integration tests cover the new and modified fixtures.
@nfx nfx mentioned this pull request Oct 8, 2024
nfx added a commit that referenced this pull request Oct 8, 2024
