
Release v0.1.0 #38

Merged 1 commit on Sep 17, 2024

Conversation

@nfx (Collaborator) commented on Sep 17, 2024

  • Added Databricks Connect fixture. A new fixture named spark has been added to the codebase, providing a Databricks Connect Spark session for testing purposes. The fixture requires the databricks-connect package to be installed and takes a WorkspaceClient object as an argument. It first checks whether a cluster_id is present in the environment and, if not, skips the test with an explanatory message. The fixture then ensures that the cluster is running and attempts to import the DatabricksSession class from the databricks.connect module; if the import fails, it skips the test with an explanatory message. This new fixture enables easier testing of Databricks Connect functionality, reducing the boilerplate needed to set up a Spark session within tests (see the usage sketch after this list). Additionally, a new is_in_debug fixture has been added, although no documentation or usage examples are provided for it yet.
  • Added make_*_permissions fixtures. In this release, we have added new fixtures to the pytester plugin for managing permissions in Databricks: make_alert_permissions, make_authorization_permissions, make_cluster_permissions, make_cluster_policy_permissions, make_dashboard_permissions, make_directory_permissions, make_instance_pool_permissions, make_job_permissions, make_notebook_permissions, make_pipeline_permissions, make_query_permissions, make_registered_model_permissions, make_repository_permissions, make_serving_endpoint_permissions, make_warehouse_permissions, make_workspace_file_permissions, and make_workspace_file_path_permissions. These fixtures simplify testing of functionality that manages permissions for Databricks resources such as alerts, authorization, clusters, cluster policies, dashboards, directories, instance pools, jobs, notebooks, pipelines, queries, registered models, repositories, serving endpoints, warehouses, and workspace files (see the sketch after this list). Additionally, a new make_notebook_permissions fixture has been introduced in the test_permissions.py file for integration tests, allowing more comprehensive testing of the IAM system's behavior when handling notebook permissions.
  • Added make_catalog fixture. A new fixture, make_catalog, has been added to the codebase to facilitate testing with specific catalogs, ensuring isolation and reproducibility. This fixture creates a catalog, returns its information, and removes the catalog after the test is complete. It can be used in conjunction with other fixtures such as ws, sql_backend, and make_random. The fixture is utilized in the updated test_catalog_fixture integration test function, which now takes the new arguments make_catalog, make_schema, and make_table; these fixtures create catalog, schema, and table objects, enabling more comprehensive testing of catalog, schema, and table creation (a combined usage sketch appears after this list). Please note that catalogs created using this fixture are not currently protected from deletion by the watchdog.
  • Added make_catalog, make_schema, and make_table fixtures (#33). In this release, we have updated the databricks-labs-blueprint package dependency to databricks-labs-lsql~=0.10 and added several fixtures to the codebase to improve the reliability and maintainability of the test suite. We have introduced three new fixtures, make_catalog, make_schema, and make_table, used for creating and managing test catalogs, schemas, and tables, respectively. These fixtures enable the creation of arbitrary test data and simplify testing by allowing predictable and consistent setup and teardown of test data for integration tests. Additionally, we have added several debugging fixtures, including debug_env_name, debug_env, env_or_skip, and sql_backend, to aid in testing Databricks features related to SQL, environments, and more. The make_udf fixture has also been added for testing user-defined functions in Databricks. These new fixtures will assist in testing the project's functionality and ensure that the code works as intended, making the tests more maintainable and easier to understand.
  • Added make_cluster documentation. The make_cluster fixture has been updated with new functionality and improvements. It now creates a Databricks cluster with the specified configuration, waits for it to start, and cleans it up after the test, returning a function to create clusters; the cluster_id attribute is accessible from the returned object. The fixture accepts several keyword arguments: single_node to create a single-node cluster, cluster_name to specify a cluster name, spark_version to set the Spark version, and autotermination_minutes to determine when the cluster should be automatically terminated (see the sketch after this list). The ws and make_random parameters have been removed. The commit also introduces a new test function, test_cluster, that creates a single-node cluster and outputs a message indicating the creation. Documentation for the make_cluster function has been added, and the make_cluster_policy function remains unchanged.
  • Added make_experiment fixture. In this release, we introduce the make_experiment fixture in the databricks.labs.pytester.fixtures.ml module, facilitating the creation and cleanup of Databricks Experiments for testing purposes. This fixture accepts optional path and experiment_name parameters and returns a databricks.sdk.service.ml.CreateExperimentResponse object (see the sketch after this list). Additionally, make_experiment_permissions has been added for managing experiment permissions. In the permissions.py file, the _make_permissions_factory function replaces the previous _make_redash_permissions_factory, enhancing the code's maintainability and extensibility. Furthermore, a make_experiment fixture has been added to the plugin.py file for creating experiments with custom names and descriptions. Lastly, a test_experiments function has been included in the tests/integration/fixtures directory, utilizing make_group, make_experiment, and make_experiment_permissions fixtures to create experiments and assign group permissions.
  • Added make_instance_pool documentation. In this release, the make_instance_pool fixture has been updated with added documentation, and the usage example has been slightly modified. The fixture now accepts optional keyword arguments for the instance pool name and node type ID, with default values set for each. The make_random fixture is still required for generating unique names. Additionally, a new function, log_workspace_link, has been updated to accept a new parameter anchor for controlling the inclusion of an anchor (#) in the generated URL. New test functions test_instance_pool and test_cluster_policy have been added to enhance the integration testing of the compute system, providing more comprehensive coverage for instance pools and cluster policies. Furthermore, documentation has been added for the make_instance_pool fixture. Lastly, three test functions, test_cluster, test_instance_pool, and test_job, have been removed, but the setup functions for these tests are retained, indicating a possible streamlining of the codebase.
  • Added make_job documentation. The make_job fixture has been updated with additional arguments and improved documentation. It now accepts notebook_path, name, spark_conf, and libraries as optional keyword arguments, and can accept any additional arguments to be passed to the WorkspaceClient.jobs.create method. If no notebook_path or tasks argument is provided, a random notebook is created and run as a single notebook task using the latest Spark version and a single-worker cluster (see the sketch after this list). The fixture has been improved to manage Databricks jobs and clean them up after testing. Additionally, documentation has been added for the make_job function and the test_job function in the test fixtures file. The test_job function, which created a job and logged its creation, has been removed, and the test_cluster and test_pipeline functions remain unchanged. The os module is no longer imported in this file.
  • Added make_model fixture. A new pytest fixture, make_model, has been added to the codebase for the open-source library. This fixture facilitates the creation and automatic cleanup of Databricks Models during tests, returning a GetModelResponse object. The optional model_name parameter allows for customization, with a default value of dummy-*. The make_model fixture can be utilized in conjunction with other fixtures such as ws, make_random, and make_registered_model_permissions, streamlining the testing of model-related functionality. Additionally, a new test function, test_models, has been introduced, utilizing make_model, make_group, and make_registered_model_permissions fixtures to test model management within the system. This new feature enhances the library's testing capabilities, making it easier to create, configure, and manage models and related resources during test execution.
  • Added make_pipeline fixture. A new fixture named make_pipeline has been added to the project, which facilitates the creation and cleanup of a Delta Live Tables pipeline after testing. This fixture is added to the compute.py file and takes optional keyword arguments such as name, libraries, and clusters. If these arguments are not provided, it generates a random name, creates a disposable notebook with random libraries, and creates a single-node cluster with 16GB of memory and local disk. The fixture returns a function to create pipelines, resulting in a CreatePipelineResponse instance (see the sketch after this list). Additionally, a new integration test has been added to exercise this fixture; it logs information about the created pipeline for debugging and inspection purposes. This new fixture improves the testing capabilities of the project, allowing for more robust and flexible tests of pipeline creation and management.
  • Added make_query fixture. In this release, we have added a new fixture called make_query to the plugin module for the Redash integration. This fixture creates a LegacyQuery object for testing query-related functionality in a controlled environment. It can be used in conjunction with the make_user and make_query_permissions fixtures to test query permissions for a specific user (see the sketch after this list). The make_query fixture generates a random query name, creates a table, and uses the ws.queries_legacy.create method to create the query. The query is then deleted using the ws.queries_legacy.delete method after the test is completed. This fixture is utilized in the test_permissions_for_redash function, which creates a user and a query, and then sets the permission level for the query for the created user using the make_query_permissions fixture. This enhancement improves the testing capabilities of the Pytester framework for projects that utilize Redash.
  • Added make_schema fixture. A new make_schema fixture has been added to the open-source library to improve schema management and testing. This fixture creates a schema with an optional catalog name and a schema name, which defaults to a random string. The fixture cleans up the schema after the test is complete and returns an instance of SchemaInfo. It can be used in conjunction with other fixtures such as make_table and make_udf for easier testing and setup of schemas. Additionally, the make_schema fixture includes a new keyword-only argument log_workspace_link to log a link to the created schema in the Databricks workspace. The make_catalog fixture has also been updated to include the log_workspace_link argument for logging links to created catalogs. These changes enhance the testability of the code and provide better catalog and schema management in the Databricks workspace.
  • Added make_serving_endpoint fixture. A new make_serving_endpoint fixture has been added to the codebase, located in baseline.py, ml.py, and plugin.py files, and tests/integration/fixtures/test_ml.py. This fixture enables the creation and deletion of Databricks Serving Endpoints, handling any potential DatabricksError exceptions during teardown. It also creates a model for a small workload size and returns a ServingEndpointDetailed object. The make_serving_endpoint_permissions fixture is introduced as well, creating serving endpoint permissions for a specified object ID, permission level, and group name. New tests have been implemented to demonstrate the usage of these fixtures, showing how to create serving endpoints, grant query permissions to a group, and test the endpoint. Additionally, updates have been made to the README.md file to include documentation for the new fixtures.
  • Added make_storage_credential fixture. In this release, we have added a new fixture called make_storage_credential to our testing utilities. This fixture creates a storage credential with configurable parameters such as credential name, Azure service principal information, AWS IAM role ARN, and read-only status. It can be used to create either an Azure or AWS storage credential, depending on the provided parameters, and removes the created credential after the test. This fixture is implemented in plugin.py and is added to the existing list of fixtures for consistent and easy-to-use testing setup. Additionally, we have introduced an integration test called test_storage_credential in the test catalog for fixtures. This test utilizes the new make_storage_credential fixture and verifies the functionality of creating a storage credential and the integration between the system and storage services. These new additions will make it easier to write tests that require access to storage resources and improve the efficiency and ease of testing and developing new features in the codebase.
  • Added make_table fixture. In this release, we've added the make_table fixture to simplify testing operations on tables and catalogs. This fixture creates a table with a given catalog and schema name, CTAS statement, and properties. It can create the table as a non-delta or delta table, external table with CSV or Delta location, or a view, and allows overriding the storage location. Additionally, we've updated the fixture to include new parameters and functionality, such as logging a workspace link for the created table and specifying the catalog and schema where the table will be created. The fixture now also includes new functions for creating and casting columns in the table. After the test, the fixture automatically removes the created table. This release aims to provide a more customizable and convenient way to test table operations.
  • Added make_udf fixture. The make_udf fixture has been added to facilitate the creation and removal of User-Defined Functions (UDFs) for testing purposes. This fixture creates a UDF with optional parameters to specify the catalog, schema, name, and whether to create a Hive UDF. It returns an instance of databricks.sdk.service.catalog.FunctionInfo, and the UDF is removed after the test. This feature is exercised in the new test_make_some_udfs integration test, which creates two UDFs in a schema within the Hive metastore, one with and one without Hive support (see the sketch after this list). Additionally, the test_create_view test is now skipped, and the test_table_fixture test remains unchanged. This change improves the ability to test UDFs within the Hive metastore and allows for more comprehensive testing by creating UDFs programmatically.
  • Added make_warehouse fixture. A new make_warehouse fixture has been added to the test suite, which allows for the creation and customization of a Databricks warehouse for testing purposes. The fixture accepts optional keyword arguments such as warehouse_name, warehouse_type, cluster_size, max_num_clusters, and enable_serverless_compute, allowing users to configure the warehouse's properties. It returns a function that creates a warehouse using the provided parameters and handles cleanup after the test is complete. Additionally, a corresponding test function, test_warehouse_has_remove_after_tag, has been added to verify that a newly created warehouse has the expected RemoveAfter tag, facilitating automated testing and resource management (see the sketch after this list). This enhancement expands the testing capabilities of the plugin and provides a more streamlined approach to testing functionality related to Databricks warehouses.
  • Added ability to specify custom SQL in make_query. The make_query fixture has been updated to allow for greater customization in testing, with the addition of a new query keyword argument. This parameter enables users to specify a custom SQL query to be stored and executed, with the default value being SELECT * FROM <newly created random table>. The fixture continues to create and remove the LegacyQuery object, making it user-friendly. With this enhancement, users have increased flexibility to tailor their tests to specific needs, providing more targeted and precise testing outcomes.
  • Added documentation for make_cluster_policy. In this release, we introduce new features to enhance testing and managing Databricks cluster policies and workspace link logging in your project. We've added the make_cluster_policy fixture, which simplifies the creation and deletion of cluster policies using a specified workspace. This fixture returns a CreatePolicyResponse instance and can be used within test functions (see the sketch after this list). Additionally, we've developed the log_workspace_link fixture, which constructs and logs a workspace link for debugging and tracking purposes. The make_cluster_policy function is also introduced in the plugin.py file, enabling users to manage and test Databricks cluster policies using the pytester framework. To ensure proper functionality, the test_compute.py file includes a test function for make_cluster_policy. These improvements will help streamline testing processes and enhance the overall user experience.
  • Added documentation for make_group and make_user. In this release, we have introduced the make_group and make_user fixtures to manage Databricks workspace groups and users, respectively. The make_group fixture allows you to create groups with specified members, roles, and entitlements, handling eventual-consistency issues and waiting for group provisioning if required. The make_user fixture creates a user and deletes it after the test, handling naming conflicts by retrying the creation process for 30 seconds. Both fixtures return instances of Group and User, respectively, and have been documented in the README.md with usage examples (see the sketch after this list). Additionally, we have introduced a built-in logger that traces entity creation and deletion through links in the Databricks Workspace UI, and added documentation for the make_group and make_user functions using the gen-readme.py script. The release also includes updates to the conftest.py file in the tests/integration directory, importing the fixture function from pytest and the install_logger and logging modules from databricks.labs.blueprint.logger to improve documentation and configure logging for the project.
  • Added documentation for make_notebook, make_directory, and make_repo. The make_notebook, make_directory, and make_repo fixtures have been updated with new functionality and improved documentation in this release. These fixtures are used in tests to manage Databricks notebooks, directories, and repos respectively, and they now return functions that create resources with specified parameters. The make_notebook fixture now includes optional keyword arguments for path, content, language, format, and overwrite, and returns an os.PathLike object that will be automatically deleted after the test is complete. The make_directory fixture now includes an optional keyword argument for path, and the make_repo fixture now includes optional keyword arguments for url, provider, and path. These fixtures simplify the process of creating and managing Databricks resources in tests and help ensure that resources are properly cleaned up after each test is complete. The commit also includes documentation for the new functionality and integration tests for these fixtures.
  • Added documentation for make_secret_scope and make_secret_scope_acl. In this release, documentation has been added for two new functions, make_secret_scope and make_secret_scope_acl, which are used for creating and managing secret scopes and their associated access control lists (ACLs) in a Databricks Workspace. The make_secret_scope function creates a new secret scope with a unique, randomly generated name and automatically deletes the scope after the test is complete. The make_secret_scope_acl function manages ACLs for secret scopes, defining permissions for principals (users or groups) on specific secret scopes (see the sketch after this list). Three new test functions have also been added to test the functionality of creating secret scopes and managing their ACLs using these new functions. Additionally, type hints have been added to the package to support PEP 561. Overall, these changes improve the documentation and testing of the project, making it easier for developers to understand and use these new functions for managing secret scopes and their ACLs in a Databricks Workspace.
  • Added documentation update on make fmt (#34). In this release, the make fmt command in the documentation has been updated to include an additional step that runs the gen-readme.py script before executing hatch run fmt. This new script generates or updates the README file with detailed documentation on various PyTest fixtures available in the Python Testing for Databricks project. A new Fixture dataclass has been introduced to represent a fixture's metadata, and the databricks.labs.pytester.fixtures.plugin module is used to discover all fixtures. The FIXTURES section in the README.md file has been updated with the new documentation, which includes information on the purpose, parameters, return values, and usage examples for each fixture. The test and lint targets in the Makefile remain unchanged. Please note that this project is not officially supported by Databricks.
  • Added downstream testing. In this enhancement, we have implemented downstream testing in our CI/CD pipeline through a new GitHub Actions workflow, downstreams.yml. This workflow runs tests when pull requests are opened, synchronized, or checked during a merge group, and on pushes to the main branch. The compatibility job runs on the latest version of Ubuntu and includes steps to check out the code with full fetch depth, install Python, install the toolchain, and run the downstreams test suite using the databrickslabs/sandbox/downstreams action. The downstreams matrix includes the blueprint, lsql, ucx, and remorph repositories in the databrickslabs organization. The GITHUB_TOKEN environment variable is used for authentication. This improvement will help ensure that our codebase remains stable and functional as we continue to develop and release new features.
  • Added note on UCX project. In this release, the library documentation notes the project's place in the UCX ecosystem. UCX, the Databricks Labs open-source toolkit for automated migrations to Unity Catalog, builds on this library, particularly for automated migrations and static code analysis. The library, developed as part of the Unity Catalog Automated Migrations project, has also updated its list of authors and maintainers, adding Vuong Nguyen, Lars George, Cor Zuurmond, Andrew Snare, and Pritish Pai, and removing Liran Bareket and Vuong Nguyen. The logging section has also been improved, based on years of debugging integration tests for Databricks and its ecosystem, simplifying integration testing with Databricks for other projects.
  • Added support for .env files (#36). In this change, we have added support for .env files to the open-source library, allowing for local debugging and integration tests in IDEs. A new debug_env_name fixture has been introduced, which enables specifying the name of the debug environment, with a default value of .env. If there are security concerns about using .env files, a ~/.databricks/debug-env.json file can be used instead. Additionally, we have updated the gen-readme.py script and the Fixture class to improve documentation and describe the relationships between fixtures and .env files. The debug_env fixture has been added to read a debug-env.json file when the code is running in debug mode, and the env_or_skip fixture has been updated to skip tests if required environment variables are not set (see the sketch after this list). These changes improve the testing capabilities of the library, allowing for easier management and integration of environment variables in tests.
  • Added supporting documents. In this release, we introduce a new changelog file for the project, versioned at 0.0.0, to record notable changes over time. Additionally, we have added a CODEOWNERS file, designating @nfx as the default code owner for all files in the repository, and a CONTRIBUTING.md file that provides detailed guidelines for contributing to the project. The CONTRIBUTING.md file covers a wide range of topics, including first principles, change management, code organization, adding new fixtures, common mypy error fixes, integration testing infrastructure, local setup, first contribution, and troubleshooting. These additions aim to improve code quality, maintainability, and collaboration for the project's developers and users.
  • Added telemetry tracking. A new telemetry tracking feature has been implemented in the project with the addition of the with_user_agent_extra method in the __init__.py file. This method, sourced from the databricks.sdk.core package, attaches an extra user agent string, which includes the version of the pytester project, to HTTP requests. The __version__ variable in the __about__.py file is used to ensure the specific version of the pytester project is incorporated in the user agent string. This enhancement enables tracking of project usage and statistics through user agents, providing valuable insights for future development and improvements.
  • Added unit testing for test fixtures. In this release, we have added comprehensive unit tests for various entities in our codebase, such as alerts, authorization permissions, catalog, cluster, cluster policies, dashboard permissions, directories, experiments, feature table permissions, groups, instance pools, instance pool permissions, jobs, job permissions, lakeview dashboard permissions, models, notebooks, notebook permissions, pipelines, pipeline permissions, queries, query permissions, registered model permissions, repos, repo permissions, secret scopes, secret scope ACLs, serving endpoints, serving endpoint permissions, storage credentials, UDFs, users, warehouses, warehouse permissions, workspace file path permissions, and workspace file permissions. Additionally, we have updated fixtures such as sql_backend, workspace_library, debug_env, and product_info with tests and provided examples on how to use these fixtures in the code. We have also updated our configuration files to improve code quality, maintainability, and reliability, including updating the version of mypy, adding the unit package to the known-first-party modules in isort configuration, and updating the ignore list for pylint. Furthermore, we have added a new unwrap.py file to the databricks/labs/pytester/fixtures directory to support unit testing of pytest fixtures. We have also added unit tests for test fixtures in various files, ensuring that the fixtures behave as expected, thereby improving the reliability and stability of the codebase. Lastly, we have added a new unit test file for testing catalog functionality, specifically for the make_table function, which creates a new managed table with a specified schema and table type.
  • Bump unit testing coverage. This commit enhances the unit testing coverage and improves the overall code quality of the open-source library. Several changes have been introduced, including the addition of new fixtures sql_backend, sql_exec, and sql_fetch_all for testing SQL-related functionality in the Databricks platform. These fixtures are demonstrated in the newly added random_string test case. The commit also introduces a new section exclude_also under the "[tool.mypy]" section in the pyproject.toml file, which provides more precise control over the lines checked during mypy type checking. Furthermore, the environment.py file has been removed, and several SQL backend and test resource purge time-related fixtures have been deleted, resulting in increased unit testing coverage. Additionally, the catalog.py and compute.py files in the databricks/labs/pytester/fixtures directory have been updated to improve resource management and ensure proper handling after tests are executed. The permissions.py file has been modified to remove the sql/ prefix from permission paths for dashboards, alerts, and queries, simplifying the permission hierarchy in the tests. The plugin.py file has been updated to reorganize SQL and environment-related functions, making them more modular and maintainable. Finally, new utility fixtures watchdog_remove_after and watchdog_purge_suffix have been added in the watchdog.py file to manage and purge test objects as needed, and a new file, .env, has been added to the tests/unit/fixtures/ directory to provide consistent testing conditions. These changes contribute to a better testing environment and improved overall project quality.
  • Prettify fixture documentation (#35). In this release, the documentation of the ws fixture in the Databricks testing project has been significantly enhanced in the README file. The ws fixture now has more comprehensive documentation, including its purpose, usage example, and the fact that it is built on top of other fixtures. Additionally, the Fixture class in the gen-readme.py script has been improved for better readability and clarity. The make_random function in the baseline.py file has been refactored for improved documentation and clarity, with updated usage examples and the removal of a deprecated Returns section. These changes aim to provide clearer and more comprehensive documentation for users, making it easier to understand and utilize the features effectively.
  • Updated README.md. In this update, we have added several PyTest fixtures to enhance testing capabilities in the Databricks workspace. These fixtures include make_warehouse_permissions, make_lakeview_dashboard_permissions, log_workspace_link, make_dashboard_permissions, make_alert_permissions, make_query_permissions, make_experiment_permissions, make_registered_model_permissions, make_serving_endpoint_permissions, and make_feature_table_permissions. These additions enable easier testing of various functionalities and linking within the workspace. Furthermore, we have included the make_authorization_permissions fixture to facilitate testing of authorization functionalities. To aid in debugging, we have updated the Logging section with the debug_env_name and debug_env fixtures. Lastly, we have added the workspace_library fixture for testing library-related functionalities in the workspace. These changes improve the overall testing experience and enable more comprehensive testing within the Databricks workspace.
  • Updated pytest requirement from ~=8.1.0 to ~=8.3.3 (#31). In this pull request, we update the pytest requirement from version 8.1.0 to 8.3.3 in our pyproject.toml file. This update includes several bug fixes and improvements for our testing framework, such as avoiding the calling of properties during fixture discovery, fixing the issue of not displaying assertion failure differences with the --import-mode=importlib option in pytest 8.1 and above, and addressing a regression that caused mypy to fail. Additionally, we fix typing compatibility with Python 3.9 or less by replacing typing.Self with typing_extensions.Self. This update also ensures consistent path handling across environments by fixing an issue with backslashes being incorrectly converted in nodeid paths on Windows.
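A minimal usage sketch for the spark fixture described above; it assumes a Databricks Connect-compatible cluster is reachable from the test environment (otherwise the fixture skips the test):

```python
def test_spark_session(spark):
    # `spark` is a Databricks Connect session provided by the new fixture;
    # the test is skipped when databricks-connect is not installed or when
    # no cluster_id is configured in the environment.
    row = spark.sql("SELECT 1 AS one").collect()[0]
    assert row.one == 1
```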
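A sketch of one of the make_*_permissions fixtures, granting a group run access to a notebook. PermissionLevel comes from the Databricks SDK; the exact keyword names (object_id, permission_level, group_name) are assumptions based on the pattern described above:

```python
from databricks.sdk.service.iam import PermissionLevel

def test_notebook_permissions(make_notebook, make_group, make_notebook_permissions):
    group = make_group()
    notebook = make_notebook()  # returns an os.PathLike for the workspace path
    make_notebook_permissions(
        object_id=str(notebook),
        permission_level=PermissionLevel.CAN_RUN,
        group_name=group.display_name,
    )
```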
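The test_catalog_fixture test mentioned above chains make_catalog, make_schema, and make_table; a minimal sketch, with the catalog_name and schema_name keyword names assumed from the descriptions above:

```python
import logging

logger = logging.getLogger(__name__)

def test_catalog_fixture(make_catalog, make_schema, make_table):
    from_catalog = make_catalog()
    from_schema = make_schema(catalog_name=from_catalog.name)
    from_table = make_table(catalog_name=from_catalog.name, schema_name=from_schema.name)
    logger.info(f"Created new table: {from_table.full_name}")
```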
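A sketch of the make_cluster fixture, using the keyword arguments listed above:

```python
def test_cluster(make_cluster):
    # single_node, cluster_name, spark_version and autotermination_minutes
    # are the keyword arguments described in the make_cluster entry.
    cluster = make_cluster(single_node=True, autotermination_minutes=10)
    assert cluster.cluster_id is not None
```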
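A sketch of the test_experiments pattern described above, combining make_group, make_experiment, and make_experiment_permissions; the keyword names follow the same assumed pattern as the other permission fixtures:

```python
from databricks.sdk.service.iam import PermissionLevel

def test_experiments(make_group, make_experiment, make_experiment_permissions):
    group = make_group()
    experiment = make_experiment()  # CreateExperimentResponse
    make_experiment_permissions(
        object_id=experiment.experiment_id,
        permission_level=PermissionLevel.CAN_MANAGE,
        group_name=group.display_name,
    )
```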
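A minimal make_job sketch; reading job_id off the returned object is an assumption consistent with the Jobs API:

```python
def test_job(make_job):
    # With no notebook_path or tasks given, the fixture creates a random
    # notebook and runs it as a single notebook task (see the entry above).
    job = make_job(name="dummy-job")
    assert job.job_id is not None
```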
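A minimal make_pipeline sketch; pipeline_id is a field of the CreatePipelineResponse mentioned above:

```python
def test_pipeline(make_pipeline):
    # name, libraries and clusters are optional; defaults are generated.
    pipeline = make_pipeline()
    assert pipeline.pipeline_id is not None
```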
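A sketch of the Redash testing flow described above, including the optional custom-SQL query argument; the keyword names for make_query_permissions are assumptions following the same pattern as the other permission fixtures:

```python
from databricks.sdk.service.iam import PermissionLevel

def test_permissions_for_redash(make_user, make_query, make_query_permissions):
    user = make_user()
    query = make_query()  # or make_query(query="SELECT 42 AS answer") for custom SQL
    make_query_permissions(
        object_id=query.id,
        permission_level=PermissionLevel.CAN_EDIT,
        user_name=user.display_name,
    )
```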
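A sketch of the test_make_some_udfs flow described above; the hive_udf keyword name is an assumption for the "Hive UDF creation" option:

```python
def test_make_some_udfs(make_schema, make_udf):
    schema = make_schema(catalog_name="hive_metastore")
    make_udf(schema_name=schema.name)                 # regular UDF
    make_udf(schema_name=schema.name, hive_udf=True)  # Hive UDF
```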
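A sketch of the test_warehouse_has_remove_after_tag check described above; accessing the new warehouse id via .response.id assumes the fixture returns the SDK's create waiter:

```python
def test_warehouse_has_remove_after_tag(ws, make_warehouse):
    new_warehouse = make_warehouse()
    # look the warehouse up again to inspect its server-side tags
    created = ws.warehouses.get(new_warehouse.response.id)
    tags = created.tags.as_dict()
    assert tags["custom_tags"][0]["key"] == "RemoveAfter"
```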
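A minimal make_cluster_policy sketch, relying on the CreatePolicyResponse return type noted above:

```python
def test_cluster_policy(make_cluster_policy):
    policy = make_cluster_policy()  # CreatePolicyResponse
    assert policy.policy_id is not None
```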
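A sketch combining make_user and make_group as described above, verifying group membership through the workspace client:

```python
def test_group_with_member(ws, make_group, make_user):
    user = make_user()
    group = make_group(members=[user.id])
    loaded = ws.groups.get(group.id)
    assert user.id in {member.value for member in loaded.members}
```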
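A sketch of secret scope ACL management as described above; AclPermission comes from the Databricks SDK, while the scope, principal, and permission keyword names, and the fixture returning the generated scope name, are assumptions:

```python
from databricks.sdk.service.workspace import AclPermission

def test_secret_scope_acl(make_secret_scope, make_secret_scope_acl, make_group):
    scope = make_secret_scope()  # assumed to return the generated scope name
    group = make_group()
    make_secret_scope_acl(scope=scope, principal=group.display_name, permission=AclPermission.WRITE)
```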
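A sketch of env_or_skip from the .env support described above; the variable name is hypothetical:

```python
def test_needs_cluster_id(env_or_skip):
    # env_or_skip skips the test when the variable is not provided by the
    # environment, the .env file, or ~/.databricks/debug-env.json.
    cluster_id = env_or_skip("DATABRICKS_CLUSTER_ID")
    assert cluster_id
```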

Dependency updates:

  • Updated pytest requirement from ~=8.1.0 to ~=8.3.3 (#31).

* Added Databricks Connect fixture. A new fixture named `spark` has been added to the codebase, providing a Databricks Connect Spark session for testing purposes. The fixture requires the `databricks-connect` package to be installed and takes a `WorkspaceClient` object as an argument. It first checks if a `cluster_id` is present in the environment, and if not, it skips the test and raises a message. The fixture then ensures that the cluster is running and attempts to import the `DatabricksSession` class from the `databricks.connect` module. If the import fails, it skips the test and raises a message. This new fixture enables easier testing of Databricks Connect functionality, reducing boilerplate code required to set up a Spark session within tests. Additionally, a new `is_in_debug` fixture has been added, although there is no further documentation or usage examples provided for it.
* Added `make_*_permissions` fixtures. In this release, we have added new fixtures to the pytester plugin for managing permissions in Databricks. These fixtures include `make_alert_permissions`, `make_authorization_permissions`, `make_cluster_permissions`, `make_cluster_policy_permissions`, `make_dashboard_permissions`, `make_directory_permissions`, `make_instance_pool_permissions`, `make_job_permissions`, `make_notebook_permissions`, `make_pipeline_permissions`, `make_query_permissions`, `make_registered_model_permissions`, `make_repository_permissions`, `make_serving_endpoint_permissions`, `make_warehouse_permissions`, `make_workspace_file_permissions`, and `make_workspace_file_path_permissions`. These fixtures allow for easier testing of functionality that requires managing permissions in Databricks, and are used for managing permissions for various Databricks resources such as alerts, authorization, clusters, cluster policies, dashboards, directories, instance pools, jobs, notebooks, pipelines, queries, registered models, repositories, serving endpoints, warehouses, and workspace files. Additionally, a new `make_notebook_permissions` fixture has been introduced in the `test_permissions.py` file for integration tests, which allows for more comprehensive testing of the IAM system's behavior when handling notebook permissions.
* Added `make_catalog` fixture. A new fixture, `make_catalog`, has been added to the codebase to facilitate testing with specific catalogs, ensuring isolation and reproducibility. This fixture creates a catalog, returns its information, and removes the catalog after the test is complete. It can be used in conjunction with other fixtures such as `ws`, `sql_backend`, and `make_random`. The fixture is utilized in the updated `test_catalog_fixture` integration test function, which now includes new arguments `make_catalog`, `make_schema`, and `make_table`. These fixtures create catalog, schema, and table objects, enabling more comprehensive testing of the catalog, schema, and table creation functionality. Please note that catalogs created using this fixture are not currently protected from being deleted by the watchdog.
* Added `make_catalog`, `make_schema`, and `make_table` fixtures ([#33](#33)). In this release, we have updated the `databricks-labs-blueprint` package dependency to `databricks-labs-lsql~=0.10` and added several fixtures to the codebase to improve the reliability and maintainability of the test suite. We have introduced three new fixtures `make_catalog`, `make_schema`, and `make_table` that are used for creating and managing test catalogs, schemas, and tables, respectively. These fixtures enable the creation of arbitrary test data and simplify testing by allowing predictable and consistent setup and teardown of test data for integration tests. Additionally, we have added several debugging fixtures, including `debug_env_name`, `debug_env`, `env_or_skip`, and `sql_backend`, to aid in testing DataBricks features related to SQL, environments, and more. The `make_udf` fixture has also been added for testing user-defined functions in DataBricks. These new fixtures and methods will assist in testing the project's functionality and ensure that the code is working as intended, making the tests more maintainable and easier to understand.
* Added `make_cluster` documentation. The `make_cluster` fixture has been updated with new functionality and improvements. It now creates a Databricks cluster with specified configurations, waits for it to start, and cleans it up after the test, returning a function to create clusters. The `cluster_id` attribute is accessible from the returned object. The fixture accepts several keyword arguments: `single_node` to create a single-node cluster, `cluster_name` to specify a cluster name, `spark_version` to set the Spark version, and `autotermination_minutes` to determine when the cluster should be automatically terminated. The `ws` and `make_random` parameters have been removed. The commit also introduces a new test function, `test_cluster`, that creates a single-node cluster and outputs a message indicating the creation. Documentation for the `make_cluster` function has been added, and the `make_cluster_policy` function remains unchanged.
* Added `make_experiment` fixture. In this release, we introduce the `make_experiment` fixture in the `databricks.labs.pytester.fixtures.ml` module, facilitating the creation and cleanup of Databricks Experiments for testing purposes. This fixture accepts optional `path` and `experiment_name` parameters and returns a `databricks.sdk.service.ml.CreateExperimentResponse` object. Additionally, `make_experiment_permissions` has been added for managing experiment permissions. In the `permissions.py` file, the `_make_permissions_factory` function replaces the previous `_make_redash_permissions_factory`, enhancing the code's maintainability and extensibility. Furthermore, a `make_experiment` fixture has been added to the `plugin.py` file for creating experiments with custom names and descriptions. Lastly, a `test_experiments` function has been included in the `tests/integration/fixtures` directory, utilizing `make_group`, `make_experiment`, and `make_experiment_permissions` fixtures to create experiments and assign group permissions.
* Added `make_instance_pool` documentation. In this release, the `make_instance_pool` fixture has been updated with added documentation, and the usage example has been slightly modified. The fixture now accepts optional keyword arguments for the instance pool name and node type ID, with default values set for each. The `make_random` fixture is still required for generating unique names. Additionally, a new function, `log_workspace_link`, has been updated to accept a new parameter `anchor` for controlling the inclusion of an anchor (`#`) in the generated URL. New test functions `test_instance_pool` and `test_cluster_policy` have been added to enhance the integration testing of the compute system, providing more comprehensive coverage for instance pools and cluster policies. Furthermore, documentation has been added for the `make_instance_pool` fixture. Lastly, three test functions, `test_cluster`, `test_instance_pool`, and `test_job`, have been removed, but the setup functions for these tests are retained, indicating a possible streamlining of the codebase.
* Added `make_job` documentation. The `make_job` fixture has been updated with additional arguments and improved documentation. It now accepts `notebook_path`, `name`, `spark_conf`, and `libraries` as optional keyword arguments, and can accept any additional arguments to be passed to the `WorkspaceClient.jobs.create` method. If no `notebook_path` or `tasks` argument is provided, a random notebook is created and a single task with a notebook task is run using the latest Spark version and a single worker cluster. The fixture has been improved to manage Databricks jobs and clean them up after testing. Additionally, documentation has been added for the `make_job` function and the `test_job` function in the test fixtures file. The `test_job` function, which created a job and logged its creation, has been removed, and the `test_cluster` and `test_pipeline` functions remain unchanged. The `os` module is no longer imported in this file.
* Added `make_model` fixture. A new pytest fixture, `make_model`, has been added to the codebase for the open-source library. This fixture facilitates the creation and automatic cleanup of Databricks Models during tests, returning a `GetModelResponse` object. The optional `model_name` parameter allows for customization, with a default value of `dummy-*`. The `make_model` fixture can be utilized in conjunction with other fixtures such as `ws`, `make_random`, and `make_registered_model_permissions`, streamlining the testing of model-related functionality. Additionally, a new test function, `test_models`, has been introduced, utilizing `make_model`, `make_group`, and `make_registered_model_permissions` fixtures to test model management within the system. This new feature enhances the library's testing capabilities, making it easier to create, configure, and manage models and related resources during test execution.
* Added `make_pipeline` fixture. A new fixture named `make_pipeline` has been added to the project, which facilitates the creation and cleanup of a Delta Live Tables Pipeline after testing. This fixture is added to the `compute.py` file and takes optional keyword arguments such as `name`, `libraries`, and `clusters`. It generates a random name, creates a disposable notebook with random libraries, and creates a single node cluster with 16GB memory and local disk if these arguments are not provided. The fixture returns a function to create pipelines, resulting in a `CreatePipelineResponse` instance. Additionally, a new integration test has been added to test the functionality of this fixture, and it logs information about the created pipeline for debugging and inspection purposes. This new fixture improves the testing capabilities of the project, allowing for more robust and flexible tests of pipeline creation and management.
* Added `make_query` fixture. In this release, we have added a new fixture called `make_query` to the plugin module for the Redash integration. This fixture creates a `LegacyQuery` object for testing query-related functionality in a controlled environment. It can be used in conjunction with the `make_user` and `make_query_permissions` fixtures to test query permissions for a specific user. The `make_query` fixture generates a random query name, creates a table, and uses the `ws.queries_legacy.create` method to create the query. The query is then deleted using the `ws.queries_legacy.delete` method after the test is completed. This fixture is utilized in the `test_permissions_for_redash` function, which creates a user and a query, and then sets the permission level for the query for the created user using the `make_query_permissions` fixture. This enhancement improves the testing capabilities of the Pytester framework for projects that utilize Redash.
* Added `make_schema` fixture. A new `make_schema` fixture has been added to the open-source library to improve schema management and testing. This fixture creates a schema with an optional catalog name and a schema name, which defaults to a random string. The fixture cleans up the schema after the test is complete and returns an instance of `SchemaInfo`. It can be used in conjunction with other fixtures such as `make_table` and `make_udf` for easier testing and setup of schemas. Additionally, the `make_schema` fixture includes a new keyword-only argument `log_workspace_link` to log a link to the created schema in the Databricks workspace. The `make_catalog` fixture has also been updated to include the `log_workspace_link` argument for logging links to created catalogs. These changes enhance the testability of the code and provide better catalog and schema management in the Databricks workspace.
* Added `make_serving_endpoint` fixture. A new `make_serving_endpoint` fixture has been added to the codebase, located in `baseline.py`, `ml.py`, and `plugin.py` files, and `tests/integration/fixtures/test_ml.py`. This fixture enables the creation and deletion of Databricks Serving Endpoints, handling any potential DatabricksError exceptions during teardown. It also creates a model for a small workload size and returns a `ServingEndpointDetailed` object. The `make_serving_endpoint_permissions` fixture is introduced as well, creating serving endpoint permissions for a specified object ID, permission level, and group name. New tests have been implemented to demonstrate the usage of these fixtures, showing how to create serving endpoints, grant query permissions to a group, and test the endpoint. Additionally, updates have been made to the README.md file to include documentation for the new fixtures.
* Added `make_storage_credential` fixture. In this release, we have added a new fixture called `make_storage_credential` to our testing utilities. This fixture creates a storage credential with configurable parameters such as credential name, Azure service principal information, AWS IAM role ARN, and read-only status. It can be used to create either an Azure or AWS storage credential, depending on the provided parameters, and removes the created credential after the test. This fixture is implemented in `plugin.py` and is added to the existing list of fixtures for consistent and easy-to-use testing setup. Additionally, we have introduced an integration test called `test_storage_credential` in the test catalog for fixtures. This test utilizes the new `make_storage_credential` fixture and verifies the functionality of creating a storage credential and the integration between the system and storage services. These new additions will make it easier to write tests that require access to storage resources and improve the efficiency and ease of testing and developing new features in the codebase.
* Added `make_table` fixture. In this release, we've added the `make_table` fixture to simplify testing operations on tables and catalogs. This fixture creates a table with a given catalog and schema name, CTAS statement, and properties. It can create the table as a non-delta or delta table, external table with CSV or Delta location, or a view, and allows overriding the storage location. Additionally, we've updated the fixture to include new parameters and functionality, such as logging a workspace link for the created table and specifying the catalog and schema where the table will be created. The fixture now also includes new functions for creating and casting columns in the table. After the test, the fixture automatically removes the created table. This release aims to provide a more customizable and convenient way to test table operations.
* Added `make_udf` fixture. The `make_udf` fixture has been added to facilitate the creation and removal of User-Defined Functions (UDFs) for testing purposes. This fixture creates a UDF with optional parameters to specify catalog, schema, name, and Hive UDF creation. It returns an instance of `databricks.sdk.service.catalog.FunctionInfo`. The UDF is removed after the test. This feature is utilized in the new `test_make_some_udfs` integration test, where it creates two UDFs in a schema within the Hive metastore, one with and one without Hive support. Additionally, the `test_create_view` test is now skipped, and the `test_table_fixture` test remains unchanged. This change improves the ability to test UDFs within the Hive metastore, and allows for more comprehensive testing by creating UDFs programmatically.
* Added `make_warehouse` fixture. A new `make_warehouse` fixture has been added to the test suite, which allows for the creation and customization of a Databricks warehouse for testing purposes. The fixture accepts optional keyword arguments such as `warehouse_name`, `warehouse_type`, `cluster_size`, `max_num_clusters`, and `enable_serverless_compute`, allowing users to configure the warehouse's properties. It returns a function that creates a warehouse using the provided parameters and handles cleanup after the test is complete. Additionally, a corresponding test function `test_warehouse_has_remove_after_tag` has been added to verify that a newly created warehouse has the expected `RemoveAfter` tag, facilitating automated testing and resource management. This enhancement expands the testing capabilities of the plugin and provides a more streamlined approach to testing functionality related to Databricks warehouses.
* Added ability to specify custom SQL in `make_query`. The `make_query` fixture has been updated to allow for greater customization in testing, with the addition of a new `query` keyword argument. This parameter enables users to specify a custom SQL query to be stored and executed, with the default value being `SELECT * FROM <newly created random table>`. The fixture continues to create and remove the `LegacyQuery` object, making it user-friendly. With this enhancement, users have increased flexibility to tailor their tests to specific needs, providing more targeted and precise testing outcomes.
* Added documentation for `make_cluster_policy`. In this release, we introduce new features to enhance testing and managing Databricks cluster policies and workspace link logging in your project. We've added the `make_cluster_policy` fixture, which simplifies the creation and deletion of cluster policies using a specified workspace. This fixture returns a `CreatePolicyResponse` instance and can be used within test functions. Additionally, we've developed the `log_workspace_link` fixture, which constructs and logs a workspace link for debugging and tracking purposes. The `make_cluster_policy` function is also introduced in the `plugin.py` file, enabling users to manage and test Databricks cluster policies using the pytester framework. To ensure proper functionality, the `test_compute.py` file includes a test function for `make_cluster_policy`. These improvements will help streamline testing processes and enhance the overall user experience.
* Added documentation for `make_group` and `make_user`. In this release, we have introduced the `make_group` and `make_user` fixtures to manage Databricks workspace groups and users, respectively. The `make_group` fixture allows you to create groups with specified members, roles, and entitlements, handling eventual consistency issues and waiting for group provisioning if required. The `make_user` fixture creates a user and deletes it after the test, handling naming conflicts by retrying the creation process for 30 seconds. Both fixtures return instances of `Group` and `User`, respectively, and have been documented in the README.md with usage examples. Additionally, we have introduced a built-in logger that traces entity creation and deletion through links in the Databricks Workspace UI, and added documentation for the `make_group` and `make_user` functions using the `gen-readme.py` script. The release also includes updates to the `conftest.py` file in the `tests/integration` directory, importing the `fixture` function from `pytest` and the `install_logger` and `logging` modules from `databricks.labs.blueprint.logger` to improve documentation and configure logging for the project.
* Added documentation for `make_notebook`, `make_directory`, and `make_repo`. The `make_notebook`, `make_directory`, and `make_repo` fixtures have been updated with new functionality and improved documentation in this release. These fixtures are used in tests to manage Databricks notebooks, directories, and repos respectively, and they now return functions that create resources with specified parameters. The `make_notebook` fixture now includes optional keyword arguments for `path`, `content`, `language`, `format`, and `overwrite`, and returns an `os.PathLike` object that will be automatically deleted after the test is complete. The `make_directory` fixture now includes an optional keyword argument for `path`, and the `make_repo` fixture now includes optional keyword arguments for `url`, `provider`, and `path`. These fixtures simplify the process of creating and managing Databricks resources in tests and help ensure that resources are properly cleaned up after each test is complete. The commit also includes documentation for the new functionality and integration tests for these fixtures.
* Added documentation for `make_secret_scope` and `make_secret_scope_acl`. In this release, documentation has been added for two new functions, `make_secret_scope` and `make_secret_scope_acl`, which are used for creating and managing secret scopes and their associated access control lists (ACLs) in a Databricks Workspace. The `make_secret_scope` function creates a new secret scope with a unique name generated using a random name generator, and automatically deletes the scope after the test is complete. The `make_secret_scope_acl` function manages ACLs for secret scopes, defining permissions for principals (users or groups) on specific secret scopes. Three new test functions have also been added to test the functionality of creating secret scopes and managing their ACLs using these new functions. Additionally, type hints have been added to the package to support PEP 561. Overall, these changes improve the documentation and testing of the project, making it easier for developers to understand and use these new functions for managing secret scopes and their ACLs in a Databricks Workspace.
* Added documentation update on `make fmt` ([#34](#34)). In this release, the `make fmt` command in the documentation has been updated to include an additional step that runs the `gen-readme.py` script before executing `hatch run fmt`. This new script generates or updates the README file with detailed documentation on various PyTest fixtures available in the Python Testing for Databricks project. A new `Fixture` dataclass has been introduced to represent a fixture's metadata, and the `databricks.labs.pytester.fixtures.plugin` module is used to discover all fixtures. The `FIXTURES` section in the README.md file has been updated with the new documentation, which includes information on the purpose, parameters, return values, and usage examples for each fixture. The `test` and `lint` targets in the Makefile remain unchanged. Please note that this project is not officially supported by Databricks.
* Added downstream testing. In this enhancement, we have implemented downstream testing in our CI/CD pipeline through the introduction of a new GitHub Actions workflow called "downstreams.yml." This workflow runs tests when pull requests are opened, synchronized, or checked during a merge group, and on pushes to the main branch. The job compatibility is set up to run on the latest version of Ubuntu, and it includes steps to checkout the code with a full fetch depth, install Python, install the toolchain, and run the downstreams test suite using the databrickslabs/sandbox/downstreams action. The downstreams matrix includes the blueprint, lsql, ucx, and remorph repositories in the databrickslabs organization. The GITHUB_TOKEN environment variable is used for authentication. This improvement will help ensure that our codebase remains stable and functional as we continue to develop and release new features.
* Added note on UCX project. In the 2024 release, the open-source library has undergone significant updates, incorporating the UCX project into its ecosystem. UCX, an open-source project providing a unified communication layer for various high-performance computing (HPC) platforms, enhances the library's functionality, particularly in automated migrations and static code analysis. The library, developed as part of the Unity Catalog Automated Migrations project, has also added new authors and maintainers, including Vuong Nguyen, Lars George, Cor Zuurmond, Andrew Snare, Pritish Pai, and removed Liran Bareket and Vuong Nguyen, indicating potential new contributions and teams involved. The logging section has also been improved, based on years of debugging integration tests for Databricks and its ecosystem, simplifying integration testing with Databricks for other projects.
* Added support for `.env` files ([#36](#36)). The library now supports `.env` files, enabling local debugging and integration tests in IDEs. A new `debug_env_name` fixture specifies the name of the debug environment file, defaulting to `.env`; if keeping secrets in `.env` files is a concern, a `~/.databricks/debug-env.json` file can be used instead. The `gen-readme.py` script and the `Fixture` class have been updated to document the relationships between fixtures and `.env` files. The `debug_env` fixture reads the `debug-env.json` file when the code runs in debug mode, and the `env_or_skip` fixture now skips tests when required environment variables are not set (see the sketch below). These changes make it easier to manage and inject environment variables in tests.
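
  A minimal sketch of both fixtures in use; the file name `ucx.env` and the `TEST_DEFAULT_WAREHOUSE_ID` variable are illustrative:

  ```python
  import pytest

  @pytest.fixture
  def debug_env_name():
      # point the plugin at a project-specific env file instead of the default `.env`
      return "ucx.env"

  def test_needs_a_warehouse(env_or_skip):
      # skipped unless the variable is present in the (debug) environment
      warehouse_id = env_or_skip("TEST_DEFAULT_WAREHOUSE_ID")
      assert warehouse_id
  ```
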
* Added supporting documents. In this release, we introduce a new changelog file for the project, versioned at 0.0.0, to record notable changes over time. Additionally, we have added a CODEOWNERS file, designating @nfx as the default code owner for all files in the repository, and a CONTRIBUTING.md file that provides detailed guidelines for contributing to the project. The CONTRIBUTING.md file covers a wide range of topics, including first principles, change management, code organization, adding new fixtures, common mypy error fixes, integration testing infrastructure, local setup, first contribution, and troubleshooting. These additions aim to improve code quality, maintainability, and collaboration for the project's developers and users.
* Added telemetry tracking. The project now attaches telemetry metadata to outgoing requests via the `with_user_agent_extra` method in `__init__.py`. This method, imported from `databricks.sdk.core`, appends an extra user-agent component to HTTP requests that includes the version of the `pytester` project, taken from the `__version__` variable in `__about__.py`. This allows project usage and statistics to be tracked through user agents, providing valuable insight for future development and improvements; see the sketch below.
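
  A sketch of the wiring described above, assuming the conventional hatch-style `__about__.py` module:

  ```python
  from databricks.sdk.core import with_user_agent_extra

  from databricks.labs.pytester.__about__ import __version__

  # append `pytester/<version>` to the user agent of outgoing HTTP requests
  with_user_agent_extra("pytester", __version__)
  ```
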
* Added unit testing for test fixtures. This release adds comprehensive unit tests for the fixtures covering alerts, authorization permissions, catalogs, clusters, cluster policies, dashboard permissions, directories, experiments, feature table permissions, groups, instance pools, instance pool permissions, jobs, job permissions, lakeview dashboard permissions, models, notebooks, notebook permissions, pipelines, pipeline permissions, queries, query permissions, registered model permissions, repos, repo permissions, secret scopes, secret scope ACLs, serving endpoints, serving endpoint permissions, storage credentials, UDFs, users, warehouses, warehouse permissions, workspace file path permissions, and workspace file permissions. The `sql_backend`, `workspace_library`, `debug_env`, and `product_info` fixtures now have tests and usage examples. Configuration updates improve code quality, maintainability, and reliability: the mypy version was bumped, the `unit` package was added to the known-first-party modules in the isort configuration, and the pylint ignore list was updated. A new `unwrap.py` file in the `databricks/labs/pytester/fixtures` directory supports unit testing of pytest fixtures (the general pattern is sketched below), ensuring the fixtures behave as expected. Finally, a new unit test file covers catalog functionality, specifically the `make_table` fixture, which creates a new managed table with a specified schema and table type.
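
  The general unit-testing pattern, sketched with a mocked client; this illustrates the approach, not the project's actual helpers:

  ```python
  from unittest.mock import create_autospec

  from databricks.sdk import WorkspaceClient
  from databricks.sdk.service.iam import User

  def test_fixture_logic_without_a_workspace():
      ws = create_autospec(WorkspaceClient)  # spec'd mock, so no real API calls
      ws.current_user.me.return_value = User(user_name="[email protected]")
      # drive the fixture code under test with the mock, then assert on the calls it made
      assert ws.current_user.me().user_name == "[email protected]"
  ```
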
* Bump unit testing coverage. This commit increases unit test coverage and improves the overall code quality of the library. New fixtures `sql_backend`, `sql_exec`, and `sql_fetch_all` test SQL-related functionality on the Databricks platform and are demonstrated in the newly added `random_string` test case (see the sketch below). A new `exclude_also` entry under the `[tool.mypy]` section of pyproject.toml gives more precise control over the lines checked during mypy type checking. The environment.py file has been removed, along with several SQL backend and test-resource purge-time fixtures, resulting in increased unit test coverage. The `catalog.py` and `compute.py` files in the `databricks/labs/pytester/fixtures` directory now manage resources more carefully and clean up properly after tests. `permissions.py` drops the `sql/` prefix from permission paths for dashboards, alerts, and queries, simplifying the permission hierarchy in tests. `plugin.py` reorganizes SQL and environment-related functions to be more modular and maintainable. Finally, new utility fixtures `watchdog_remove_after` and `watchdog_purge_suffix` in `watchdog.py` manage and purge test objects as needed, and a new `.env` file in the `tests/unit/fixtures/` directory provides consistent testing conditions. Together these changes improve the testing environment and overall project quality.
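
  A sketch of the `sql_backend` fixture in a test, assuming the lsql-style `execute`/`fetch` backend interface and the `make_schema` keyword documented elsewhere in the README:

  ```python
  def test_round_trip_through_warehouse(sql_backend, make_schema):
      schema = make_schema(catalog_name="hive_metastore")  # dropped after the test
      sql_backend.execute(f"CREATE TABLE {schema.full_name}.numbers AS SELECT 1 AS n")
      rows = list(sql_backend.fetch(f"SELECT n FROM {schema.full_name}.numbers"))
      assert rows[0].n == 1
  ```
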
* Prettify fixture documentation ([#35](#35)). The README documentation for the `ws` fixture in the Databricks testing project has been significantly expanded, now covering its purpose, a usage example, and the fixtures it is built on top of. The `Fixture` class in the gen-readme.py script has been reworked for readability and clarity, and the `make_random` fixture in baseline.py has been refactored with clearer documentation, updated usage examples (see the sketch below), and the removal of a deprecated `Returns` section. Together these changes make the documentation clearer and easier to apply.
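
  A short sketch of `make_random` in a test (the `dummy_` prefix is illustrative):

  ```python
  def test_uses_a_random_suffix(make_random):
      # make_random(k) returns a random string of length k
      name = f"dummy_{make_random(4)}"
      assert len(name) == len("dummy_") + 4
  ```
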
* Updated README.md. The README now documents several PyTest fixtures for testing functionality within the Databricks workspace: `make_warehouse_permissions`, `make_lakeview_dashboard_permissions`, `log_workspace_link`, `make_dashboard_permissions`, `make_alert_permissions`, `make_query_permissions`, `make_experiment_permissions`, `make_registered_model_permissions`, `make_serving_endpoint_permissions`, and `make_feature_table_permissions`, plus `make_authorization_permissions` for testing authorization functionality (a representative permission-fixture sketch follows below). To aid debugging, the `Logging` section now covers the `debug_env_name` and `debug_env` fixtures, and the `workspace_library` fixture has been added for testing library-related functionality in the workspace. These changes enable more comprehensive testing within the Databricks workspace.
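
  A representative sketch of this permission-fixture family, mirroring the documented pattern (exact keywords may vary by fixture):

  ```python
  from databricks.sdk.service.iam import PermissionLevel

  def test_notebook_permissions(make_notebook, make_group, make_notebook_permissions):
      group = make_group()
      notebook = make_notebook()
      make_notebook_permissions(
          object_id=notebook,
          permission_level=PermissionLevel.CAN_RUN,
          group_name=group.display_name,
      )
  ```
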
* Updated pytest requirement from ~=8.1.0 to ~=8.3.3 ([#31](#31)). In this pull request, we update the pytest requirement from version 8.1.0 to 8.3.3 in our pyproject.toml file. This update includes several bug fixes and improvements for our testing framework, such as avoiding the calling of properties during fixture discovery, fixing the issue of not displaying assertion failure differences with the `--import-mode=importlib` option in pytest 8.1 and above, and addressing a regression that caused mypy to fail. Additionally, we fix typing compatibility with Python 3.9 or less by replacing `typing.Self` with `typing_extensions.Self`. This update also ensures consistent path handling across environments by fixing an issue with backslashes being incorrectly converted in nodeid paths on Windows.

Dependency updates:

 * Updated pytest requirement from ~=8.1.0 to ~=8.3.3 ([#31](#31)).

This PR breaks backwards compatibility for databrickslabs/lsql downstream. See build logs for more details.

Running from downstreams #5


This PR breaks backwards compatibility for databrickslabs/blueprint downstream. See build logs for more details.

Running from downstreams #5


❌ 26/27 passed, 1 failed, 3 skipped, 2m7s total

❌ test_permissions_for_redash: AttributeError: 'str' object has no attribute 'value' (5.332s)
AttributeError: 'str' object has no attribute 'value'
[gw8] linux -- Python 3.10.14 /home/runner/work/pytester/pytester/.venv/bin/python
20:35 INFO [databricks.labs.pytester.fixtures.baseline] Created [email protected]: https://DATABRICKS_HOST/#settings/workspace/identity-and-access/users/4818718868036841
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] added workspace user fixture: User(active=True, display_name='[email protected]', emails=[ComplexValue(display=None, primary=True, ref=None, type='work', value='[email protected]')], entitlements=[], external_id=None, groups=[], id='4818718868036841', name=Name(family_name=None, given_name='[email protected]'), roles=[], schemas=[<UserSchema.URN_IETF_PARAMS_SCIM_SCHEMAS_CORE_2_0_USER: 'urn:ietf:params:scim:schemas:core:2.0:User'>, <UserSchema.URN_IETF_PARAMS_SCIM_SCHEMAS_EXTENSION_WORKSPACE_2_0_USER: 'urn:ietf:params:scim:schemas:extension:workspace:2.0:User'>], user_name='[email protected]')
20:35 INFO [databricks.labs.pytester.fixtures.baseline] Created hive_metastore.dummy_syyan schema: https://DATABRICKS_HOST/#explore/data/hive_metastore/dummy_syyan
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.dummy_syyan', metastore_id=None, name='dummy_syyan', owner=None, properties=None, schema_id=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
20:35 INFO [databricks.labs.pytester.fixtures.baseline] Created hive_metastore.dummy_syyan.ucx_tsj2l schema: https://DATABRICKS_HOST/#explore/data/hive_metastore/dummy_syyan/ucx_tsj2l
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=<DataSourceFormat.DELTA: 'DELTA'>, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.dummy_syyan.ucx_tsj2l', metastore_id=None, name='ucx_tsj2l', owner=None, pipeline_id=None, properties={'RemoveAfter': '2024091722'}, row_filter=None, schema_name='dummy_syyan', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/dummy_syyan/ucx_tsj2l', table_constraints=None, table_id=None, table_type=<TableType.MANAGED: 'MANAGED'>, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
20:35 INFO [databricks.labs.pytester.fixtures.baseline] Created dummy_query_QKwON_ra78a5304a query: https://DATABRICKS_HOST/#sql/editor/85992f5e-11be-47e7-ba71-9a76232bc903
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] added query fixture: LegacyQuery(can_edit=None, created_at='2024-09-17T20:35:11Z', data_source_id=None, description='TEST QUERY FOR UCX', id='85992f5e-11be-47e7-ba71-9a76232bc903', is_archived=False, is_draft=False, is_favorite=False, is_safe=True, last_modified_by=User(email='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', id=481119220561874, name='labs-account-admin-identity'), last_modified_by_id=None, latest_query_data_id=None, name='dummy_query_QKwON_ra78a5304a', options=QueryOptions(catalog=None, moved_to_trash_at=None, parameters=[], schema=None), parent='folders/4279257340449065', permission_tier=None, query='SELECT * FROM hive_metastore.dummy_syyan.ucx_tsj2l', query_hash=None, run_as_role=<RunAsRole.OWNER: 'owner'>, tags=['original_query_tag'], updated_at='2024-09-17T20:35:12Z', user=User(email='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', id=481119220561874, name='labs-account-admin-identity'), user_id=481119220561874, visualizations=[LegacyVisualization(created_at='2024-09-17T20:35:11Z', description='', id='67c1e7b2-6df4-4807-bca0-21baddeb7144', name='Results', options={'version': 2}, query=None, type='TABLE', updated_at='2024-09-17T20:35:11Z')])
20:35 INFO [databricks.labs.pytester.fixtures.baseline] Created [email protected]: https://DATABRICKS_HOST/#settings/workspace/identity-and-access/users/4818718868036841
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] added workspace user fixture: User(active=True, display_name='[email protected]', emails=[ComplexValue(display=None, primary=True, ref=None, type='work', value='[email protected]')], entitlements=[], external_id=None, groups=[], id='4818718868036841', name=Name(family_name=None, given_name='[email protected]'), roles=[], schemas=[<UserSchema.URN_IETF_PARAMS_SCIM_SCHEMAS_CORE_2_0_USER: 'urn:ietf:params:scim:schemas:core:2.0:User'>, <UserSchema.URN_IETF_PARAMS_SCIM_SCHEMAS_EXTENSION_WORKSPACE_2_0_USER: 'urn:ietf:params:scim:schemas:extension:workspace:2.0:User'>], user_name='[email protected]')
20:35 INFO [databricks.labs.pytester.fixtures.baseline] Created hive_metastore.dummy_syyan schema: https://DATABRICKS_HOST/#explore/data/hive_metastore/dummy_syyan
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.dummy_syyan', metastore_id=None, name='dummy_syyan', owner=None, properties=None, schema_id=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
20:35 INFO [databricks.labs.pytester.fixtures.baseline] Created hive_metastore.dummy_syyan.ucx_tsj2l schema: https://DATABRICKS_HOST/#explore/data/hive_metastore/dummy_syyan/ucx_tsj2l
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=<DataSourceFormat.DELTA: 'DELTA'>, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.dummy_syyan.ucx_tsj2l', metastore_id=None, name='ucx_tsj2l', owner=None, pipeline_id=None, properties={'RemoveAfter': '2024091722'}, row_filter=None, schema_name='dummy_syyan', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/dummy_syyan/ucx_tsj2l', table_constraints=None, table_id=None, table_type=<TableType.MANAGED: 'MANAGED'>, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
20:35 INFO [databricks.labs.pytester.fixtures.baseline] Created dummy_query_QKwON_ra78a5304a query: https://DATABRICKS_HOST/#sql/editor/85992f5e-11be-47e7-ba71-9a76232bc903
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] added query fixture: LegacyQuery(can_edit=None, created_at='2024-09-17T20:35:11Z', data_source_id=None, description='TEST QUERY FOR UCX', id='85992f5e-11be-47e7-ba71-9a76232bc903', is_archived=False, is_draft=False, is_favorite=False, is_safe=True, last_modified_by=User(email='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', id=481119220561874, name='labs-account-admin-identity'), last_modified_by_id=None, latest_query_data_id=None, name='dummy_query_QKwON_ra78a5304a', options=QueryOptions(catalog=None, moved_to_trash_at=None, parameters=[], schema=None), parent='folders/4279257340449065', permission_tier=None, query='SELECT * FROM hive_metastore.dummy_syyan.ucx_tsj2l', query_hash=None, run_as_role=<RunAsRole.OWNER: 'owner'>, tags=['original_query_tag'], updated_at='2024-09-17T20:35:12Z', user=User(email='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', id=481119220561874, name='labs-account-admin-identity'), user_id=481119220561874, visualizations=[LegacyVisualization(created_at='2024-09-17T20:35:11Z', description='', id='67c1e7b2-6df4-4807-bca0-21baddeb7144', name='Results', options={'version': 2}, query=None, type='TABLE', updated_at='2024-09-17T20:35:11Z')])
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 0 query permissions fixtures
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 1 query fixtures
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] removing query fixture: LegacyQuery(can_edit=None, created_at='2024-09-17T20:35:11Z', data_source_id=None, description='TEST QUERY FOR UCX', id='85992f5e-11be-47e7-ba71-9a76232bc903', is_archived=False, is_draft=False, is_favorite=False, is_safe=True, last_modified_by=User(email='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', id=481119220561874, name='labs-account-admin-identity'), last_modified_by_id=None, latest_query_data_id=None, name='dummy_query_QKwON_ra78a5304a', options=QueryOptions(catalog=None, moved_to_trash_at=None, parameters=[], schema=None), parent='folders/4279257340449065', permission_tier=None, query='SELECT * FROM hive_metastore.dummy_syyan.ucx_tsj2l', query_hash=None, run_as_role=<RunAsRole.OWNER: 'owner'>, tags=['original_query_tag'], updated_at='2024-09-17T20:35:12Z', user=User(email='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', id=481119220561874, name='labs-account-admin-identity'), user_id=481119220561874, visualizations=[LegacyVisualization(created_at='2024-09-17T20:35:11Z', description='', id='67c1e7b2-6df4-4807-bca0-21baddeb7144', name='Results', options={'version': 2}, query=None, type='TABLE', updated_at='2024-09-17T20:35:11Z')])
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 1 table fixtures
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=<DataSourceFormat.DELTA: 'DELTA'>, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.dummy_syyan.ucx_tsj2l', metastore_id=None, name='ucx_tsj2l', owner=None, pipeline_id=None, properties={'RemoveAfter': '2024091722'}, row_filter=None, schema_name='dummy_syyan', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/dummy_syyan/ucx_tsj2l', table_constraints=None, table_id=None, table_type=<TableType.MANAGED: 'MANAGED'>, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 1 schema fixtures
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.dummy_syyan', metastore_id=None, name='dummy_syyan', owner=None, properties=None, schema_id=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] clearing 1 workspace user fixtures
20:35 DEBUG [databricks.labs.pytester.fixtures.baseline] removing workspace user fixture: User(active=True, display_name='[email protected]', emails=[ComplexValue(display=None, primary=True, ref=None, type='work', value='[email protected]')], entitlements=[], external_id=None, groups=[], id='4818718868036841', name=Name(family_name=None, given_name='[email protected]'), roles=[], schemas=[<UserSchema.URN_IETF_PARAMS_SCIM_SCHEMAS_CORE_2_0_USER: 'urn:ietf:params:scim:schemas:core:2.0:User'>, <UserSchema.URN_IETF_PARAMS_SCIM_SCHEMAS_EXTENSION_WORKSPACE_2_0_USER: 'urn:ietf:params:scim:schemas:extension:workspace:2.0:User'>], user_name='[email protected]')
[gw8] linux -- Python 3.10.14 /home/runner/work/pytester/pytester/.venv/bin/python

Running from acceptance #45

@nfx nfx merged commit 47a00ec into main Sep 17, 2024
6 of 10 checks passed
@nfx nfx deleted the prepare/0.1.0 branch September 17, 2024 20:36