diff --git a/README.md b/README.md index eccf2bd..d89f924 100644 --- a/README.md +++ b/README.md @@ -376,7 +376,8 @@ If your script has dependent files, you can make them available to your script by: * Building a private Docker image with the dependent files and publishing the - image to a public site, or privately to Google Container Registry + image to a public site, or privately to Google Container Registry or + Artifact Registry * Uploading the files to Google Cloud Storage To upload the files to Google Cloud Storage, you can use the @@ -465,8 +466,9 @@ local directory in a similar fashion to support your local development. ##### Mounting a Google Cloud Storage bucket -To have the `google-v2` or `google-cls-v2` provider mount a Cloud Storage bucket using -Cloud Storage FUSE, use the `--mount` command line flag: +To have the `google-v2` or `google-cls-v2` provider mount a Cloud Storage bucket +using [Cloud Storage FUSE](https://cloud.google.com/storage/docs/gcs-fuse), +use the `--mount` command line flag: --mount RESOURCES=gs://mybucket diff --git a/docs/code.md b/docs/code.md index 1ba14ef..6debce6 100644 --- a/docs/code.md +++ b/docs/code.md @@ -111,9 +111,10 @@ sites such as [Docker Hub](https://hub.docker.com/). Images can be pulled from Docker Hub or any container registry: ``` ---image debian:jessie # pull image implicitly from Docker hub. ---image gcr.io/PROJECT/IMAGE # pull from GCR registry. ---image quay.io/quay/ubuntu # pull from Quay.io. +--image debian:jessie # pull image implicitly from Docker Hub +--image gcr.io/PROJECT/IMAGE # pull from Google Container Registry +--image us-central1-docker.pkg.dev/PROJECT/REPO/IMAGE # pull from Artifact Registry +--image quay.io/quay/ubuntu # pull from Quay.io ``` When you have more than a single custom script to run or you have dependent @@ -123,8 +124,9 @@ store it in a container registry. 
A quick way to start using custom Docker images is to use Google Container Builder which will build an image remotely and store it in the [Google Container -Registry](https://cloud.google.com/container-registry/docs/). Alternatively you -can build a Docker image locally and push it to a registry. See the +Registry](https://cloud.google.com/container-registry/docs) +or [Artifact Registry](https://cloud.google.com/artifact-registry/docs). +Alternatively, you can build a Docker image locally and push it to a registry. See the [FastQC example](../examples/fastqc) for a demonstration of both strategies. For information on building Docker images, see the Docker documentation: diff --git a/docs/compute_resources.md b/docs/compute_resources.md index 1e8a9c4..a91c3b6 100644 --- a/docs/compute_resources.md +++ b/docs/compute_resources.md @@ -82,8 +82,8 @@ A Compute Engine VM by default has both a public (external) IP address and a private (internal) IP address. For batch processing, it is often the case that no public IP address is necessary. If your job only accesses Google services, such as Cloud Storage (inputs, outputs, and logging) and Google Container -Registry (your Docker image), then you can run your `dsub` job on VMs without a -public IP address. +Registry or Artifact Registry (your Docker image), then you can run your `dsub` +job on VMs without a public IP address. For more information on Compute Engine IP addresses, see: @@ -132,7 +132,9 @@ was assigned.** The default `--image` used for `dsub` tasks is `ubuntu:14.04` which is pulled from Dockerhub. For VMs that do not have a public IP address, set the `--image` flag to a Docker image hosted by -[Google Container Registry](https://cloud.google.com/container-registry/docs). +[Google Container Registry](https://cloud.google.com/container-registry/docs) or +[Artifact Registry](https://cloud.google.com/artifact-registry/docs). 
+ Google provides a set of [Managed Base Images](https://cloud.google.com/container-registry/docs/managed-base-images) in Container Registry that can be used as simple replacements for your tasks. diff --git a/docs/input_output.md b/docs/input_output.md index 9c88711..0b241fa 100644 --- a/docs/input_output.md +++ b/docs/input_output.md @@ -256,15 +256,28 @@ the name of the input parameter must comply with the ## Requester Pays -To access a Google Cloud Storage -[Requester Pays bucket](https://cloud.google.com/storage/docs/requester-pays), -you will need to specify a billing project. To do so, use the `dsub` -command-line option `--user-project`: +Unless specifically enabled, a Google Cloud Storage bucket is "owner pays" +for all requests. This includes +[network charges](https://cloud.google.com/vpc/network-pricing) for egress +(data downloads or copies to a different cloud region), as well as +[retrieval charges](https://cloud.google.com/storage/pricing#retrieval-pricing) +on files in "cold" storage classes, such as Nearline, Coldline, and Archive. + +When [Requester Pays](https://cloud.google.com/storage/docs/requester-pays) +is enabled on a bucket, the requester must specify a Cloud project to which +charges can be billed. Use the `dsub` command-line option `--user-project`: ``` --user-project my-cloud-project ``` +The user project specified will be passed for all GCS interactions, including: + +- Logging +- Localization (inputs) +- Delocalization (outputs) +- Mounting (Cloud Storage FUSE) + ## Unsupported path formats: * GCS recursive wildcards (**) are not supported diff --git a/dsub/_dsub_version.py b/dsub/_dsub_version.py index 3d88a1c..3feda53 100644 --- a/dsub/_dsub_version.py +++ b/dsub/_dsub_version.py @@ -26,4 +26,4 @@ 0.1.3.dev0 -> 0.1.3 -> 0.1.4.dev0 -> ... 
""" -DSUB_VERSION = '0.4.9' +DSUB_VERSION = '0.4.10' diff --git a/dsub/commands/dsub.py b/dsub/commands/dsub.py index c8d69f7..4e282e6 100644 --- a/dsub/commands/dsub.py +++ b/dsub/commands/dsub.py @@ -180,20 +180,26 @@ def get_credentials(args): def _check_private_address(args): - """If --use-private-address is enabled, ensure the Docker path is for GCR.""" + """If --use-private-address is enabled, Docker path must be for GCR or AR.""" if args.use_private_address: image = args.image or DEFAULT_IMAGE split = image.split('/', 1) - if len(split) == 1 or not split[0].endswith('gcr.io'): + if len(split) == 1 or not ( + split[0].endswith('gcr.io') or split[0].endswith('pkg.dev') + ): raise ValueError( - '--use-private-address must specify a --image with a gcr.io host') + '--use-private-address must specify a --image with a gcr.io or' + ' pkg.dev host' + ) def _check_nvidia_driver_version(args): """If --nvidia-driver-version is set, warn that it is ignored.""" if args.nvidia_driver_version: - print('***WARNING: The --nvidia-driver-version flag is deprecated and will ' - 'be ignored.') + print( + '***WARNING: The --nvidia-driver-version flag is deprecated and will ' + 'be ignored.' + ) def _google_cls_v2_parse_arguments(args): @@ -360,8 +366,10 @@ def _parse_arguments(prog, argv): parser.add_argument( '--user-project', help="""Specify a user project to be billed for all requests to Google - Cloud Storage (logging, localization, delocalization). This flag exists - to support accessing Requester Pays buckets (default: None)""") + Cloud Storage (logging, localization, delocalization, mounting). 
+ This flag exists to support accessing Requester Pays buckets + (default: None)""", + ) parser.add_argument( '--mount', nargs='*', diff --git a/dsub/providers/google_v2_base.py b/dsub/providers/google_v2_base.py index 940fc8b..b02506d 100644 --- a/dsub/providers/google_v2_base.py +++ b/dsub/providers/google_v2_base.py @@ -296,12 +296,24 @@ def _get_logging_env(self, logging_uri, user_project): 'USER_PROJECT': user_project, } - def _get_mount_actions(self, mounts, mnt_datadisk): + def _get_mount_actions(self, mounts, mnt_datadisk, user_project): """Returns a list of two actions per gcs bucket to mount.""" actions_to_add = [] for mount in mounts: bucket = mount.value[len('gs://'):] mount_path = mount.docker_path + + mount_command = ( + ['--billing-project', user_project] if user_project else [] + ) + mount_command.extend([ + '--implicit-dirs', + '--foreground', + '-o ro', + bucket, + os.path.join(_DATA_MOUNT_POINT, mount_path), + ]) + actions_to_add.extend([ google_v2_pipelines.build_action( name='mount-{}'.format(bucket), @@ -309,17 +321,18 @@ def _get_mount_actions(self, mounts, mnt_datadisk): run_in_background=True, image_uri=_GCSFUSE_IMAGE, mounts=[mnt_datadisk], - commands=[ - '--implicit-dirs', '--foreground', '-o ro', bucket, - os.path.join(_DATA_MOUNT_POINT, mount_path) - ]), + commands=mount_command, + ), google_v2_pipelines.build_action( name='mount-wait-{}'.format(bucket), enable_fuse=True, image_uri=_GCSFUSE_IMAGE, mounts=[mnt_datadisk], - commands=['wait', - os.path.join(_DATA_MOUNT_POINT, mount_path)]) + commands=[ + 'wait', + os.path.join(_DATA_MOUNT_POINT, mount_path), + ], + ), ]) return actions_to_add @@ -418,7 +431,9 @@ def _build_pipeline_request(self, task_view): if job_resources.ssh: optional_actions += 1 - mount_actions = self._get_mount_actions(gcs_mounts, mnt_datadisk) + mount_actions = self._get_mount_actions( + gcs_mounts, mnt_datadisk, user_project + ) optional_actions += len(mount_actions) user_action = 4 + optional_actions diff --git 
a/dsub/providers/local/runner.sh b/dsub/providers/local/runner.sh index 138ede4..83e1e27 100644 --- a/dsub/providers/local/runner.sh +++ b/dsub/providers/local/runner.sh @@ -153,7 +153,9 @@ function configure_docker_if_necessary() { # Check that the prefix is gcr.io or .gcr.io if [[ "${prefix}" == "gcr.io" ]] || - [[ "${prefix}" == *.gcr.io ]]; then + [[ "${prefix}" == *.gcr.io ]] || + [[ "${prefix}" == "pkg.dev" ]] || + [[ "${prefix}" == *.pkg.dev ]] ; then log_info "Ensuring docker auth is configured for ${prefix}" gcloud --quiet auth configure-docker "${prefix}" fi diff --git a/setup.py b/setup.py index 872ba7a..5c5cf39 100644 --- a/setup.py +++ b/setup.py @@ -14,28 +14,28 @@ # dependencies for dsub, ddel, dstat # Pin to known working versions to prevent episodic breakage from library # version mismatches. - # This version list generated: 04/13/2023 + # This version list generated: 12/07/2023 # direct dependencies - 'google-api-python-client>=2.47.0,<=2.85.0', - 'google-auth>=2.6.6,<=2.17.3', - 'google-cloud-batch==0.10.0', + 'google-api-python-client>=2.47.0,<=2.109.0', + 'google-auth>=2.6.6,<=2.25.1', + 'google-cloud-batch==0.17.5', 'python-dateutil<=2.8.2', 'pytz<=2023.3', - 'pyyaml<=6.0', - 'tenacity<=8.2.2', + 'pyyaml<=6.0.1', + 'tenacity<=8.2.3', 'tabulate<=0.9.0', # downstream dependencies 'funcsigs==1.0.2', - 'google-api-core>=2.7.3,<=2.11.0', - 'google-auth-httplib2<=0.1.0', + 'google-api-core>=2.7.3,<=2.15.0', + 'google-auth-httplib2<=0.1.1', 'httplib2<=0.22.0', - 'pyasn1<=0.4.8', - 'pyasn1-modules<=0.2.8', + 'pyasn1<=0.5.1', + 'pyasn1-modules<=0.3.0', 'rsa<=4.9', 'uritemplate<=4.1.1', # dependencies for test code - 'parameterized<=0.8.1', - 'mock<=4.0.3', + 'parameterized<=0.9.0', + 'mock<=5.1.0', ] diff --git a/test/integration/e2e_dstat.sh b/test/integration/e2e_dstat.sh index f11478c..0cb1457 100755 --- a/test/integration/e2e_dstat.sh +++ b/test/integration/e2e_dstat.sh @@ -35,9 +35,9 @@ function verify_dstat_output() { # Verify that that the jobs 
are found and are in the expected order. # dstat sort ordering is by create-time (descending), so job 0 here should be the last started. - local first_job_name="$(python "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[0].job-name")" - local second_job_name="$(python "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[1].job-name")" - local third_job_name="$(python "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[2].job-name")" + local first_job_name="$(python3 "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[0].job-name")" + local second_job_name="$(python3 "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[1].job-name")" + local third_job_name="$(python3 "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[2].job-name")" if [[ "${first_job_name}" != "${RUNNING_JOB_NAME_2}" ]]; then 1>&2 echo "Job ${RUNNING_JOB_NAME_2} not found in the correct location in the dstat output! " @@ -87,8 +87,8 @@ function verify_dstat_google_provider_fields() { for (( task=0; task < 3; task++ )); do # Run the provider test. - local job_name="$(python "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[${task}].job-name")" - local job_provider="$(python "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[${task}].provider")" + local job_name="$(python3 "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[${task}].job-name")" + local job_provider="$(python3 "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[${task}].provider")" # Validate provider. if [[ "${job_provider}" != "${DSUB_PROVIDER}" ]]; then @@ -99,7 +99,7 @@ function verify_dstat_google_provider_fields() { # For google-cls-v2, validate that the correct "location" was used for the request. 
if [[ "${DSUB_PROVIDER}" == "google-cls-v2" ]]; then - local op_name="$(python "${SCRIPT_DIR}"/get_data_value.py "yaml" "${DSTAT_OUTPUT}" "[0].internal-id")" + local op_name="$(python3 "${SCRIPT_DIR}"/get_data_value.py "yaml" "${DSTAT_OUTPUT}" "[0].internal-id")" # The operation name format is projects//locations//operations/ local op_location="$(echo -n "${op_name}" | awk -F '/' '{ print $4 }')" @@ -131,7 +131,7 @@ function verify_dstat_google_provider_fields() { util::dstat_yaml_assert_boolean_field_equal "${dstat_out}" "[${task}].provider-attributes.preemptible" "false" # Check that instance name is not empty - local instance_name=$(python "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[${task}].provider-attributes.instance-name") + local instance_name=$(python3 "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[${task}].provider-attributes.instance-name") if [[ -z "${instance_name}" ]]; then 1>&2 echo " - FAILURE: Instance ${instance_name} for job ${job_name}, task $((task+1)) is empty." 1>&2 echo "${dstat_out}" @@ -139,7 +139,7 @@ function verify_dstat_google_provider_fields() { fi # Check zone exists and is expected format - local job_zone=$(python "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[${task}].provider-attributes.zone") + local job_zone=$(python3 "${SCRIPT_DIR}"/get_data_value.py "yaml" "${dstat_out}" "[${task}].provider-attributes.zone") if ! [[ "${job_zone}" =~ ^[a-z]{1,4}-[a-z]{2,15}[0-9]-[a-z]$ ]]; then 1>&2 echo " - FAILURE: Zone ${job_zone} for job ${job_name}, task $((task+1)) not valid." 1>&2 echo "${dstat_out}" diff --git a/test/integration/e2e_io_mount_bucket_requester_pays.google-v2.sh b/test/integration/e2e_io_mount_bucket_requester_pays.google-v2.sh new file mode 100755 index 0000000..94c8d5c --- /dev/null +++ b/test/integration/e2e_io_mount_bucket_requester_pays.google-v2.sh @@ -0,0 +1,43 @@ +#!/bin/bash + +# Copyright 2023 Google Inc. All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +set -o errexit +set -o nounset + +# Test gcsfuse abilities. +# +# This test is designed to verify that named GCS bucket (mount) +# command-line parameters work correctly. +# +# The actual operation performed here is to mount to a bucket containing a BAM +# and compute its md5, writing it to .bam.md5. + +readonly SCRIPT_DIR="$(dirname "${0}")" + +# Do standard test setup +source "${SCRIPT_DIR}/test_setup_e2e.sh" + +# Do io setup +source "${SCRIPT_DIR}/io_setup.sh" + +echo "Launching pipeline..." 
+ +JOB_ID="$(io_setup::run_dsub_with_mount "gs://${DSUB_BUCKET_REQUESTER_PAYS}" "true")" +echo "JOB_ID = $JOB_ID" + +# Do validation +io_setup::check_output +io_setup::check_dstat "${JOB_ID}" false "gs://${DSUB_BUCKET_REQUESTER_PAYS}" diff --git a/test/integration/e2e_local_relative_paths.local.sh b/test/integration/e2e_local_relative_paths.local.sh index a020f18..30304fe 100755 --- a/test/integration/e2e_local_relative_paths.local.sh +++ b/test/integration/e2e_local_relative_paths.local.sh @@ -35,9 +35,9 @@ cd "${TEST_LOCAL_ROOT}" # OUTPUT_FILE=outputs/relative/test.txt readonly INPUT_PATH_RELATIVE="$( - python -c "import os; print(os.path.relpath('"${LOCAL_INPUTS}/relative"'));")" + python3 -c "import os; print(os.path.relpath('"${LOCAL_INPUTS}/relative"'));")" readonly OUTPUT_PATH_RELATIVE="$( - python -c "import os; print(os.path.relpath('"${LOCAL_OUTPUTS}/relative"'));")" + python3 -c "import os; print(os.path.relpath('"${LOCAL_OUTPUTS}/relative"'));")" readonly INPUT_TEST_FILE="${INPUT_PATH_RELATIVE}/test.txt" readonly OUTPUT_TEST_FILE="${OUTPUT_PATH_RELATIVE}/test.txt" diff --git a/test/integration/io_setup.sh b/test/integration/io_setup.sh index aaab1f0..79d2a24 100644 --- a/test/integration/io_setup.sh +++ b/test/integration/io_setup.sh @@ -137,7 +137,7 @@ function io_setup::run_dsub_requester_pays() { run_dsub \ --unique-job-id \ ${IMAGE:+--image "${IMAGE}"} \ - --user-project "$PROJECT_ID" \ + --user-project "${PROJECT_ID}" \ --script "${SCRIPT_DIR}/script_io_test.sh" \ --env TASK_ID="task" \ --input INPUT_PATH="${REQUESTER_PAYS_INPUT_BAM_FULL_PATH}" \ @@ -151,10 +151,12 @@ readonly -f io_setup::run_dsub_requester_pays function io_setup::run_dsub_with_mount() { local mount_point="${1}" + local requester_pays="${2:-}" run_dsub \ --unique-job-id \ ${IMAGE:+--image "${IMAGE}"} \ + ${requester_pays:+--user-project "${PROJECT_ID}"} \ --script "${SCRIPT_DIR}/script_io_test.sh" \ --env TASK_ID="task" \ --output OUTPUT_PATH="${OUTPUTS}/task/*.md5" \ diff --git 
a/test/integration/logging_paths_tasks_setup.sh b/test/integration/logging_paths_tasks_setup.sh index 112f975..ce54677 100644 --- a/test/integration/logging_paths_tasks_setup.sh +++ b/test/integration/logging_paths_tasks_setup.sh @@ -70,7 +70,7 @@ function logging_paths_tasks_setup::dstat_get_logging() { --format json) # Tasks are listed in reverse order, so use -${task_id}. - python "${SCRIPT_DIR}"/get_data_value.py \ + python3 "${SCRIPT_DIR}"/get_data_value.py \ "json" "${dstat_out}" "[-${task_id}].logging" } readonly -f logging_paths_tasks_setup::dstat_get_logging diff --git a/test/integration/retries_setup.sh b/test/integration/retries_setup.sh index a1248a9..10808fd 100644 --- a/test/integration/retries_setup.sh +++ b/test/integration/retries_setup.sh @@ -98,7 +98,7 @@ function retries_setup::check_job_attr() { local expected for expected in ${expected_values}; do local result="$( - python "${SCRIPT_DIR}"/get_data_value.py \ + python3 "${SCRIPT_DIR}"/get_data_value.py \ yaml "${dstat_out}" "[${num}].${attr}")" if [[ "${result}" != "${expected}" ]]; then @@ -115,7 +115,7 @@ function retries_setup::check_job_attr() { # Check that there were no extra attempts echo "Checking that there are no unexpected attempts" local -r beyond="$( - python "${SCRIPT_DIR}"/get_data_value.py \ + python3 "${SCRIPT_DIR}"/get_data_value.py \ yaml "${dstat_out}" "[${num}].${attr}")" if [[ -n "${beyond}" ]]; then echo "Unexpected attempt for job ${job_name}" diff --git a/test/integration/test_unit_util.sh b/test/integration/test_unit_util.sh index 4413620..3fe63f8 100644 --- a/test/integration/test_unit_util.sh +++ b/test/integration/test_unit_util.sh @@ -65,7 +65,7 @@ readonly -f test_failed function get_stderr_value() { local value="${1}" - python "${SCRIPT_DIR}"/get_data_value.py \ + python3 "${SCRIPT_DIR}"/get_data_value.py \ "json" "$(<"${TEST_STDERR}")" "${value}" } readonly -f get_stderr_value diff --git a/test/integration/test_util.sh b/test/integration/test_util.sh index 
5626f55..97a4ec1 100644 --- a/test/integration/test_util.sh +++ b/test/integration/test_util.sh @@ -36,7 +36,7 @@ readonly -f util::exit_handler # util::join # -# Bash analog to Python string join() routine. +# Bash analog to Python's string join() routine. # First argument is a delimiter. # Remaining arguments will be joined together, separated by the delimiter. function util::join() { @@ -108,7 +108,7 @@ function util::get_job_status() { return 1 fi - python "${SCRIPT_DIR}"/get_data_value.py \ + python3 "${SCRIPT_DIR}"/get_data_value.py \ "json" "${dstat_out}" "[0].status" } readonly -f util::get_job_status @@ -130,7 +130,7 @@ function util::get_job_status_detail() { return 1 fi - python "${SCRIPT_DIR}"/get_data_value.py \ + python3 "${SCRIPT_DIR}"/get_data_value.py \ "json" "${dstat_out}" "[0].status-detail" } readonly -f util::get_job_status_detail @@ -148,7 +148,7 @@ function util::get_job_logging() { --full \ --format json) - python "${SCRIPT_DIR}"/get_data_value.py \ + python3 "${SCRIPT_DIR}"/get_data_value.py \ "json" "${dstat_out}" "[0].logging" } readonly -f util::get_job_logging @@ -167,7 +167,7 @@ function util::wait_for_canceled_status() { # "operations.cancel" will return success when the operation is internally # marked for deletion and there can be a short delay before it is externally # marked as CANCELED. 
- local max_wait_sec=10 + local max_wait_sec=20 local status echo "Waiting up to ${max_wait_sec} sec for CANCELED status of ${job_id}" @@ -192,7 +192,7 @@ function util::is_valid_dstat_datetime() { local datetime="${1}" # If this fails to parse, it will exit with a non-zero exit code - python -c ' + python3 -c ' import datetime import sys datetime.datetime.strptime(sys.argv[1], "%Y-%m-%d %H:%M:%S.%f") @@ -204,7 +204,7 @@ function util::dstat_yaml_output_value() { local dstat_out="${1}" local field="${2}" - python "${SCRIPT_DIR}"/get_data_value.py \ + python3 "${SCRIPT_DIR}"/get_data_value.py \ "yaml" "${dstat_out}" "${field}" } readonly -f util::dstat_yaml_output_value diff --git a/test/integration/unit_flags.google-v2.sh b/test/integration/unit_flags.google-v2.sh index 8f7ba69..79610c9 100755 --- a/test/integration/unit_flags.google-v2.sh +++ b/test/integration/unit_flags.google-v2.sh @@ -451,13 +451,51 @@ function test_use_private_address_with_public_image() { assert_output_empty assert_err_contains \ - "ValueError: --use-private-address must specify a --image with a gcr.io host" + "ValueError: --use-private-address must specify a --image with a gcr.io or pkg.dev host" test_passed "${subtest}" fi } readonly -f test_use_private_address_with_public_image +function test_use_private_address_with_gcr_io() { + local subtest="${FUNCNAME[0]}" + + if DOCKER_IMAGE_OVERRIDE="marketplace.gcr.io/google/debian9" call_dsub \ + --command 'echo "${TEST_NAME}"' \ + --regions us-central1 \ + --use-private-address; then + + # Check that the output contains expected values + assert_err_value_equals \ + "[0].pipeline.actions.[3].imageUri" "marketplace.gcr.io/google/debian9" + + test_passed "${subtest}" + else + test_failed "${subtest}" + fi +} +readonly -f test_use_private_address_with_gcr_io + +function test_use_private_address_with_pkg_dev() { + local subtest="${FUNCNAME[0]}" + + if DOCKER_IMAGE_OVERRIDE="us-central1-docker.pkg.dev/my-project/my-repo/my-image" call_dsub \ + 
--command 'echo "${TEST_NAME}"' \ + --regions us-central1 \ + --use-private-address; then + + # Check that the output contains expected values + assert_err_value_equals \ + "[0].pipeline.actions.[3].imageUri" "us-central1-docker.pkg.dev/my-project/my-repo/my-image" + + test_passed "${subtest}" + else + test_failed "${subtest}" + fi +} +readonly -f test_use_private_address_with_pkg_dev + function test_cpu_platform() { local subtest="${FUNCNAME[0]}" @@ -969,6 +1007,8 @@ echo test_network test_no_network test_use_private_address_with_public_image +test_use_private_address_with_gcr_io +test_use_private_address_with_pkg_dev echo test_cpu_platform diff --git a/test/integration/unit_version.sh b/test/integration/unit_version.sh index 46764f9..f4b09ec 100755 --- a/test/integration/unit_version.sh +++ b/test/integration/unit_version.sh @@ -27,7 +27,7 @@ readonly SCRIPT_DIR="$(dirname "${0}")" source "${SCRIPT_DIR}/test_setup_unit.sh" readonly VERSION_NUMBER="$( - python -c "from dsub._dsub_version import DSUB_VERSION; print(DSUB_VERSION)")" + python3 -c "from dsub._dsub_version import DSUB_VERSION; print(DSUB_VERSION)")" readonly EXPECTED_STRING="dsub version: ${VERSION_NUMBER}" # Define tests template. diff --git a/test/setup_and_run_tests.sh b/test/setup_and_run_tests.sh index c37ca14..bbb6e4f 100755 --- a/test/setup_and_run_tests.sh +++ b/test/setup_and_run_tests.sh @@ -6,7 +6,7 @@ set -o nounset if [[ "${1:-}" == "--help" ]]; then cat <