This blueprint shows how to use the Cloud Asset Inventory export to BigQuery feature to keep track of your project-wide assets over time, storing the information in BigQuery.
The data stored in BigQuery can then be used for different purposes:
- dashboarding
- analysis
The blueprint uses export resources at the project level for ease of testing; in actual use a few changes are needed to operate at the resource hierarchy level:
- the export should be set at the folder or organization level
- the `roles/cloudasset.viewer` role on the service account should be set at the folder or organization level
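As an illustration, the folder-level grant could be sketched with `gcloud` as follows (the folder id and service account email are placeholders; substitute the service account created in your deployment):

```bash
# Hypothetical folder id and service account email: replace with your own.
gcloud resource-manager folders add-iam-policy-binding 1234567890 \
  --member="serviceAccount:asset-inventory@my-project-id.iam.gserviceaccount.com" \
  --role="roles/cloudasset.viewer"
```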
The resources created in this blueprint are shown in the high level diagram below:
Ensure that you grant your account one of the following roles on your project, folder, or organization:
- Cloud Asset Viewer role (`roles/cloudasset.viewer`)
- Owner primitive role (`roles/owner`)
Clone this repository, specify your variables in a `terraform.tfvars` file, and then go through the following steps to create resources:
```bash
terraform init
terraform apply
```
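As a sketch, a minimal `terraform.tfvars` could look like the following; the values are placeholders, and the fields of the `cai_config` object are deliberately left out here since its exact schema is defined in the blueprint's variable definitions:

```hcl
# Hypothetical values: replace with your own project id.
project_id     = "my-project-id"
project_create = false

# Required variable; fill in according to the object({…}) schema
# defined in the blueprint's variables file.
cai_config = {
  # ...
}
```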
Once done testing, you can clean up resources by running `terraform destroy`. To persist state, check out the `backend.tf.sample` file.
Once resources are created, you can run queries on the data you exported to BigQuery. Here you can find some examples of queries you can run.
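For instance, a simple aggregation over the export table could count assets by type; the project, dataset, and table names below are placeholders for your export destination, and `asset_type` is a standard column of the Cloud Asset Inventory BigQuery export schema:

```sql
-- Hypothetical dataset and table names: replace with your export destination.
SELECT asset_type, COUNT(*) AS asset_count
FROM `my-project-id.my_dataset.assets`
GROUP BY asset_type
ORDER BY asset_count DESC;
```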
You can also create a dashboard by connecting Datalab or any other BI tool of your choice to your BigQuery dataset.
This is an optional part.
Regular file-based exports of data from Cloud Asset Inventory may be useful, e.g., for scale-out network dependency discovery tools like Planet Exporter, or to update legacy workload tracking or configuration management systems. BigQuery supports multiple export formats, and objects can be uploaded to a Storage Bucket using the provided Cloud Function. Specify `job.DestinationFormat` as defined in the documentation, e.g. `NEWLINE_DELIMITED_JSON`.
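For a one-off export in the same format, the equivalent `bq` CLI invocation would look like this (the table and bucket names are placeholders):

```bash
# Hypothetical table and bucket: NEWLINE_DELIMITED_JSON matches the
# destination format discussed above.
bq extract --destination_format=NEWLINE_DELIMITED_JSON \
  my_dataset.assets gs://my-bucket/assets-export.json
```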
It helps to create a custom scheduled query from the CAI export tables and to write the results out to a dedicated table (with overwrites). Define such a query's output columns to comply with the field requirements of downstream systems, and time the query execution after the CAI export into BigQuery for freshness. See the sample queries.
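A sketch of such a query, selecting only the fields a downstream system might need, could look like this; the table names are placeholders, and `name`, `asset_type`, and `update_time` are standard columns of the CAI BigQuery export schema:

```sql
-- Hypothetical names: schedule this query to run after the CAI export
-- into BigQuery, and configure it to overwrite the destination table.
SELECT
  name,
  asset_type,
  update_time
FROM `my-project-id.my_dataset.assets`;
```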
This is an optional part, created if `cai_gcs_export` is set to `true`. The high level diagram extends to the following:
| name | description | type | required | default |
|---|---|---|---|---|
| cai_config | Cloud Asset Inventory export config. | object({…}) | ✓ | |
| project_id | Project id that references existing project. | string | ✓ | |
| billing_account | Billing account id used as default for new projects. | string | | null |
| bundle_path | Path used to write the intermediate Cloud Function code bundle. | string | | "./bundle.zip" |
| bundle_path_cffile | Path used to write the intermediate Cloud Function code bundle. | string | | "./bundle_cffile.zip" |
| cai_gcs_export | Enable optional part to export tables to GCS. | bool | | false |
| file_config | Optional BQ table as a file export function config. | object({…}) | | {…} |
| location | App Engine location used in the example. | string | | "europe-west" |
| name | Arbitrary string used to name created resources. | string | | "asset-inventory" |
| name_cffile | Arbitrary string used to name created resources. | string | | "cffile-exporter" |
| project_create | Create project instead of using an existing one. | bool | | true |
| region | Compute region used in the example. | string | | "europe-west1" |
| root_node | The resource name of the parent folder or organization for project creation, in 'folders/folder_id' or 'organizations/org_id' format. | string | | null |
| name | description | sensitive |
|---|---|---|
| bq-dataset | BigQuery instance details. | |
| cloud-function | Cloud Function instance details. | |