diff --git a/.changelog/10090.txt b/.changelog/10090.txt new file mode 100644 index 0000000000..64da68cbdc --- /dev/null +++ b/.changelog/10090.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +redis: added `node_type` and `precise_size_gb` fields to `google_redis_cluster` +``` \ No newline at end of file diff --git a/.changelog/10123.txt b/.changelog/10123.txt new file mode 100644 index 0000000000..f7d6047b7c --- /dev/null +++ b/.changelog/10123.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +container: added `node_pool_auto_config.resource_manager_tags` field to `google_container_cluster` resource +``` \ No newline at end of file diff --git a/.changelog/10126.txt b/.changelog/10126.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10126.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10153.txt b/.changelog/10153.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10153.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10170.txt b/.changelog/10170.txt new file mode 100644 index 0000000000..a7fa1b8840 --- /dev/null +++ b/.changelog/10170.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +bigquery: added in-place column drop support for `google_bigquery_table` +``` \ No newline at end of file diff --git a/.changelog/10172.txt b/.changelog/10172.txt new file mode 100644 index 0000000000..a8876ec4bb --- /dev/null +++ b/.changelog/10172.txt @@ -0,0 +1,3 @@ +```release-note:none +Fixed failing Terraform integration tests +``` \ No newline at end of file diff --git a/.changelog/10209.txt b/.changelog/10209.txt new file mode 100644 index 0000000000..a3153dcc9c --- /dev/null +++ b/.changelog/10209.txt @@ -0,0 +1,4 @@ +```release-note:new-datasource +tags_tag_keys +tags_tag_values +``` \ No newline at end of file diff --git a/.changelog/10216.txt b/.changelog/10216.txt new file mode 100644 index 
0000000000..42b910df15 --- /dev/null +++ b/.changelog/10216.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10217.txt b/.changelog/10217.txt new file mode 100644 index 0000000000..d6c3d07775 --- /dev/null +++ b/.changelog/10217.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +compute: added `params.resource_manager_tags` field to `resource_compute_instance_group_manager` and `resource_compute_region_instance_group_manager`, allowing these resources to be created with tags (beta) +``` \ No newline at end of file diff --git a/.changelog/10220.txt b/.changelog/10220.txt new file mode 100644 index 0000000000..030931ef30 --- /dev/null +++ b/.changelog/10220.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +cloudrunv2: added `template.volumes.nfs` field to `google_cloud_run_v2_job` resource (beta) +``` \ No newline at end of file diff --git a/.changelog/10225.txt b/.changelog/10225.txt new file mode 100644 index 0000000000..aacf286980 --- /dev/null +++ b/.changelog/10225.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +container: added `enable_cilium_clusterwide_network_policy` field to `google_container_cluster` resource +``` \ No newline at end of file diff --git a/.changelog/10231.txt b/.changelog/10231.txt new file mode 100644 index 0000000000..a882d9b14e --- /dev/null +++ b/.changelog/10231.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +google_data_loss_prevention_discovery_config +``` \ No newline at end of file diff --git a/.changelog/10242.txt b/.changelog/10242.txt new file mode 100644 index 0000000000..8b5beddd6e --- /dev/null +++ b/.changelog/10242.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +artifactregistry: added `remote_repository_config._repository.custom_repository.uri` field to `google_artifact_registry_repository` resource. 
+``` \ No newline at end of file diff --git a/.changelog/10246.txt b/.changelog/10246.txt new file mode 100644 index 0000000000..1a83c99b0e --- /dev/null +++ b/.changelog/10246.txt @@ -0,0 +1,6 @@ +```release-note:enhancement +networksecurity: added `disabled` field to `google_network_security_firewall_endpoint_association` resource +``` +```release-note:bug +networksecurity: fixed an issue where `google_network_security_firewall_endpoint_association` resources could not be created due to a bad parameter +``` \ No newline at end of file diff --git a/.changelog/10247.txt b/.changelog/10247.txt new file mode 100644 index 0000000000..38eb64683e --- /dev/null +++ b/.changelog/10247.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +gkebackup: added `backup_schedule.0.rpo_config` field to `google_gke_backup_backup_plan` resource +``` \ No newline at end of file diff --git a/.changelog/10255.txt b/.changelog/10255.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10255.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10261.txt b/.changelog/10261.txt new file mode 100644 index 0000000000..fca8706c58 --- /dev/null +++ b/.changelog/10261.txt @@ -0,0 +1,3 @@ +```release-note:bug +compute: added an explicit `update_encoder` to `ComputeTargetHttpsProxy` and `ComputeRegionTargetHttpsProxy` resources. 
+``` \ No newline at end of file diff --git a/.changelog/10282.txt b/.changelog/10282.txt new file mode 100644 index 0000000000..cf079dcaf6 --- /dev/null +++ b/.changelog/10282.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +firestore: added `vector_config` to `google_firestore_index` resource +``` \ No newline at end of file diff --git a/.changelog/10299.txt b/.changelog/10299.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10299.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10307.txt b/.changelog/10307.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10307.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10309.txt b/.changelog/10309.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10309.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10310.txt b/.changelog/10310.txt new file mode 100644 index 0000000000..d9e5a85141 --- /dev/null +++ b/.changelog/10310.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +gkeonprem: added `disable_bundled_ingress` field to `google_gkeonprem_vmware_cluster` resource +``` \ No newline at end of file diff --git a/.changelog/10312.txt b/.changelog/10312.txt new file mode 100644 index 0000000000..0d3062de2e --- /dev/null +++ b/.changelog/10312.txt @@ -0,0 +1,6 @@ +```release-note:enhancement +storage: added `project_number` attribute to `google_storage_bucket` resource and data source +``` +```release-note:enhancement +storage: added ability to provide `project` argument to `google_storage_bucket` data source. This will not impact reading the resource's data; instead, it helps users avoid calls to the Compute API within the data source. 
+``` \ No newline at end of file diff --git a/.changelog/10314.txt b/.changelog/10314.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10314.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10316.txt b/.changelog/10316.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10316.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10319.txt b/.changelog/10319.txt new file mode 100644 index 0000000000..e2f29e9f95 --- /dev/null +++ b/.changelog/10319.txt @@ -0,0 +1,3 @@ +```release-note:bug +bigquery: fixed a crash when `google_bigquery_table` had a `primary_key.columns` entry set to `""` +``` \ No newline at end of file diff --git a/.changelog/10324.txt b/.changelog/10324.txt new file mode 100644 index 0000000000..36863c990e --- /dev/null +++ b/.changelog/10324.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +filestore: added `protocol` field to `google_filestore_instance` resource to support NFSv3 and NFSv4.1 +``` \ No newline at end of file diff --git a/.changelog/10325.txt b/.changelog/10325.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10325.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10330.txt b/.changelog/10330.txt new file mode 100644 index 0000000000..9e888f2c6d --- /dev/null +++ b/.changelog/10330.txt @@ -0,0 +1,3 @@ +```release-note:bug +appengine: fixed a crash in `google_app_engine_flexible_app_version` due to the `deployment` field not being returned by the API +``` \ No newline at end of file diff --git a/.changelog/10338.txt b/.changelog/10338.txt new file mode 100644 index 0000000000..eb2604dada --- /dev/null +++ b/.changelog/10338.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +compute: added `endpoint_types` field to `google_compute_router_nat` resource +``` \ No newline at end of file 
diff --git a/.changelog/10340.txt b/.changelog/10340.txt new file mode 100644 index 0000000000..045835d981 --- /dev/null +++ b/.changelog/10340.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +vmwareengine: added support for stretch private clouds +``` \ No newline at end of file diff --git a/.changelog/10342.txt b/.changelog/10342.txt new file mode 100644 index 0000000000..0ee344f61c --- /dev/null +++ b/.changelog/10342.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +compute: promoted `google_compute_instance_settings` to GA +``` \ No newline at end of file diff --git a/.changelog/10343.txt b/.changelog/10343.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10343.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10346.txt b/.changelog/10346.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10346.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10347.txt b/.changelog/10347.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10347.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10350.txt b/.changelog/10350.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10350.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10352.txt b/.changelog/10352.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10352.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10353.txt b/.changelog/10353.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10353.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10354.txt b/.changelog/10354.txt new file mode 100644 index 0000000000..5d05893929 --- 
/dev/null +++ b/.changelog/10354.txt @@ -0,0 +1,3 @@ +```release-note:bug +privateca: fixed permission issues when activating a sub-CA in a different region +``` \ No newline at end of file diff --git a/.changelog/10356.txt b/.changelog/10356.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10356.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10365.txt b/.changelog/10365.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10365.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10366.txt b/.changelog/10366.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10366.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10368.txt b/.changelog/10368.txt new file mode 100644 index 0000000000..4d9ec44ada --- /dev/null +++ b/.changelog/10368.txt @@ -0,0 +1,3 @@ +```release-note:bug +dns: fixed a bug where some methods of authentication didn't work when using `dns` data sources +``` \ No newline at end of file diff --git a/.changelog/10370.txt b/.changelog/10370.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10370.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10372.txt b/.changelog/10372.txt new file mode 100644 index 0000000000..43d50230f8 --- /dev/null +++ b/.changelog/10372.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +cloudfunctions2: added `build_config.service_account` field to `google_cloudfunctions2_function` resource +``` \ No newline at end of file diff --git a/.changelog/10375.txt b/.changelog/10375.txt new file mode 100644 index 0000000000..a967ff2713 --- /dev/null +++ b/.changelog/10375.txt @@ -0,0 +1,9 @@ +```release-note:enhancement +compute: added `identifier_range` field to `google_compute_router` resource (beta) +``` 
+```release-note:enhancement +compute: added `ip_version` field to `google_compute_router_interface` resource (beta) +``` +```release-note:enhancement +compute: added `enable_ipv4`, `ipv4_nexthop_address` and `peer_ipv4_nexthop_address` fields to `google_compute_router_peer` resource (beta) +``` \ No newline at end of file diff --git a/.changelog/10376.txt b/.changelog/10376.txt new file mode 100644 index 0000000000..367cbb931f --- /dev/null +++ b/.changelog/10376.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +`google_project_iam_member_remove` +``` \ No newline at end of file diff --git a/.changelog/10379.txt b/.changelog/10379.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10379.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10380.txt b/.changelog/10380.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10380.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10383.txt b/.changelog/10383.txt new file mode 100644 index 0000000000..322ca740b0 --- /dev/null +++ b/.changelog/10383.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +sql: added `enable_google_ml_integration` field to `google_sql_database_instance` resource +``` \ No newline at end of file diff --git a/.changelog/10385.txt b/.changelog/10385.txt new file mode 100644 index 0000000000..15a5e31a77 --- /dev/null +++ b/.changelog/10385.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +compute: increased `google_compute_security_policy` timeouts from 8 minutes to 20 minutes +``` \ No newline at end of file diff --git a/.changelog/10386.txt b/.changelog/10386.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10386.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10387.txt b/.changelog/10387.txt new file mode 100644 index 0000000000..b69d94c0d5 --- 
/dev/null +++ b/.changelog/10387.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +storage: added labels validation to `google_storage_bucket` resource +``` \ No newline at end of file diff --git a/.changelog/10388.txt b/.changelog/10388.txt new file mode 100644 index 0000000000..daed6e3529 --- /dev/null +++ b/.changelog/10388.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +bigquerydatapolicy: added `data_masking_policy.routine` to `google_bigquery_data_policy` +``` \ No newline at end of file diff --git a/.changelog/10389.txt b/.changelog/10389.txt new file mode 100644 index 0000000000..3fec6e872f --- /dev/null +++ b/.changelog/10389.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +composer: fixed validation on `google_composer_environment` resource so it will identify a disallowed upgrade to Composer 3 before attempting to provide feedback that's specific to using Composer 3 +``` \ No newline at end of file diff --git a/.changelog/10390.txt b/.changelog/10390.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10390.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10394.txt b/.changelog/10394.txt new file mode 100644 index 0000000000..27f5c9d032 --- /dev/null +++ b/.changelog/10394.txt @@ -0,0 +1,3 @@ +```release-note:bug +iam: fixed a bug that prevented setting `create_ignore_already_exists` on existing resources in `google_service_account`. 
+``` \ No newline at end of file diff --git a/.changelog/10400.txt b/.changelog/10400.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10400.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10401.txt b/.changelog/10401.txt new file mode 100644 index 0000000000..dde4c0dc51 --- /dev/null +++ b/.changelog/10401.txt @@ -0,0 +1,3 @@ +```release-note:bug +cloudrun: fixed a bug where computed `metadata.0.labels` and `metadata.0.annotations` fields did not appear in the Terraform plan when creating `google_cloud_run_service` and `google_cloud_run_domain_mapping` resources +``` \ No newline at end of file diff --git a/.changelog/10403.txt b/.changelog/10403.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10403.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10404.txt b/.changelog/10404.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10404.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10405.txt b/.changelog/10405.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10405.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10407.txt b/.changelog/10407.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10407.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10408.txt b/.changelog/10408.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10408.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10410.txt b/.changelog/10410.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10410.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at 
end of file diff --git a/.changelog/10411.txt b/.changelog/10411.txt new file mode 100644 index 0000000000..2b8044ab43 --- /dev/null +++ b/.changelog/10411.txt @@ -0,0 +1,3 @@ +```release-note:bug +sql: fixed issues with updating the `enable_google_ml_integration` field in `google_sql_database_instance` resource +``` \ No newline at end of file diff --git a/.changelog/10412.txt b/.changelog/10412.txt new file mode 100644 index 0000000000..001c0b03d7 --- /dev/null +++ b/.changelog/10412.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +apigee: added support for `api_consumer_data_location`, `api_consumer_data_encryption_key_name`, and `control_plane_encryption_key_name` in `google_apigee_organization` +``` \ No newline at end of file diff --git a/.changelog/10417.txt b/.changelog/10417.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10417.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10418.txt b/.changelog/10418.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10418.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10420.txt b/.changelog/10420.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10420.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10426.txt b/.changelog/10426.txt new file mode 100644 index 0000000000..f6e1b298a1 --- /dev/null +++ b/.changelog/10426.txt @@ -0,0 +1,3 @@ +```release-note:bug +storage: added validation to `name` field in `google_storage_bucket` resource +``` \ No newline at end of file diff --git a/.changelog/10427.txt b/.changelog/10427.txt new file mode 100644 index 0000000000..57de563b1b --- /dev/null +++ b/.changelog/10427.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +workstations: added output-only field `control_plane_ip` to `google_workstations_workstation_cluster` 
resource (beta) +``` \ No newline at end of file diff --git a/.changelog/10429.txt b/.changelog/10429.txt new file mode 100644 index 0000000000..5caeeffd68 --- /dev/null +++ b/.changelog/10429.txt @@ -0,0 +1,3 @@ +```release-note:bug +apigee: fixed permadiff in ordering of `google_apigee_organization.properties.property`. +``` \ No newline at end of file diff --git a/.changelog/10430.txt b/.changelog/10430.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10430.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10439.txt b/.changelog/10439.txt new file mode 100644 index 0000000000..11b317edf5 --- /dev/null +++ b/.changelog/10439.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resourcemanager: added the field `api_method` to datasource `google_active_folder` so you can use either `SEARCH` or `LIST` to find your folder +``` \ No newline at end of file diff --git a/.changelog/10440.txt b/.changelog/10440.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10440.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10441.txt b/.changelog/10441.txt new file mode 100644 index 0000000000..b7d173f13a --- /dev/null +++ b/.changelog/10441.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +secretmanager: added `version_destroy_ttl` field to `google_secret_manager_secret` resource +``` \ No newline at end of file diff --git a/.changelog/10443.txt b/.changelog/10443.txt new file mode 100644 index 0000000000..ada1b13b90 --- /dev/null +++ b/.changelog/10443.txt @@ -0,0 +1,3 @@ +```release-note:bug +vmwareengine: fixed stretched cluster creation in `google_vmwareengine_private_cloud` +``` \ No newline at end of file diff --git a/.changelog/10447.txt b/.changelog/10447.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10447.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline 
at end of file diff --git a/.changelog/10448.txt b/.changelog/10448.txt new file mode 100644 index 0000000000..c93b028b65 --- /dev/null +++ b/.changelog/10448.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +firebasehosting: added `path` field to `google_firebase_hosting_version` +``` \ No newline at end of file diff --git a/.changelog/10449.txt b/.changelog/10449.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10449.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10453.txt b/.changelog/10453.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10453.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10455.txt b/.changelog/10455.txt new file mode 100644 index 0000000000..31c114a1d2 --- /dev/null +++ b/.changelog/10455.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +bigquery: added `resource_tags` field to `google_bigquery_table` resource +``` \ No newline at end of file diff --git a/.changelog/10457.txt b/.changelog/10457.txt new file mode 100644 index 0000000000..5fb846bfeb --- /dev/null +++ b/.changelog/10457.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +apigee: added `forward_proxy_uri` field to `google_apigee_environment` +``` \ No newline at end of file diff --git a/.changelog/10459.txt b/.changelog/10459.txt new file mode 100644 index 0000000000..6cf37328de --- /dev/null +++ b/.changelog/10459.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +billing: added `ownership_scope` field to `google_billing_budget` resource +``` \ No newline at end of file diff --git a/.changelog/10464.txt b/.changelog/10464.txt new file mode 100644 index 0000000000..23fddec597 --- /dev/null +++ b/.changelog/10464.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10465.txt b/.changelog/10465.txt new file mode 100644 index 0000000000..47e6d41c54 --- /dev/null 
+++ b/.changelog/10465.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +google_network_connectivity_internal_range +``` \ No newline at end of file diff --git a/.changelog/10466.txt b/.changelog/10466.txt new file mode 100644 index 0000000000..c986d50694 --- /dev/null +++ b/.changelog/10466.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +google_composer_user_workloads_secret +``` \ No newline at end of file diff --git a/.changelog/10469.txt b/.changelog/10469.txt new file mode 100644 index 0000000000..42b910df15 --- /dev/null +++ b/.changelog/10469.txt @@ -0,0 +1,3 @@ +```release-note:none + +``` \ No newline at end of file diff --git a/.changelog/10474.txt b/.changelog/10474.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10474.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10475.txt b/.changelog/10475.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10475.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/10476.txt b/.changelog/10476.txt new file mode 100644 index 0000000000..2a7a071f04 --- /dev/null +++ b/.changelog/10476.txt @@ -0,0 +1,3 @@ +```release-note:bug +appengine: added suppression for a diff in `google_app_engine_standard_app_version.automatic_scaling` when the block is unset in configuration +``` \ No newline at end of file diff --git a/.changelog/10477.txt b/.changelog/10477.txt new file mode 100644 index 0000000000..126505bd3f --- /dev/null +++ b/.changelog/10477.txt @@ -0,0 +1,2 @@ +```release-note:none +``` \ No newline at end of file diff --git a/.changelog/9481.txt b/.changelog/9481.txt new file mode 100644 index 0000000000..0e16df85b1 --- /dev/null +++ b/.changelog/9481.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +container: added field 
`stateful_ha_config` for `resource_container_cluster`. +``` \ No newline at end of file diff --git a/.changelog/9910.txt b/.changelog/9910.txt new file mode 100644 index 0000000000..ea55f917be --- /dev/null +++ b/.changelog/9910.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +`google_parallelstore_instance` +``` \ No newline at end of file diff --git a/.goreleaser.yml b/.goreleaser.yml index f87bc539fb..384d593818 100644 --- a/.goreleaser.yml +++ b/.goreleaser.yml @@ -1,7 +1,7 @@ archives: - files: - # Only include built binary in archive - - 'none*' + # Include built binary and license files in archive + - 'LICENSE' format: zip name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}' builds: diff --git a/CHANGELOG.md b/CHANGELOG.md index c06fd8508d..3835d99f4f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,132 @@ -## 5.23.0 (Unreleased) +## 5.27.0 (Unreleased) + +FEATURES: +* **New Data Source:** `google_storage_bucket_objects` ([#7270](https://github.com/hashicorp/terraform-provider-google-beta/pull/7270)) +* **New Resource:** `google_composer_user_workloads_secret` ([#7257](https://github.com/hashicorp/terraform-provider-google-beta/pull/7257)) +* **New Resource:** `google_compute_security_policy_rule` ([#7282](https://github.com/hashicorp/terraform-provider-google-beta/pull/7282)) +* **New Resource:** `google_data_loss_prevention_discovery_config` ([#7252](https://github.com/hashicorp/terraform-provider-google-beta/pull/7252)) +* **New Resource:** `google_integrations_auth_config` ([#7268](https://github.com/hashicorp/terraform-provider-google-beta/pull/7268)) +* **New Resource:** `google_network_connectivity_internal_range` ([#7265](https://github.com/hashicorp/terraform-provider-google-beta/pull/7265)) + +IMPROVEMENTS: +* alloydb: added `network_config` field to `google_alloydb_instance` resource ([#7271](https://github.com/hashicorp/terraform-provider-google-beta/pull/7271)) +* alloydb: added `public_ip_address` field to 
`google_alloydb_instance` resource ([#7271](https://github.com/hashicorp/terraform-provider-google-beta/pull/7271)) +* apigee: added `forward_proxy_uri` field to `google_apigee_environment` resource ([#7260](https://github.com/hashicorp/terraform-provider-google-beta/pull/7260)) +* bigquerydatapolicy: added `data_masking_policy.routine` field to `google_bigquery_data_policy` resource ([#7250](https://github.com/hashicorp/terraform-provider-google-beta/pull/7250)) +* compute: added `server_tls_policy` field to `google_compute_region_target_https_proxy` resource ([#7280](https://github.com/hashicorp/terraform-provider-google-beta/pull/7280)) +* filestore: added `protocol` field to `google_filestore_instance` resource to support NFSv3 and NFSv4.1 ([#7254](https://github.com/hashicorp/terraform-provider-google-beta/pull/7254)) +* firebasehosting: added `config.rewrites.path` field to `google_firebase_hosting_version` resource ([#7258](https://github.com/hashicorp/terraform-provider-google-beta/pull/7258)) +* logging: added `intercept_children` field to `google_logging_organization_sink` and `google_logging_folder_sink` resources ([#7279](https://github.com/hashicorp/terraform-provider-google-beta/pull/7279)) +* monitoring: added `service_agent_authentication` field to `google_monitoring_uptime_check_config` resource ([#7276](https://github.com/hashicorp/terraform-provider-google-beta/pull/7276)) +* privateca: added `subject_key_id` field to `google_privateca_certificate` and `google_privateca_certificate_authority` resources ([#7273](https://github.com/hashicorp/terraform-provider-google-beta/pull/7273)) +* secretmanager: added `version_destroy_ttl` field to `google_secret_manager_secret` resource ([#7253](https://github.com/hashicorp/terraform-provider-google-beta/pull/7253)) + +BUG FIXES: +* appengine: added suppression for a diff in `google_app_engine_standard_app_version.automatic_scaling` when the block is unset in configuration 
([#7262](https://github.com/hashicorp/terraform-provider-google-beta/pull/7262)) +* sql: fixed issues with updating the `enable_google_ml_integration` field in `google_sql_database_instance` resource ([#7249](https://github.com/hashicorp/terraform-provider-google-beta/pull/7249)) + +## 5.26.0 (Apr 22, 2024) + +FEATURES: +* **New Resource:** `google_project_iam_member_remove` ([#7242](https://github.com/hashicorp/terraform-provider-google-beta/pull/7242)) + +IMPROVEMENTS: +* apigee: added support for `api_consumer_data_location`, `api_consumer_data_encryption_key_name`, and `control_plane_encryption_key_name` in `google_apigee_organization` ([#7245](https://github.com/hashicorp/terraform-provider-google-beta/pull/7245)) +* artifactregistry: added `remote_repository_config._repository.custom_repository.uri` field to `google_artifact_registry_repository` resource. ([#7230](https://github.com/hashicorp/terraform-provider-google-beta/pull/7230)) +* bigquery: added `resource_tags` field to `google_bigquery_table` resource ([#7247](https://github.com/hashicorp/terraform-provider-google-beta/pull/7247)) +* billing: added `ownership_scope` field to `google_billing_budget` resource ([#7239](https://github.com/hashicorp/terraform-provider-google-beta/pull/7239)) +* cloudfunctions2: added `build_config.service_account` field to `google_cloudfunctions2_function` resource ([#7231](https://github.com/hashicorp/terraform-provider-google-beta/pull/7231)) +* composer: fixed validation on `google_composer_environment` resource so it will identify a disallowed upgrade to Composer 3 before attempting to provide feedback that's specific to using Composer 3 ([#7213](https://github.com/hashicorp/terraform-provider-google-beta/pull/7213)) +* compute: added `params.resource_manager_tags` field to `resource_compute_instance_group_manager` and `resource_compute_region_instance_group_manager`, allowing these resources to be created with tags (beta) 
([#7226](https://github.com/hashicorp/terraform-provider-google-beta/pull/7226)) +* resourcemanager: added `api_method` field to the `google_active_folder` data source, allowing either `SEARCH` or `LIST` to be used to find a folder ([#7248](https://github.com/hashicorp/terraform-provider-google-beta/pull/7248)) +* storage: added labels validation to `google_storage_bucket` resource ([#7212](https://github.com/hashicorp/terraform-provider-google-beta/pull/7212)) +* workstations: added output-only field `control_plane_ip` to `google_workstations_workstation_cluster` resource (beta) ([#7240](https://github.com/hashicorp/terraform-provider-google-beta/pull/7240)) + +BUG FIXES: +* apigee: fixed permadiff in ordering of `google_apigee_organization.properties.property` ([#7234](https://github.com/hashicorp/terraform-provider-google-beta/pull/7234)) +* cloudrun: fixed a bug where the computed `metadata.0.labels` and `metadata.0.annotations` fields did not appear in the Terraform plan when creating `google_cloud_run_service` and `google_cloud_run_domain_mapping` resources ([#7217](https://github.com/hashicorp/terraform-provider-google-beta/pull/7217)) +* dns: fixed a bug where some methods of authentication didn't work when using `dns` data sources ([#7233](https://github.com/hashicorp/terraform-provider-google-beta/pull/7233)) +* iam: fixed a bug that prevented setting `create_ignore_already_exists` on existing resources in `google_service_account`.
([#7236](https://github.com/hashicorp/terraform-provider-google-beta/pull/7236)) +* sql: fixed issues with updating the `enable_google_ml_integration` field in `google_sql_database_instance` resource ([#7249](https://github.com/hashicorp/terraform-provider-google-beta/pull/7249)) +* storage: added validation to `name` field in `google_storage_bucket` resource ([#7237](https://github.com/hashicorp/terraform-provider-google-beta/pull/7237)) +* vmwareengine: fixed stretched cluster creation in `google_vmwareengine_private_cloud` ([#7246](https://github.com/hashicorp/terraform-provider-google-beta/pull/7246)) + +## 5.25.0 (Apr 15, 2024) + +FEATURES: +* **New Data Source:** `google_tags_tag_keys` ([#7196](https://github.com/hashicorp/terraform-provider-google-beta/pull/7196)) +* **New Data Source:** `google_tags_tag_values` ([#7196](https://github.com/hashicorp/terraform-provider-google-beta/pull/7196)) +* **New Resource:** `google_parallelstore_instance` ([#7209](https://github.com/hashicorp/terraform-provider-google-beta/pull/7209)) + +IMPROVEMENTS: +* bigquery: added in-place schema column drop support for `google_bigquery_table` resource ([#7193](https://github.com/hashicorp/terraform-provider-google-beta/pull/7193)) +* compute: added `endpoint_types` field to `google_compute_router_nat` resource ([#7190](https://github.com/hashicorp/terraform-provider-google-beta/pull/7190)) +* compute: added `enable_ipv4`, `ipv4_nexthop_address` and `peer_ipv4_nexthop_address` fields to `google_compute_router_peer` resource ([#7207](https://github.com/hashicorp/terraform-provider-google-beta/pull/7207)) +* compute: added `identifier_range` field to `google_compute_router` resource ([#7207](https://github.com/hashicorp/terraform-provider-google-beta/pull/7207)) +* compute: added `ip_version` field to `google_compute_router_interface` resource ([#7207](https://github.com/hashicorp/terraform-provider-google-beta/pull/7207)) +* compute: increased timeouts from 8 minutes to 20 minutes 
for `google_compute_security_policy` resource ([#7204](https://github.com/hashicorp/terraform-provider-google-beta/pull/7204)) +* container: added `stateful_ha_config` field to `google_container_cluster` resource ([#7206](https://github.com/hashicorp/terraform-provider-google-beta/pull/7206)) +* firestore: added `vector_config` field to `google_firestore_index` resource ([#7180](https://github.com/hashicorp/terraform-provider-google-beta/pull/7180)) +* gkebackup: added `backup_schedule.rpo_config` field to `google_gke_backup_backup_plan` resource ([#7211](https://github.com/hashicorp/terraform-provider-google-beta/pull/7211)) +* networksecurity: added `disabled` field to `google_network_security_firewall_endpoint_association` resource ([#7184](https://github.com/hashicorp/terraform-provider-google-beta/pull/7184)) +* sql: added `enable_google_ml_integration` field to `google_sql_database_instance` resource ([#7208](https://github.com/hashicorp/terraform-provider-google-beta/pull/7208)) +* storage: added labels validation to `google_storage_bucket` resource ([#7212](https://github.com/hashicorp/terraform-provider-google-beta/pull/7212)) +* vmwareengine: added `preferred_zone` and `secondary_zone` fields to `google_vmwareengine_private_cloud` resource ([#7210](https://github.com/hashicorp/terraform-provider-google-beta/pull/7210)) + +BUG FIXES: +* networksecurity: fixed an issue where `google_network_security_firewall_endpoint_association` resource could not be created due to a bad parameter ([#7184](https://github.com/hashicorp/terraform-provider-google-beta/pull/7184)) +* privateca: fixed permission issue by specifying signer certs chain when activating a sub-CA across regions for `google_privateca_certificate_authority` resource ([#7197](https://github.com/hashicorp/terraform-provider-google-beta/pull/7197)) + +## 5.24.0 (Apr 8, 2024) + +IMPROVEMENTS: +* cloudrunv2: added `template.volumes.nfs` field to `google_cloud_run_v2_job` resource 
([#7169](https://github.com/hashicorp/terraform-provider-google-beta/pull/7169)) +* container: added `enable_cilium_clusterwide_network_policy` field to `google_container_cluster` resource ([#7171](https://github.com/hashicorp/terraform-provider-google-beta/pull/7171)) +* container: added `node_pool_auto_config.resource_manager_tags` field to `google_container_cluster` resource ([#7162](https://github.com/hashicorp/terraform-provider-google-beta/pull/7162)) +* gkeonprem: added `disable_bundled_ingress` field to `google_gkeonprem_vmware_cluster` resource ([#7163](https://github.com/hashicorp/terraform-provider-google-beta/pull/7163)) +* redis: added `node_type` and `precise_size_gb` fields to `google_redis_cluster` ([#7174](https://github.com/hashicorp/terraform-provider-google-beta/pull/7174)) +* storage: added `project_number` attribute to `google_storage_bucket` resource and data source ([#7164](https://github.com/hashicorp/terraform-provider-google-beta/pull/7164)) +* storage: added ability to provide `project` argument to `google_storage_bucket` data source. This will not impact reading the resource's data; instead, it helps users avoid calls to the Compute API within the data source. ([#7164](https://github.com/hashicorp/terraform-provider-google-beta/pull/7164)) + +BUG FIXES: +* appengine: fixed a crash in `google_app_engine_flexible_app_version` due to the `deployment` field not being returned by the API ([#7175](https://github.com/hashicorp/terraform-provider-google-beta/pull/7175)) +* bigquery: fixed a crash when `google_bigquery_table` had a `primary_key.columns` entry set to `""` ([#7166](https://github.com/hashicorp/terraform-provider-google-beta/pull/7166)) +* compute: fixed update scenarios on `google_compute_region_target_https_proxy` and `google_compute_target_https_proxy` resources.
([#7170](https://github.com/hashicorp/terraform-provider-google-beta/pull/7170)) +* dataflow: fixed an issue where the provider would crash when `enable_streaming_engine` is set as a `parameter` value in `google_dataflow_flex_template_job` ([#7160](https://github.com/hashicorp/terraform-provider-google-beta/pull/7160)) + +## 5.23.0 (Apr 01, 2024) + +NOTES: +* provider: introduced support for [provider-defined functions](https://developer.hashicorp.com/terraform/plugin/framework/functions). This feature requires Terraform v1.8.0+. ([#7153](https://github.com/hashicorp/terraform-provider-google-beta/pull/7153)) + +DEPRECATIONS: +* kms: deprecated `attestation.external_protection_level_options` in favor of `external_protection_level_options` in `google_kms_crypto_key_version` ([#7155](https://github.com/hashicorp/terraform-provider-google-beta/pull/7155)) + +FEATURES: +* **New Data Source:** `google_apphub_application` ([#7143](https://github.com/hashicorp/terraform-provider-google-beta/pull/7143)) +* **New Resource:** `google_cloud_quotas_quota_preference` ([#7126](https://github.com/hashicorp/terraform-provider-google-beta/pull/7126)) +* **New Resource:** `google_vertex_ai_deployment_resource_pool` ([#7158](https://github.com/hashicorp/terraform-provider-google-beta/pull/7158)) +* **New Resource:** `google_integrations_client` ([#7129](https://github.com/hashicorp/terraform-provider-google-beta/pull/7129)) + +IMPROVEMENTS: +* bigquery: added `dataGovernanceType` to `google_bigquery_routine` resource ([#7149](https://github.com/hashicorp/terraform-provider-google-beta/pull/7149)) +* bigquery: added support for `external_data_configuration.json_extension` to `google_bigquery_table` ([#7138](https://github.com/hashicorp/terraform-provider-google-beta/pull/7138)) +* compute: added `cloud_router_ipv6_address` and `customer_router_ipv6_address` fields to `google_compute_interconnect_attachment` resource
([#7151](https://github.com/hashicorp/terraform-provider-google-beta/pull/7151)) +* compute: added `generated_id` field to `google_compute_region_backend_service` resource ([#7128](https://github.com/hashicorp/terraform-provider-google-beta/pull/7128)) +* integrations: added deletion support for `google_integrations_client` resource ([#7142](https://github.com/hashicorp/terraform-provider-google-beta/pull/7142)) +* kms: added `crypto_key_backend` field to `google_kms_crypto_key` resource ([#7155](https://github.com/hashicorp/terraform-provider-google-beta/pull/7155)) +* metastore: added `scheduled_backup` field to `google_dataproc_metastore_service` resource ([#7140](https://github.com/hashicorp/terraform-provider-google-beta/pull/7140)) +* provider: added provider-defined function `name_from_id` for retrieving the short-form name of a resource from its self link or id ([#7153](https://github.com/hashicorp/terraform-provider-google-beta/pull/7153)) +* provider: added provider-defined function `project_from_id` for retrieving the project id from a resource's self link or id ([#7153](https://github.com/hashicorp/terraform-provider-google-beta/pull/7153)) +* provider: added provider-defined function `region_from_zone` for deriving a region from a zone's name ([#7153](https://github.com/hashicorp/terraform-provider-google-beta/pull/7153)) +* provider: added provider-defined functions `location_from_id`, `region_from_id`, and `zone_from_id` for retrieving the location/region/zone names from a resource's self link or id ([#7153](https://github.com/hashicorp/terraform-provider-google-beta/pull/7153)) + +BUG FIXES: +* cloudrunv2: fixed Terraform state inconsistency when resource `google_cloud_run_v2_job` creation fails ([#7159](https://github.com/hashicorp/terraform-provider-google-beta/pull/7159)) +* cloudrunv2: fixed Terraform state inconsistency when resource `google_cloud_run_v2_service` creation fails 
([#7159](https://github.com/hashicorp/terraform-provider-google-beta/pull/7159)) +* container: fixed `google_container_cluster` permadiff when `master_ipv4_cidr_block` is set for a private flexible cluster ([#7147](https://github.com/hashicorp/terraform-provider-google-beta/pull/7147)) +* dataflow: fixed an issue where the provider would crash when `enableStreamingEngine` is set as a `parameter` value in `google_dataflow_flex_template_job` ([#7160](https://github.com/hashicorp/terraform-provider-google-beta/pull/7160)) +* kms: added top-level `external_protection_level_options` field in `google_kms_crypto_key_version` resource ([#7155](https://github.com/hashicorp/terraform-provider-google-beta/pull/7155)) ## 5.22.0 (Mar 26, 2024) @@ -6,9 +134,9 @@ BREAKING CHANGES: * networksecurity: added required field `billing_project_id` to `google_network_security_firewall_endpoint` resource. Any configuration without `billing_project_id` specified will cause resource creation fail (beta) ([#7124](https://github.com/hashicorp/terraform-provider-google-beta/pull/7124)) FEATURES: +* **New Data Source:** `google_cloud_quotas_quota_info` ([#7092](https://github.com/hashicorp/terraform-provider-google-beta/pull/7092)) * **New Data Source:** `google_cloud_quotas_quota_infos` ([#7116](https://github.com/hashicorp/terraform-provider-google-beta/pull/7116)) * **New Resource:** `google_access_context_manager_service_perimeter_dry_run_resource` ([#7115](https://github.com/hashicorp/terraform-provider-google-beta/pull/7115)) -* **New Resource:** `google_cloud_quotas_quota_info` ([#7092](https://github.com/hashicorp/terraform-provider-google-beta/pull/7092)) IMPROVEMENTS: * accesscontextmanager: supported managing service perimeter dry run resources outside the perimeter via new resource `google_access_context_manager_service_perimeter_dry_run_resource` ([#7115](https://github.com/hashicorp/terraform-provider-google-beta/pull/7115)) diff --git a/GNUmakefile b/GNUmakefile index 
0e296d936f..b88a975957 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -1,4 +1,4 @@ -TEST?=$$(go list ./... | grep -v github.com/hashicorp/terraform-provider-google-beta/scripts) +TEST?=$$(go list -e ./... | grep -v github.com/hashicorp/terraform-provider-google-beta/scripts) WEBSITE_REPO=github.com/hashicorp/terraform-website PKG_NAME=google DIR_NAME=google-beta diff --git a/go.mod b/go.mod index 35bf8537b5..f746849edb 100644 --- a/go.mod +++ b/go.mod @@ -25,10 +25,11 @@ require ( github.com/mitchellh/go-homedir v1.1.0 github.com/mitchellh/hashstructure v1.1.0 github.com/sirupsen/logrus v1.8.1 - golang.org/x/net v0.21.0 - golang.org/x/oauth2 v0.17.0 - google.golang.org/api v0.167.0 - google.golang.org/genproto/googleapis/rpc v0.0.0-20240213162025-012b6fc9bca9 + golang.org/x/exp v0.0.0-20240409090435-93d18d7e34b8 + golang.org/x/net v0.22.0 + golang.org/x/oauth2 v0.18.0 + google.golang.org/api v0.171.0 + google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c google.golang.org/grpc v1.62.1 google.golang.org/protobuf v1.33.0 ) @@ -63,7 +64,7 @@ require ( github.com/google/s2a-go v0.1.7 // indirect github.com/google/uuid v1.6.0 // indirect github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect - github.com/googleapis/gax-go/v2 v2.12.1 // indirect + github.com/googleapis/gax-go/v2 v2.12.3 // indirect github.com/hashicorp/go-checkpoint v0.5.0 // indirect github.com/hashicorp/go-hclog v1.5.0 // indirect github.com/hashicorp/go-plugin v1.6.0 // indirect @@ -91,19 +92,19 @@ require ( github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect github.com/zclconf/go-cty v1.14.2 // indirect go.opencensus.io v0.24.0 // indirect - go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.48.0 // indirect - go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.48.0 // indirect - go.opentelemetry.io/otel v1.23.0 // indirect - go.opentelemetry.io/otel/metric v1.23.0 // indirect - go.opentelemetry.io/otel/trace v1.23.0 // 
indirect - golang.org/x/crypto v0.19.0 // indirect - golang.org/x/mod v0.15.0 // indirect - golang.org/x/sync v0.6.0 // indirect - golang.org/x/sys v0.17.0 // indirect + go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect + go.opentelemetry.io/otel v1.24.0 // indirect + go.opentelemetry.io/otel/metric v1.24.0 // indirect + go.opentelemetry.io/otel/trace v1.24.0 // indirect + golang.org/x/crypto v0.21.0 // indirect + golang.org/x/mod v0.17.0 // indirect + golang.org/x/sync v0.7.0 // indirect + golang.org/x/sys v0.18.0 // indirect golang.org/x/text v0.14.0 // indirect golang.org/x/time v0.5.0 // indirect google.golang.org/appengine v1.6.8 // indirect google.golang.org/genproto v0.0.0-20240205150955-31a09d347014 // indirect - google.golang.org/genproto/googleapis/api v0.0.0-20240205150955-31a09d347014 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20240311132316-a219d84964c2 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect ) diff --git a/go.sum b/go.sum index 1672ae714e..4a68953c65 100644 --- a/go.sum +++ b/go.sum @@ -134,8 +134,8 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/googleapis/enterprise-certificate-proxy v0.3.2 h1:Vie5ybvEvT75RniqhfFxPRy3Bf7vr3h0cechB90XaQs= github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0= -github.com/googleapis/gax-go/v2 v2.12.1 h1:9F8GV9r9ztXyAi00gsMQHNoF51xPZm8uj1dpYt2ZETM= -github.com/googleapis/gax-go/v2 v2.12.1/go.mod h1:61M8vcyyXR2kqKFxKrfA22jaA8JGF7Dc8App1U3H6jc= +github.com/googleapis/gax-go/v2 v2.12.3 h1:5/zPPDvw8Q1SuXjrqrZslrqT7dL/uJT2CQii/cLCKqA= +github.com/googleapis/gax-go/v2 v2.12.3/go.mod h1:AKloxT6GtNbaLm8QTNSidHUVsHYcBHwWRvkNFJUQcS4= github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 
h1:+9834+KizmvFV7pXQGSXQTsaWhq2GjuNUt0aUU0YBYw= github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y= github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA= @@ -273,18 +273,18 @@ github.com/zclconf/go-cty v1.14.2 h1:kTG7lqmBou0Zkx35r6HJHUQTvaRPr5bIAf3AoHS0izI github.com/zclconf/go-cty v1.14.2/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE= go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= -go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.48.0 h1:P+/g8GpuJGYbOp2tAdKrIPUX9JO02q8Q0YNlHolpibA= -go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.48.0/go.mod h1:tIKj3DbO8N9Y2xo52og3irLsPI4GW02DSMtrVgNMgxg= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.48.0 h1:doUP+ExOpH3spVTLS0FcWGLnQrPct/hD/bCPbDRUEAU= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.48.0/go.mod h1:rdENBZMT2OE6Ne/KLwpiXudnAsbdrdBaqBvTN8M8BgA= -go.opentelemetry.io/otel v1.23.0 h1:Df0pqjqExIywbMCMTxkAwzjLZtRf+bBKLbUcpxO2C9E= -go.opentelemetry.io/otel v1.23.0/go.mod h1:YCycw9ZeKhcJFrb34iVSkyT0iczq/zYDtZYFufObyB0= -go.opentelemetry.io/otel/metric v1.23.0 h1:pazkx7ss4LFVVYSxYew7L5I6qvLXHA0Ap2pwV+9Cnpo= -go.opentelemetry.io/otel/metric v1.23.0/go.mod h1:MqUW2X2a6Q8RN96E2/nqNoT+z9BSms20Jb7Bbp+HiTo= +go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 h1:4Pp6oUg3+e/6M4C0A/3kJ2VYa++dsWVTtGgLVj5xtHg= +go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0/go.mod h1:Mjt1i1INqiaoZOMGR1RIUJN+i3ChKoFRqzrRQhlkbs0= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw= 
+go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo= +go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo= +go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI= +go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco= go.opentelemetry.io/otel/sdk v1.21.0 h1:FTt8qirL1EysG6sTQRZ5TokkU8d0ugCj8htOgThZXQ8= go.opentelemetry.io/otel/sdk v1.21.0/go.mod h1:Nna6Yv7PWTdgJHVRD9hIYywQBRx7pbox6nwBnZIxl/E= -go.opentelemetry.io/otel/trace v1.23.0 h1:37Ik5Ib7xfYVb4V1UtnT97T1jI+AoIYkJyPkuL4iJgI= -go.opentelemetry.io/otel/trace v1.23.0/go.mod h1:GSGTbIClEsuZrGIzoEHqsVfxgn5UkggkflQwDScNUsk= +go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI= +go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU= go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= @@ -292,17 +292,19 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/crypto v0.19.0 h1:ENy+Az/9Y1vSrlrvBSyna3PITt4tiZLf7sgCjZBX7Wo= -golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= +golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA= +golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod 
h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= +golang.org/x/exp v0.0.0-20240409090435-93d18d7e34b8 h1:ESSUROHIBHg7USnszlcdmjBEwdMj9VUvU+OPk4yl2mc= +golang.org/x/exp v0.0.0-20240409090435-93d18d7e34b8/go.mod h1:/lliqkxwWAhPjf5oSOIJup2XcqJaw8RGS6k3TGEc7GI= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= -golang.org/x/mod v0.15.0 h1:SernR4v+D55NyBH2QiEQrlBAnj1ECL6AGrA5+dPaMY8= -golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA= +golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -314,19 +316,19 @@ golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwY golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= -golang.org/x/net v0.21.0 h1:AQyQV4dYCvJ7vGmJyKki9+PBdyvhkSd8EIx/qb0AYv4= -golang.org/x/net 
v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44= +golang.org/x/net v0.22.0 h1:9sGLhx7iRIHEiX0oAJ3MRZMUCElJgy7Br1nO+AMN3Tc= +golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.17.0 h1:6m3ZPmLEFdVxKKWnKq4VqZ60gutO35zm+zrAHVmHyDQ= -golang.org/x/oauth2 v0.17.0/go.mod h1:OzPDGQiuQMguemayvdylqddI7qcD9lnSDb+1FiwQ5HA= +golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI= +golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ= -golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M= +golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -344,12 +346,12 @@ golang.org/x/sys 
v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y= -golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= +golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/term v0.17.0 h1:mkTF7LCd6WGJNL3K1Ad7kwxNfYAW6a8a8QqtMblp/4U= -golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= +golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8= +golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= @@ -368,14 +370,14 @@ golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtn golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= -golang.org/x/tools v0.13.0 h1:Iey4qkscZuv0VvIt8E0neZjtPVQFSc870HQ448QgEmQ= -golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= 
+golang.org/x/tools v0.20.0 h1:hz/CVckiOxybQvFw6h7b/q80NTr9IUQb4s1IIzW7KNY= +golang.org/x/tools v0.20.0/go.mod h1:WvitBU7JJf6A4jOdg4S1tviW9bhUxkgeCui/0JHctQg= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/api v0.167.0 h1:CKHrQD1BLRii6xdkatBDXyKzM0mkawt2QP+H3LtPmSE= -google.golang.org/api v0.167.0/go.mod h1:4FcBc686KFi7QI/U51/2GKKevfZMpM17sCdibqe/bSA= +google.golang.org/api v0.171.0 h1:w174hnBPqut76FzW5Qaupt7zY8Kql6fiVjgys4f58sU= +google.golang.org/api v0.171.0/go.mod h1:Hnq5AHm4OTMt2BUVjael2CWZFD6vksJdWCWiUAmjC9o= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM= @@ -386,10 +388,10 @@ google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215/go.mod h1:55QSHmfG google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= google.golang.org/genproto v0.0.0-20240205150955-31a09d347014 h1:g/4bk7P6TPMkAUbUhquq98xey1slwvuVJPosdBqYJlU= google.golang.org/genproto v0.0.0-20240205150955-31a09d347014/go.mod h1:xEgQu1e4stdSSsxPDK8Azkrk/ECl5HvdPf6nbZrTS5M= -google.golang.org/genproto/googleapis/api v0.0.0-20240205150955-31a09d347014 h1:x9PwdEgd11LgK+orcck69WVRo7DezSO4VUMPI4xpc8A= -google.golang.org/genproto/googleapis/api v0.0.0-20240205150955-31a09d347014/go.mod h1:rbHMSEDyoYX62nRVLOCc4Qt1HbsdytAYoVwgjiOhF3I= -google.golang.org/genproto/googleapis/rpc v0.0.0-20240213162025-012b6fc9bca9 
h1:hZB7eLIaYlW9qXRfCq/qDaPdbeY3757uARz5Vvfv+cY= -google.golang.org/genproto/googleapis/rpc v0.0.0-20240213162025-012b6fc9bca9/go.mod h1:YUWgXUFRPfoYK1IHMuxH5K6nPEXSCzIMljnQ59lLRCk= +google.golang.org/genproto/googleapis/api v0.0.0-20240311132316-a219d84964c2 h1:rIo7ocm2roD9DcFIX67Ym8icoGCKSARAiPljFhh5suQ= +google.golang.org/genproto/googleapis/api v0.0.0-20240311132316-a219d84964c2/go.mod h1:O1cOfN1Cy6QEYr7VxtjOyP5AdAuR0aJ/MYZaaof623Y= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c h1:lfpJ/2rWPa/kJgxyyXM8PrNnfCzcmxJ265mADgwmvLI= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240314234333-6e1732d8331c/go.mod h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= diff --git a/google-beta/acctest/bootstrap_test_utils.go b/google-beta/acctest/bootstrap_test_utils.go index db99e10483..f7ce2869e2 100644 --- a/google-beta/acctest/bootstrap_test_utils.go +++ b/google-beta/acctest/bootstrap_test_utils.go @@ -145,13 +145,13 @@ func BootstrapKMSKeyWithPurposeInLocationAndName(t *testing.T, purpose, location } } -var serviceAccountEmail = "tf-bootstrap-service-account" +var serviceAccountPrefix = "tf-bootstrap-sa-" var serviceAccountDisplay = "Bootstrapped Service Account for Terraform tests" // Some tests need a second service account, other than the test runner, to assert functionality on. // This provides a well-known service account that can be used when dynamically creating a service // account isn't an option. 
-func getOrCreateServiceAccount(config *transport_tpg.Config, project string) (*iam.ServiceAccount, error) { +func getOrCreateServiceAccount(config *transport_tpg.Config, project, serviceAccountEmail string) (*iam.ServiceAccount, error) { name := fmt.Sprintf("projects/%s/serviceAccounts/%s@%s.iam.gserviceaccount.com", project, serviceAccountEmail, project) log.Printf("[DEBUG] Verifying %s as bootstrapped service account.\n", name) @@ -208,13 +208,19 @@ func impersonationServiceAccountPermissions(config *transport_tpg.Config, sa *ia return nil } -func BootstrapServiceAccount(t *testing.T, project, testRunner string) string { +// A separate testId should be used for each test, to create separate service accounts for each, +// and avoid race conditions where the policy of the same service account is being modified by 2 +// tests at once. This is needed as long as the function overwrites the policy on every run. +func BootstrapServiceAccount(t *testing.T, testId, testRunner string) string { + project := envvar.GetTestProjectFromEnv() + serviceAccountEmail := serviceAccountPrefix + testId + config := BootstrapConfig(t) if config == nil { return "" } - sa, err := getOrCreateServiceAccount(config, project) + sa, err := getOrCreateServiceAccount(config, project, serviceAccountEmail) if err != nil { t.Fatalf("Bootstrapping failed. 
Cannot retrieve service account, %s", err) } @@ -389,10 +395,9 @@ const SharedTestGlobalAddressPrefix = "tf-bootstrap-addr-" // params are the functions to set compute global address func BootstrapSharedTestGlobalAddress(t *testing.T, testId string, params ...func(*AddressSettings)) string { project := envvar.GetTestProjectFromEnv() - projectNumber := envvar.GetTestProjectNumberFromEnv() addressName := SharedTestGlobalAddressPrefix + testId networkName := BootstrapSharedTestNetwork(t, testId) - networkId := fmt.Sprintf("projects/%v/global/networks/%v", projectNumber, networkName) + networkId := fmt.Sprintf("projects/%v/global/networks/%v", project, networkName) config := BootstrapConfig(t) if config == nil { @@ -1231,7 +1236,8 @@ func SetupProjectsAndGetAccessToken(org, billing, pid, service string, config *t } // Create a service account for project-1 - sa1, err := getOrCreateServiceAccount(config, pid) + serviceAccountEmail := serviceAccountPrefix + service + sa1, err := getOrCreateServiceAccount(config, pid, serviceAccountEmail) if err != nil { return "", err } diff --git a/google-beta/acctest/provider_test_utils.go b/google-beta/acctest/provider_test_utils.go index 9c293f9e0f..5f69b68762 100644 --- a/google-beta/acctest/provider_test_utils.go +++ b/google-beta/acctest/provider_test_utils.go @@ -73,6 +73,35 @@ func AccTestPreCheck(t *testing.T) { } } +// AccTestPreCheck_AdcCredentialsOnly is a PreCheck function for acceptance tests that use ADCs when authenticating +func AccTestPreCheck_AdcCredentialsOnly(t *testing.T) { + if v := os.Getenv("GOOGLE_CREDENTIALS_FILE"); v != "" { + t.Log("Ignoring GOOGLE_CREDENTIALS_FILE; acceptance test doesn't use credentials other than ADCs") + } + + // Fail on set creds + if v := transport_tpg.MultiEnvSearch(envvar.CredsEnvVarsExcludingAdcs()); v != "" { + t.Fatalf("This acceptance test only uses ADCs, so all of %s must be unset", strings.Join(envvar.CredsEnvVarsExcludingAdcs(), ", ")) + } + + // Fail on ADC ENV not set + if v :=
os.Getenv("GOOGLE_APPLICATION_CREDENTIALS"); v == "" { + t.Fatalf("GOOGLE_APPLICATION_CREDENTIALS must be set for acceptance tests that are dependent on ADCs") + } + + if v := transport_tpg.MultiEnvSearch(envvar.ProjectEnvVars); v == "" { + t.Fatalf("One of %s must be set for acceptance tests", strings.Join(envvar.ProjectEnvVars, ", ")) + } + + if v := transport_tpg.MultiEnvSearch(envvar.RegionEnvVars); v == "" { + t.Fatalf("One of %s must be set for acceptance tests", strings.Join(envvar.RegionEnvVars, ", ")) + } + + if v := transport_tpg.MultiEnvSearch(envvar.ZoneEnvVars); v == "" { + t.Fatalf("One of %s must be set for acceptance tests", strings.Join(envvar.ZoneEnvVars, ", ")) + } +} + // GetTestRegion has the same logic as the provider's GetRegion, to be used in tests. func GetTestRegion(is *terraform.InstanceState, config *transport_tpg.Config) (string, error) { if res, ok := is.Attributes["region"]; ok { diff --git a/google-beta/envvar/envvar_utils.go b/google-beta/envvar/envvar_utils.go index 407a33d340..7f1379ec84 100644 --- a/google-beta/envvar/envvar_utils.go +++ b/google-beta/envvar/envvar_utils.go @@ -21,6 +21,18 @@ var CredsEnvVars = []string{ "GOOGLE_USE_DEFAULT_CREDENTIALS", } +// CredsEnvVarsExcludingAdcs returns the contents of CredsEnvVars excluding GOOGLE_APPLICATION_CREDENTIALS +func CredsEnvVarsExcludingAdcs() []string { + envs := CredsEnvVars + var filtered []string + for _, e := range envs { + if e != "GOOGLE_APPLICATION_CREDENTIALS" { + filtered = append(filtered, e) + } + } + return filtered +} + var ProjectNumberEnvVars = []string{ "GOOGLE_PROJECT_NUMBER", } diff --git a/google-beta/fwmodels/provider_model.go b/google-beta/fwmodels/provider_model.go index ce20756e0f..29bbec4505 100644 --- a/google-beta/fwmodels/provider_model.go +++ b/google-beta/fwmodels/provider_model.go @@ -126,6 +126,7 @@ type ProviderModel struct { OrgPolicyCustomEndpoint types.String `tfsdk:"org_policy_custom_endpoint"` OSConfigCustomEndpoint types.String 
`tfsdk:"os_config_custom_endpoint"` OSLoginCustomEndpoint types.String `tfsdk:"os_login_custom_endpoint"` + ParallelstoreCustomEndpoint types.String `tfsdk:"parallelstore_custom_endpoint"` PrivatecaCustomEndpoint types.String `tfsdk:"privateca_custom_endpoint"` PublicCACustomEndpoint types.String `tfsdk:"public_ca_custom_endpoint"` PubsubCustomEndpoint types.String `tfsdk:"pubsub_custom_endpoint"` diff --git a/google-beta/fwprovider/framework_provider.go b/google-beta/fwprovider/framework_provider.go index 5473e8045c..b0c1f25bc3 100644 --- a/google-beta/fwprovider/framework_provider.go +++ b/google-beta/fwprovider/framework_provider.go @@ -19,7 +19,6 @@ import ( "github.com/hashicorp/terraform-provider-google-beta/google-beta/functions" "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwmodels" "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwtransport" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/dns" "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/firebase" "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/resourcemanager" @@ -734,6 +733,12 @@ func (p *FrameworkProvider) Schema(_ context.Context, _ provider.SchemaRequest, transport_tpg.CustomEndpointValidator(), }, }, + "parallelstore_custom_endpoint": &schema.StringAttribute{ + Optional: true, + Validators: []validator.String{ + transport_tpg.CustomEndpointValidator(), + }, + }, "privateca_custom_endpoint": &schema.StringAttribute{ Optional: true, Validators: []validator.String{ @@ -1039,10 +1044,6 @@ func (p *FrameworkProvider) DataSources(_ context.Context) []func() datasource.D return []func() datasource.DataSource{ resourcemanager.NewGoogleClientConfigDataSource, resourcemanager.NewGoogleClientOpenIDUserinfoDataSource, - dns.NewGoogleDnsManagedZoneDataSource, - dns.NewGoogleDnsManagedZonesDataSource, - dns.NewGoogleDnsRecordSetDataSource, - dns.NewGoogleDnsKeysDataSource, 
firebase.NewGoogleFirebaseAndroidAppConfigDataSource, firebase.NewGoogleFirebaseAppleAppConfigDataSource, firebase.NewGoogleFirebaseWebAppConfigDataSource, diff --git a/google-beta/fwtransport/framework_config.go b/google-beta/fwtransport/framework_config.go index 2854483bce..ba8ee745d5 100644 --- a/google-beta/fwtransport/framework_config.go +++ b/google-beta/fwtransport/framework_config.go @@ -149,6 +149,7 @@ type FrameworkProviderConfig struct { OrgPolicyBasePath string OSConfigBasePath string OSLoginBasePath string + ParallelstoreBasePath string PrivatecaBasePath string PublicCABasePath string PubsubBasePath string @@ -316,6 +317,7 @@ func (p *FrameworkProviderConfig) LoadAndValidateFramework(ctx context.Context, p.OrgPolicyBasePath = data.OrgPolicyCustomEndpoint.ValueString() p.OSConfigBasePath = data.OSConfigCustomEndpoint.ValueString() p.OSLoginBasePath = data.OSLoginCustomEndpoint.ValueString() + p.ParallelstoreBasePath = data.ParallelstoreCustomEndpoint.ValueString() p.PrivatecaBasePath = data.PrivatecaCustomEndpoint.ValueString() p.PublicCABasePath = data.PublicCACustomEndpoint.ValueString() p.PubsubBasePath = data.PubsubCustomEndpoint.ValueString() @@ -1257,6 +1259,14 @@ func (p *FrameworkProviderConfig) HandleDefaults(ctx context.Context, data *fwmo data.OSLoginCustomEndpoint = types.StringValue(customEndpoint.(string)) } } + if data.ParallelstoreCustomEndpoint.IsNull() { + customEndpoint := transport_tpg.MultiEnvDefault([]string{ + "GOOGLE_PARALLELSTORE_CUSTOM_ENDPOINT", + }, transport_tpg.DefaultBasePaths[transport_tpg.ParallelstoreBasePathKey]) + if customEndpoint != nil { + data.ParallelstoreCustomEndpoint = types.StringValue(customEndpoint.(string)) + } + } if data.PrivatecaCustomEndpoint.IsNull() { customEndpoint := transport_tpg.MultiEnvDefault([]string{ "GOOGLE_PRIVATECA_CUSTOM_ENDPOINT", diff --git a/google-beta/provider/provider.go b/google-beta/provider/provider.go index 78672828a3..014a1484c6 100644 --- a/google-beta/provider/provider.go +++ 
b/google-beta/provider/provider.go @@ -635,6 +635,11 @@ func Provider() *schema.Provider { Optional: true, ValidateFunc: transport_tpg.ValidateCustomEndpoint, }, + "parallelstore_custom_endpoint": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: transport_tpg.ValidateCustomEndpoint, + }, "privateca_custom_endpoint": { Type: schema.TypeString, Optional: true, @@ -936,6 +941,8 @@ func ProviderConfigure(ctx context.Context, d *schema.ResourceData, p *schema.Pr } } } + // Configure DCL basePath + transport_tpg.ProviderDCLConfigure(d, &config) // Replace hostname by the universe_domain field. if config.UniverseDomain != "" && config.UniverseDomain != "googleapis.com" { @@ -1098,6 +1105,7 @@ func ProviderConfigure(ctx context.Context, d *schema.ResourceData, p *schema.Pr config.OrgPolicyBasePath = d.Get("org_policy_custom_endpoint").(string) config.OSConfigBasePath = d.Get("os_config_custom_endpoint").(string) config.OSLoginBasePath = d.Get("os_login_custom_endpoint").(string) + config.ParallelstoreBasePath = d.Get("parallelstore_custom_endpoint").(string) config.PrivatecaBasePath = d.Get("privateca_custom_endpoint").(string) config.PublicCABasePath = d.Get("public_ca_custom_endpoint").(string) config.PubsubBasePath = d.Get("pubsub_custom_endpoint").(string) @@ -1155,7 +1163,7 @@ func ProviderConfigure(ctx context.Context, d *schema.ResourceData, p *schema.Pr return nil, diag.FromErr(err) } - return transport_tpg.ProviderDCLConfigure(d, &config), nil + return &config, nil } func mergeResourceMaps(ms ...map[string]*schema.Resource) (map[string]*schema.Resource, error) { diff --git a/google-beta/provider/provider_mmv1_resources.go b/google-beta/provider/provider_mmv1_resources.go index 0165477f64..98ef3c3fd3 100644 --- a/google-beta/provider/provider_mmv1_resources.go +++ b/google-beta/provider/provider_mmv1_resources.go @@ -103,6 +103,7 @@ import ( "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/orgpolicy" 
"github.com/hashicorp/terraform-provider-google-beta/google-beta/services/osconfig" "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/oslogin" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/parallelstore" "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/privateca" "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/publicca" "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/pubsub" @@ -232,6 +233,10 @@ var handwrittenDatasources = map[string]*schema.Resource{ "google_container_registry_repository": containeranalysis.DataSourceGoogleContainerRepo(), "google_dataproc_metastore_service": dataprocmetastore.DataSourceDataprocMetastoreService(), "google_datastream_static_ips": datastream.DataSourceGoogleDatastreamStaticIps(), + "google_dns_keys": dns.DataSourceDNSKeys(), + "google_dns_managed_zone": dns.DataSourceDnsManagedZone(), + "google_dns_managed_zones": dns.DataSourceDnsManagedZones(), + "google_dns_record_set": dns.DataSourceDnsRecordSet(), "google_filestore_instance": filestore.DataSourceGoogleFilestoreInstance(), "google_iam_policy": resourcemanager.DataSourceGoogleIamPolicy(), "google_iam_role": resourcemanager.DataSourceGoogleIamRole(), @@ -296,12 +301,15 @@ var handwrittenDatasources = map[string]*schema.Resource{ "google_service_networking_peered_dns_domain": servicenetworking.DataSourceGoogleServiceNetworkingPeeredDNSDomain(), "google_storage_bucket": storage.DataSourceGoogleStorageBucket(), "google_storage_bucket_object": storage.DataSourceGoogleStorageBucketObject(), + "google_storage_bucket_objects": storage.DataSourceGoogleStorageBucketObjects(), "google_storage_bucket_object_content": storage.DataSourceGoogleStorageBucketObjectContent(), "google_storage_object_signed_url": storage.DataSourceGoogleSignedUrl(), "google_storage_project_service_account": storage.DataSourceGoogleStorageProjectServiceAccount(), 
"google_storage_transfer_project_service_account": storagetransfer.DataSourceGoogleStorageTransferProjectServiceAccount(), "google_tags_tag_key": tags.DataSourceGoogleTagsTagKey(), + "google_tags_tag_keys": tags.DataSourceGoogleTagsTagKeys(), "google_tags_tag_value": tags.DataSourceGoogleTagsTagValue(), + "google_tags_tag_values": tags.DataSourceGoogleTagsTagValues(), "google_tpu_tensorflow_versions": tpu.DataSourceTpuTensorflowVersions(), "google_tpu_v2_runtime_versions": tpuv2.DataSourceTpuV2RuntimeVersions(), "google_tpu_v2_accelerator_types": tpuv2.DataSourceTpuV2AcceleratorTypes(), @@ -441,9 +449,9 @@ var handwrittenIAMDatasources = map[string]*schema.Resource{ } // Resources -// Generated resources: 456 +// Generated resources: 461 // Generated IAM resources: 267 -// Total generated resources: 723 +// Total generated resources: 728 var generatedResources = map[string]*schema.Resource{ "google_folder_access_approval_settings": accessapproval.ResourceAccessApprovalFolderSettings(), "google_organization_access_approval_settings": accessapproval.ResourceAccessApprovalOrganizationSettings(), @@ -706,6 +714,7 @@ var generatedResources = map[string]*schema.Resource{ "google_compute_route": compute.ResourceComputeRoute(), "google_compute_router": compute.ResourceComputeRouter(), "google_compute_router_nat": compute.ResourceComputeRouterNat(), + "google_compute_security_policy_rule": compute.ResourceComputeSecurityPolicyRule(), "google_compute_service_attachment": compute.ResourceComputeServiceAttachment(), "google_compute_snapshot": compute.ResourceComputeSnapshot(), "google_compute_snapshot_iam_binding": tpgiamresource.ResourceIamBinding(compute.ComputeSnapshotIamSchema, compute.ComputeSnapshotIamUpdaterProducer, compute.ComputeSnapshotIdParseFunc), @@ -764,6 +773,7 @@ var generatedResources = map[string]*schema.Resource{ "google_data_fusion_instance_iam_member": tpgiamresource.ResourceIamMember(datafusion.DataFusionInstanceIamSchema, 
datafusion.DataFusionInstanceIamUpdaterProducer, datafusion.DataFusionInstanceIdParseFunc), "google_data_fusion_instance_iam_policy": tpgiamresource.ResourceIamPolicy(datafusion.DataFusionInstanceIamSchema, datafusion.DataFusionInstanceIamUpdaterProducer, datafusion.DataFusionInstanceIdParseFunc), "google_data_loss_prevention_deidentify_template": datalossprevention.ResourceDataLossPreventionDeidentifyTemplate(), + "google_data_loss_prevention_discovery_config": datalossprevention.ResourceDataLossPreventionDiscoveryConfig(), "google_data_loss_prevention_inspect_template": datalossprevention.ResourceDataLossPreventionInspectTemplate(), "google_data_loss_prevention_job_trigger": datalossprevention.ResourceDataLossPreventionJobTrigger(), "google_data_loss_prevention_stored_info_type": datalossprevention.ResourceDataLossPreventionStoredInfoType(), @@ -951,6 +961,7 @@ var generatedResources = map[string]*schema.Resource{ "google_identity_platform_tenant_oauth_idp_config": identityplatform.ResourceIdentityPlatformTenantOauthIdpConfig(), "google_integration_connectors_connection": integrationconnectors.ResourceIntegrationConnectorsConnection(), "google_integration_connectors_endpoint_attachment": integrationconnectors.ResourceIntegrationConnectorsEndpointAttachment(), + "google_integrations_auth_config": integrations.ResourceIntegrationsAuthConfig(), "google_integrations_client": integrations.ResourceIntegrationsClient(), "google_kms_crypto_key": kms.ResourceKMSCryptoKey(), "google_kms_crypto_key_version": kms.ResourceKMSCryptoKeyVersion(), @@ -985,6 +996,7 @@ var generatedResources = map[string]*schema.Resource{ "google_netapp_backup_vault": netapp.ResourceNetappbackupVault(), "google_netapp_kmsconfig": netapp.ResourceNetappkmsconfig(), "google_netapp_storage_pool": netapp.ResourceNetappstoragePool(), + "google_network_connectivity_internal_range": networkconnectivity.ResourceNetworkConnectivityInternalRange(), "google_network_connectivity_policy_based_route": 
networkconnectivity.ResourceNetworkConnectivityPolicyBasedRoute(), "google_network_connectivity_service_connection_policy": networkconnectivity.ResourceNetworkConnectivityServiceConnectionPolicy(), "google_network_management_connectivity_test": networkmanagement.ResourceNetworkManagementConnectivityTest(), @@ -1028,6 +1040,7 @@ var generatedResources = map[string]*schema.Resource{ "google_os_config_guest_policies": osconfig.ResourceOSConfigGuestPolicies(), "google_os_config_patch_deployment": osconfig.ResourceOSConfigPatchDeployment(), "google_os_login_ssh_public_key": oslogin.ResourceOSLoginSSHPublicKey(), + "google_parallelstore_instance": parallelstore.ResourceParallelstoreInstance(), "google_privateca_ca_pool": privateca.ResourcePrivatecaCaPool(), "google_privateca_ca_pool_iam_binding": tpgiamresource.ResourceIamBinding(privateca.PrivatecaCaPoolIamSchema, privateca.PrivatecaCaPoolIamUpdaterProducer, privateca.PrivatecaCaPoolIdParseFunc), "google_privateca_ca_pool_iam_member": tpgiamresource.ResourceIamMember(privateca.PrivatecaCaPoolIamSchema, privateca.PrivatecaCaPoolIamUpdaterProducer, privateca.PrivatecaCaPoolIdParseFunc), @@ -1185,6 +1198,7 @@ var handwrittenResources = map[string]*schema.Resource{ "google_billing_subaccount": resourcemanager.ResourceBillingSubaccount(), "google_cloudfunctions_function": cloudfunctions.ResourceCloudFunctionsFunction(), "google_composer_environment": composer.ResourceComposerEnvironment(), + "google_composer_user_workloads_secret": composer.ResourceComposerUserWorkloadsSecret(), "google_compute_attached_disk": compute.ResourceComputeAttachedDisk(), "google_compute_instance": compute.ResourceComputeInstance(), "google_compute_disk_async_replication": compute.ResourceComputeDiskAsyncReplication(), @@ -1241,6 +1255,7 @@ var handwrittenResources = map[string]*schema.Resource{ "google_project_default_service_accounts": resourcemanager.ResourceGoogleProjectDefaultServiceAccounts(), "google_project_service": 
resourcemanager.ResourceGoogleProjectService(), "google_project_iam_custom_role": resourcemanager.ResourceGoogleProjectIamCustomRole(), + "google_project_iam_member_remove": resourcemanager.ResourceGoogleProjectIamMemberRemove(), "google_project_organization_policy": resourcemanager.ResourceGoogleProjectOrganizationPolicy(), "google_project_usage_export_bucket": compute.ResourceProjectUsageBucket(), "google_runtimeconfig_config": runtimeconfig.ResourceRuntimeconfigConfig(), diff --git a/google-beta/services/accessapproval/resource_folder_access_approval_settings.go b/google-beta/services/accessapproval/resource_folder_access_approval_settings.go index 495c14187e..c4cd59b92d 100644 --- a/google-beta/services/accessapproval/resource_folder_access_approval_settings.go +++ b/google-beta/services/accessapproval/resource_folder_access_approval_settings.go @@ -21,6 +21,7 @@ import ( "bytes" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -221,6 +222,7 @@ func resourceAccessApprovalFolderSettingsCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) updateMask := []string{} if d.HasChange("notification_emails") { @@ -248,6 +250,7 @@ func resourceAccessApprovalFolderSettingsCreate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FolderSettings: %s", err) @@ -287,12 +290,14 @@ func resourceAccessApprovalFolderSettingsRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessApprovalFolderSettings %q", d.Id())) @@ -358,6 +363,7 @@ func resourceAccessApprovalFolderSettingsUpdate(d *schema.ResourceData, 
meta int } log.Printf("[DEBUG] Updating FolderSettings %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("notification_emails") { @@ -393,6 +399,7 @@ func resourceAccessApprovalFolderSettingsUpdate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/accessapproval/resource_organization_access_approval_settings.go b/google-beta/services/accessapproval/resource_organization_access_approval_settings.go index 16bc14806c..051d397d05 100644 --- a/google-beta/services/accessapproval/resource_organization_access_approval_settings.go +++ b/google-beta/services/accessapproval/resource_organization_access_approval_settings.go @@ -20,6 +20,7 @@ package accessapproval import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -181,6 +182,7 @@ func resourceAccessApprovalOrganizationSettingsCreate(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) updateMask := []string{} if d.HasChange("notification_emails") { @@ -208,6 +210,7 @@ func resourceAccessApprovalOrganizationSettingsCreate(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating OrganizationSettings: %s", err) @@ -247,12 +250,14 @@ func resourceAccessApprovalOrganizationSettingsRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessApprovalOrganizationSettings %q", d.Id())) @@ -318,6 +323,7 @@ func resourceAccessApprovalOrganizationSettingsUpdate(d *schema.ResourceData, me } log.Printf("[DEBUG] Updating 
OrganizationSettings %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("notification_emails") { @@ -353,6 +359,7 @@ func resourceAccessApprovalOrganizationSettingsUpdate(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/accessapproval/resource_project_access_approval_settings.go b/google-beta/services/accessapproval/resource_project_access_approval_settings.go index 14756907fb..ba310f6696 100644 --- a/google-beta/services/accessapproval/resource_project_access_approval_settings.go +++ b/google-beta/services/accessapproval/resource_project_access_approval_settings.go @@ -20,6 +20,7 @@ package accessapproval import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -195,6 +196,7 @@ func resourceAccessApprovalProjectSettingsCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) updateMask := []string{} if d.HasChange("notification_emails") { @@ -226,6 +228,7 @@ func resourceAccessApprovalProjectSettingsCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ProjectSettings: %s", err) @@ -265,12 +268,14 @@ func resourceAccessApprovalProjectSettingsRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessApprovalProjectSettings %q", d.Id())) @@ -345,6 +350,7 @@ func resourceAccessApprovalProjectSettingsUpdate(d *schema.ResourceData, meta in } log.Printf("[DEBUG] Updating ProjectSettings %q: %#v", d.Id(), obj) + headers := 
make(http.Header) updateMask := []string{} if d.HasChange("notification_emails") { @@ -384,6 +390,7 @@ func resourceAccessApprovalProjectSettingsUpdate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_access_level.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_access_level.go index 6e7430add4..8f38aae193 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_access_level.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_access_level.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -387,6 +388,7 @@ func resourceAccessContextManagerAccessLevelCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -395,6 +397,7 @@ func resourceAccessContextManagerAccessLevelCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AccessLevel: %s", err) @@ -455,12 +458,14 @@ func resourceAccessContextManagerAccessLevelRead(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerAccessLevel %q", d.Id())) @@ -531,6 +536,7 @@ func resourceAccessContextManagerAccessLevelUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating AccessLevel %q: %#v", d.Id(), obj) + headers 
:= make(http.Header) updateMask := []string{} if d.HasChange("title") { @@ -570,6 +576,7 @@ func resourceAccessContextManagerAccessLevelUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -611,6 +618,8 @@ func resourceAccessContextManagerAccessLevelDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AccessLevel %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -620,6 +629,7 @@ func resourceAccessContextManagerAccessLevelDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AccessLevel") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_access_level_condition.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_access_level_condition.go index 8414234cd0..3798b5f40d 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_access_level_condition.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_access_level_condition.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -316,6 +317,7 @@ func resourceAccessContextManagerAccessLevelConditionCreate(d *schema.ResourceDa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -324,6 +326,7 @@ func resourceAccessContextManagerAccessLevelConditionCreate(d *schema.ResourceDa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AccessLevelCondition: %s", err) @@ -410,12 +413,14 @@ func 
resourceAccessContextManagerAccessLevelConditionRead(d *schema.ResourceData billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerAccessLevelCondition %q", d.Id())) @@ -495,6 +500,8 @@ func resourceAccessContextManagerAccessLevelConditionDelete(d *schema.ResourceDa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AccessLevelCondition %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -504,6 +511,7 @@ func resourceAccessContextManagerAccessLevelConditionDelete(d *schema.ResourceDa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AccessLevelCondition") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_access_levels.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_access_levels.go index 533d48d2cb..1aaef512b4 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_access_levels.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_access_levels.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -358,6 +359,7 @@ func resourceAccessContextManagerAccessLevelsCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -366,6 +368,7 @@ func resourceAccessContextManagerAccessLevelsCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AccessLevels: %s", err) @@ -412,12 +415,14 @@ func resourceAccessContextManagerAccessLevelsRead(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerAccessLevels %q", d.Id())) @@ -453,6 +458,7 @@ func resourceAccessContextManagerAccessLevelsUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating AccessLevels %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -467,6 +473,7 @@ func resourceAccessContextManagerAccessLevelsUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_access_policy.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_access_policy.go index bea8aacd54..e7c858467e 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_access_policy.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_access_policy.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -130,6 +131,7 @@ func resourceAccessContextManagerAccessPolicyCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -138,6 +140,7 @@ func 
resourceAccessContextManagerAccessPolicyCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AccessPolicy: %s", err) @@ -210,12 +213,14 @@ func resourceAccessContextManagerAccessPolicyRead(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerAccessPolicy %q", d.Id())) @@ -272,6 +277,7 @@ func resourceAccessContextManagerAccessPolicyUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating AccessPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("title") { @@ -303,6 +309,7 @@ func resourceAccessContextManagerAccessPolicyUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -344,6 +351,8 @@ func resourceAccessContextManagerAccessPolicyDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AccessPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -353,6 +362,7 @@ func resourceAccessContextManagerAccessPolicyDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AccessPolicy") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_authorized_orgs_desc.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_authorized_orgs_desc.go index 9e17e1e065..39efca8276 
100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_authorized_orgs_desc.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_authorized_orgs_desc.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -192,6 +193,7 @@ func resourceAccessContextManagerAuthorizedOrgsDescCreate(d *schema.ResourceData billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -200,6 +202,7 @@ func resourceAccessContextManagerAuthorizedOrgsDescCreate(d *schema.ResourceData UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AuthorizedOrgsDesc: %s", err) @@ -265,12 +268,14 @@ func resourceAccessContextManagerAuthorizedOrgsDescRead(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerAuthorizedOrgsDesc %q", d.Id())) @@ -329,6 +334,7 @@ func resourceAccessContextManagerAuthorizedOrgsDescUpdate(d *schema.ResourceData } log.Printf("[DEBUG] Updating AuthorizedOrgsDesc %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("orgs") { @@ -354,6 +360,7 @@ func resourceAccessContextManagerAuthorizedOrgsDescUpdate(d *schema.ResourceData UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -394,6 +401,8 @@ func resourceAccessContextManagerAuthorizedOrgsDescDelete(d *schema.ResourceData billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting 
AuthorizedOrgsDesc %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -403,6 +412,7 @@ func resourceAccessContextManagerAuthorizedOrgsDescDelete(d *schema.ResourceData UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AuthorizedOrgsDesc") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_egress_policy.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_egress_policy.go index 969d7d6dfe..906a37cf95 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_egress_policy.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_egress_policy.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -100,6 +101,7 @@ func resourceAccessContextManagerEgressPolicyCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -108,6 +110,7 @@ func resourceAccessContextManagerEgressPolicyCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EgressPolicy: %s", err) @@ -178,12 +181,14 @@ func resourceAccessContextManagerEgressPolicyRead(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerEgressPolicy %q", d.Id())) @@ -238,6 +243,8 @@ func 
resourceAccessContextManagerEgressPolicyDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EgressPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -247,6 +254,7 @@ func resourceAccessContextManagerEgressPolicyDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EgressPolicy") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_gcp_user_access_binding.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_gcp_user_access_binding.go index 12b3010494..f2dbe63976 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_gcp_user_access_binding.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_gcp_user_access_binding.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -114,6 +115,7 @@ func resourceAccessContextManagerGcpUserAccessBindingCreate(d *schema.ResourceDa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -122,6 +124,7 @@ func resourceAccessContextManagerGcpUserAccessBindingCreate(d *schema.ResourceDa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating GcpUserAccessBinding: %s", err) @@ -182,12 +185,14 @@ func resourceAccessContextManagerGcpUserAccessBindingRead(d *schema.ResourceData billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: 
headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerGcpUserAccessBinding %q", d.Id())) @@ -229,6 +234,7 @@ func resourceAccessContextManagerGcpUserAccessBindingUpdate(d *schema.ResourceDa } log.Printf("[DEBUG] Updating GcpUserAccessBinding %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("access_levels") { @@ -256,6 +262,7 @@ func resourceAccessContextManagerGcpUserAccessBindingUpdate(d *schema.ResourceDa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -297,6 +304,8 @@ func resourceAccessContextManagerGcpUserAccessBindingDelete(d *schema.ResourceDa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting GcpUserAccessBinding %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -306,6 +315,7 @@ func resourceAccessContextManagerGcpUserAccessBindingDelete(d *schema.ResourceDa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "GcpUserAccessBinding") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_ingress_policy.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_ingress_policy.go index 85efd88ec1..9017be8608 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_ingress_policy.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_ingress_policy.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -100,6 +101,7 @@ func resourceAccessContextManagerIngressPolicyCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: 
"PATCH", @@ -108,6 +110,7 @@ func resourceAccessContextManagerIngressPolicyCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating IngressPolicy: %s", err) @@ -178,12 +181,14 @@ func resourceAccessContextManagerIngressPolicyRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerIngressPolicy %q", d.Id())) @@ -238,6 +243,8 @@ func resourceAccessContextManagerIngressPolicyDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting IngressPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -247,6 +254,7 @@ func resourceAccessContextManagerIngressPolicyDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "IngressPolicy") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter.go index d24ca8fc3f..098c952771 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -941,6 +942,7 @@ func resourceAccessContextManagerServicePerimeterCreate(d *schema.ResourceData, billingProject = 
bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -949,6 +951,7 @@ func resourceAccessContextManagerServicePerimeterCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServicePerimeter: %s", err) @@ -1009,12 +1012,14 @@ func resourceAccessContextManagerServicePerimeterRead(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerServicePerimeter %q", d.Id())) @@ -1110,6 +1115,7 @@ func resourceAccessContextManagerServicePerimeterUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating ServicePerimeter %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("title") { @@ -1153,6 +1159,7 @@ func resourceAccessContextManagerServicePerimeterUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1201,6 +1208,8 @@ func resourceAccessContextManagerServicePerimeterDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ServicePerimeter %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1210,6 +1219,7 @@ func resourceAccessContextManagerServicePerimeterDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServicePerimeter") diff --git 
a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_dry_run_resource.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_dry_run_resource.go index 725f032ded..6ac34571fc 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_dry_run_resource.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_dry_run_resource.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -109,6 +110,7 @@ func resourceAccessContextManagerServicePerimeterDryRunResourceCreate(d *schema. billingProject = bp } + headers := make(http.Header) obj["use_explicit_dry_run_spec"] = true res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -118,6 +120,7 @@ func resourceAccessContextManagerServicePerimeterDryRunResourceCreate(d *schema. UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServicePerimeterDryRunResource: %s", err) @@ -188,12 +191,14 @@ func resourceAccessContextManagerServicePerimeterDryRunResourceRead(d *schema.Re billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerServicePerimeterDryRunResource %q", d.Id())) @@ -255,6 +260,7 @@ func resourceAccessContextManagerServicePerimeterDryRunResourceDelete(d *schema. 
billingProject = bp } + headers := make(http.Header) obj["use_explicit_dry_run_spec"] = true log.Printf("[DEBUG] Deleting ServicePerimeterDryRunResource %q", d.Id()) @@ -266,6 +272,7 @@ func resourceAccessContextManagerServicePerimeterDryRunResourceDelete(d *schema. UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServicePerimeterDryRunResource") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_egress_policy.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_egress_policy.go index 7b80d4dd49..ea35076ec6 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_egress_policy.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_egress_policy.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -245,6 +246,7 @@ func resourceAccessContextManagerServicePerimeterEgressPolicyCreate(d *schema.Re billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -253,6 +255,7 @@ func resourceAccessContextManagerServicePerimeterEgressPolicyCreate(d *schema.Re UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServicePerimeterEgressPolicy: %s", err) @@ -326,12 +329,14 @@ func resourceAccessContextManagerServicePerimeterEgressPolicyRead(d *schema.Reso billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerServicePerimeterEgressPolicy %q", d.Id())) @@ -396,6 +401,8 @@ func resourceAccessContextManagerServicePerimeterEgressPolicyDelete(d *schema.Re billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ServicePerimeterEgressPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -405,6 +412,7 @@ func resourceAccessContextManagerServicePerimeterEgressPolicyDelete(d *schema.Re UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServicePerimeterEgressPolicy") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_ingress_policy.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_ingress_policy.go index 9ac9da1646..2ca70c87bc 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_ingress_policy.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_ingress_policy.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -249,6 +250,7 @@ func resourceAccessContextManagerServicePerimeterIngressPolicyCreate(d *schema.R billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -257,6 +259,7 @@ func resourceAccessContextManagerServicePerimeterIngressPolicyCreate(d *schema.R UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServicePerimeterIngressPolicy: %s", err) @@ -330,12 +333,14 @@ func resourceAccessContextManagerServicePerimeterIngressPolicyRead(d *schema.Res billingProject = 
bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerServicePerimeterIngressPolicy %q", d.Id())) @@ -400,6 +405,8 @@ func resourceAccessContextManagerServicePerimeterIngressPolicyDelete(d *schema.R billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ServicePerimeterIngressPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -409,6 +416,7 @@ func resourceAccessContextManagerServicePerimeterIngressPolicyDelete(d *schema.R UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServicePerimeterIngressPolicy") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_resource.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_resource.go index b537c3abbe..2016f55b86 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_resource.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeter_resource.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -109,6 +110,7 @@ func resourceAccessContextManagerServicePerimeterResourceCreate(d *schema.Resour billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -117,6 +119,7 @@ func resourceAccessContextManagerServicePerimeterResourceCreate(d *schema.Resour UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: 
headers, }) if err != nil { return fmt.Errorf("Error creating ServicePerimeterResource: %s", err) @@ -187,12 +190,14 @@ func resourceAccessContextManagerServicePerimeterResourceRead(d *schema.Resource billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerServicePerimeterResource %q", d.Id())) @@ -254,6 +259,8 @@ func resourceAccessContextManagerServicePerimeterResourceDelete(d *schema.Resour billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ServicePerimeterResource %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -263,6 +270,7 @@ func resourceAccessContextManagerServicePerimeterResourceDelete(d *schema.Resour UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServicePerimeterResource") diff --git a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeters.go b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeters.go index 679f35f12a..421a83845e 100644 --- a/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeters.go +++ b/google-beta/services/accesscontextmanager/resource_access_context_manager_service_perimeters.go @@ -20,6 +20,7 @@ package accesscontextmanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -900,6 +901,7 @@ func resourceAccessContextManagerServicePerimetersCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -908,6 +910,7 
@@ func resourceAccessContextManagerServicePerimetersCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServicePerimeters: %s", err) @@ -954,12 +957,14 @@ func resourceAccessContextManagerServicePerimetersRead(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessContextManagerServicePerimeters %q", d.Id())) @@ -1001,6 +1006,7 @@ func resourceAccessContextManagerServicePerimetersUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating ServicePerimeters %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -1015,6 +1021,7 @@ func resourceAccessContextManagerServicePerimetersUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/activedirectory/resource_active_directory_domain.go b/google-beta/services/activedirectory/resource_active_directory_domain.go index 40038dc8db..13e424c5c5 100644 --- a/google-beta/services/activedirectory/resource_active_directory_domain.go +++ b/google-beta/services/activedirectory/resource_active_directory_domain.go @@ -20,6 +20,7 @@ package activedirectory import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -199,6 +200,7 @@ func resourceActiveDirectoryDomainCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: 
"POST", @@ -207,6 +209,7 @@ func resourceActiveDirectoryDomainCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -274,12 +277,14 @@ func resourceActiveDirectoryDomainRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -362,6 +367,7 @@ func resourceActiveDirectoryDomainUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating Domain %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("authorized_networks") { @@ -397,6 +403,7 @@ func resourceActiveDirectoryDomainUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -445,6 +452,8 @@ func resourceActiveDirectoryDomainDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Domain %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -454,6 +463,7 @@ func resourceActiveDirectoryDomainDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { diff --git 
a/google-beta/services/activedirectory/resource_active_directory_domain_trust.go b/google-beta/services/activedirectory/resource_active_directory_domain_trust.go index 807615b99d..2636d35320 100644 --- a/google-beta/services/activedirectory/resource_active_directory_domain_trust.go +++ b/google-beta/services/activedirectory/resource_active_directory_domain_trust.go @@ -20,6 +20,7 @@ package activedirectory import ( "fmt" "log" + "net/http" "reflect" "time" @@ -181,6 +182,7 @@ func resourceActiveDirectoryDomainTrustCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -189,6 +191,7 @@ func resourceActiveDirectoryDomainTrustCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DomainTrust: %s", err) @@ -273,12 +276,14 @@ func resourceActiveDirectoryDomainTrustRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ActiveDirectoryDomainTrust %q", d.Id())) @@ -395,6 +400,7 @@ func resourceActiveDirectoryDomainTrustUpdate(d *schema.ResourceData, meta inter } log.Printf("[DEBUG] Updating DomainTrust %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -409,6 +415,7 @@ func resourceActiveDirectoryDomainTrustUpdate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { 
diff --git a/google-beta/services/activedirectory/resource_active_directory_peering.go b/google-beta/services/activedirectory/resource_active_directory_peering.go index ca38b695b2..4bad14ea61 100644 --- a/google-beta/services/activedirectory/resource_active_directory_peering.go +++ b/google-beta/services/activedirectory/resource_active_directory_peering.go @@ -20,6 +20,7 @@ package activedirectory import ( "fmt" "log" + "net/http" "reflect" "time" @@ -167,6 +168,7 @@ func resourceActiveDirectoryPeeringCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -175,6 +177,7 @@ func resourceActiveDirectoryPeeringCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Peering: %s", err) @@ -241,12 +244,14 @@ func resourceActiveDirectoryPeeringRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ActiveDirectoryPeering %q", d.Id())) @@ -313,6 +318,7 @@ func resourceActiveDirectoryPeeringUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating Peering %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -327,6 +333,7 @@ func resourceActiveDirectoryPeeringUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -373,6 +380,8 @@ 
func resourceActiveDirectoryPeeringDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Peering %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -382,6 +391,7 @@ func resourceActiveDirectoryPeeringDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Peering") diff --git a/google-beta/services/alloydb/resource_alloydb_backup.go b/google-beta/services/alloydb/resource_alloydb_backup.go index b696765044..0f8434a0dc 100644 --- a/google-beta/services/alloydb/resource_alloydb_backup.go +++ b/google-beta/services/alloydb/resource_alloydb_backup.go @@ -20,6 +20,7 @@ package alloydb import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -336,6 +337,7 @@ func resourceAlloydbBackupCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -344,6 +346,7 @@ func resourceAlloydbBackupCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Backup: %s", err) @@ -396,12 +399,14 @@ func resourceAlloydbBackupRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AlloydbBackup %q", d.Id())) @@ -548,6 +553,7 @@ func resourceAlloydbBackupUpdate(d *schema.ResourceData, meta interface{}) 
error } log.Printf("[DEBUG] Updating Backup %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -595,6 +601,7 @@ func resourceAlloydbBackupUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -642,6 +649,8 @@ func resourceAlloydbBackupDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Backup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -651,6 +660,7 @@ func resourceAlloydbBackupDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Backup") diff --git a/google-beta/services/alloydb/resource_alloydb_cluster.go b/google-beta/services/alloydb/resource_alloydb_cluster.go index 2b0556b86e..c99570b863 100644 --- a/google-beta/services/alloydb/resource_alloydb_cluster.go +++ b/google-beta/services/alloydb/resource_alloydb_cluster.go @@ -20,6 +20,7 @@ package alloydb import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -708,6 +709,7 @@ func resourceAlloydbClusterCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) // Read the restore variables from obj and remove them, since they do not map to anything in the cluster var backupSource interface{} var continuousBackupSource interface{} @@ -780,6 +782,7 @@ func resourceAlloydbClusterCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Cluster: %s", err) @@ -832,12 +835,14 @@ func resourceAlloydbClusterRead(d *schema.ResourceData, 
meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AlloydbCluster %q", d.Id())) @@ -1027,6 +1032,7 @@ func resourceAlloydbClusterUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating Cluster %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("encryption_config") { @@ -1174,6 +1180,7 @@ func resourceAlloydbClusterUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1221,6 +1228,7 @@ func resourceAlloydbClusterDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) // Forcefully delete the secondary cluster and the dependent instances because deletion of secondary instance is not supported. 
if deletionPolicy := d.Get("deletion_policy"); deletionPolicy == "FORCE" { url = url + "?force=true" @@ -1235,6 +1243,7 @@ func resourceAlloydbClusterDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Cluster") diff --git a/google-beta/services/alloydb/resource_alloydb_cluster_sweeper.go b/google-beta/services/alloydb/resource_alloydb_cluster_sweeper.go index c766895b91..94e8831e6a 100644 --- a/google-beta/services/alloydb/resource_alloydb_cluster_sweeper.go +++ b/google-beta/services/alloydb/resource_alloydb_cluster_sweeper.go @@ -4,6 +4,7 @@ package alloydb import ( "context" + "fmt" "log" "strings" "testing" @@ -49,7 +50,8 @@ func testSweepAlloydbCluster(region string) error { }, } - listTemplate := strings.Split("https://alloydb.googleapis.com/v1beta/projects/{{project}}/locations/{{location}}/clusters", "?")[0] + // manual patch: use aggregated list instead of sweeper-specific location. This will clear secondary clusters. + listTemplate := strings.Split("https://alloydb.googleapis.com/v1beta/projects/{{project}}/locations/-/clusters", "?")[0] listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) if err != nil { log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) @@ -81,29 +83,19 @@ func testSweepAlloydbCluster(region string) error { nonPrefixCount := 0 for _, ri := range rl { obj := ri.(map[string]interface{}) - var name string - // Id detected in the delete URL, attempt to use id. 
- if obj["id"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["id"].(string)) - } else if obj["name"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) - } else { - log.Printf("[INFO][SWEEPER_LOG] %s resource name and id were nil", resourceName) - return nil - } + + // manual patch: use raw name for url instead of constructing it, so that resource locations are supplied through aggregated list + // manual patch: Using force=true ensures that we delete instances as well. + name := obj["name"].(string) + shortname := tpgresource.GetResourceNameFromSelfLink(name) // Skip resources that shouldn't be sweeped - if !sweeper.IsSweepableTestResource(name) { + if !sweeper.IsSweepableTestResource(shortname) { nonPrefixCount++ continue } - deleteTemplate := "https://alloydb.googleapis.com/v1beta/projects/{{project}}/locations/{{location}}/clusters/{{cluster_id}}" - deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) - return nil - } - deleteUrl = deleteUrl + name + "?force=true" + deleteTemplate := "https://alloydb.googleapis.com/v1beta/%s?force=true" + deleteUrl := fmt.Sprintf(deleteTemplate, name) // Don't wait on operations as we may have a lot to delete _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ @@ -116,7 +108,7 @@ func testSweepAlloydbCluster(region string) error { if err != nil { log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) } else { - log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) + log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", name, shortname) } } diff --git a/google-beta/services/alloydb/resource_alloydb_instance.go b/google-beta/services/alloydb/resource_alloydb_instance.go index 8c34808574..233a33c35d 100644 --- a/google-beta/services/alloydb/resource_alloydb_instance.go +++
b/google-beta/services/alloydb/resource_alloydb_instance.go @@ -20,6 +20,7 @@ package alloydb import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -180,6 +181,40 @@ Please refer to the field 'effective_labels' for all of the labels present on th }, }, }, + "network_config": { + Type: schema.TypeList, + Optional: true, + Description: `Instance level network configuration.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "authorized_external_networks": { + Type: schema.TypeList, + Optional: true, + Description: `A list of external networks authorized to access this instance. This +field is only allowed to be set when 'enable_public_ip' is set to +true.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cidr_range": { + Type: schema.TypeString, + Optional: true, + Description: `CIDR range for one authorized network of the instance.`, + }, + }, + }, + RequiredWith: []string{"network_config.0.enable_public_ip"}, + }, + "enable_public_ip": { + Type: schema.TypeBool, + Optional: true, + Description: `Enabling public ip for the instance. If a user wishes to disable this, +please also clear the list of the authorized external networks set on +the same instance.`, + }, + }, + }, + }, "query_insights_config": { Type: schema.TypeList, Computed: true, @@ -253,6 +288,13 @@ Please refer to the field 'effective_labels' for all of the labels present on th Computed: true, Description: `The name of the instance resource.`, }, + "public_ip_address": { + Type: schema.TypeString, + Computed: true, + Description: `The public IP addresses for the Instance. This is available ONLY when +networkConfig.enablePublicIp is set to true. 
This is the connection +endpoint for an end-user application.`, + }, "reconciling": { Type: schema.TypeBool, Computed: true, @@ -348,6 +390,12 @@ func resourceAlloydbInstanceCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("client_connection_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(clientConnectionConfigProp)) && (ok || !reflect.DeepEqual(v, clientConnectionConfigProp)) { obj["clientConnectionConfig"] = clientConnectionConfigProp } + networkConfigProp, err := expandAlloydbInstanceNetworkConfig(d.Get("network_config"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("network_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(networkConfigProp)) && (ok || !reflect.DeepEqual(v, networkConfigProp)) { + obj["networkConfig"] = networkConfigProp + } labelsProp, err := expandAlloydbInstanceEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err @@ -374,6 +422,21 @@ func resourceAlloydbInstanceCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + // Temporarily remove the enablePublicIp field if it is set to true since the + // API prohibits creating instances with public IP enabled. 
+ var nc map[string]interface{} + if obj["networkConfig"] == nil { + nc = make(map[string]interface{}) + } else { + nc = obj["networkConfig"].(map[string]interface{}) + } + if nc["enablePublicIp"] == true { + delete(nc, "enablePublicIp") + delete(nc, "authorizedExternalNetworks") + } + obj["networkConfig"] = nc + // Read the config and call createsecondary api if instance_type is SECONDARY if instanceType := d.Get("instance_type"); instanceType == "SECONDARY" { @@ -387,6 +450,7 @@ func resourceAlloydbInstanceCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Instance: %s", err) @@ -409,6 +473,51 @@ func resourceAlloydbInstanceCreate(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error waiting to create Instance: %s", err) } + // If enablePublicIp is set to true, then we must create the instance first with + // it disabled, then update it to enable public IP.
+ networkConfigProp, err = expandAlloydbInstanceNetworkConfig(d.Get("network_config"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("network_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(networkConfigProp)) && (ok || !reflect.DeepEqual(v, networkConfigProp)) { + nc := networkConfigProp.(map[string]interface{}) + if nc["enablePublicIp"] == true { + obj["networkConfig"] = networkConfigProp + + updateMask := []string{} + updateMask = append(updateMask, "networkConfig") + url, err := tpgresource.ReplaceVars(d, config, "{{AlloydbBasePath}}{{cluster}}/instances/{{instance_id}}") + if err != nil { + return err + } + url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) + if err != nil { + return err + } + + updateRes, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "PATCH", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutUpdate), + }) + if err != nil { + return fmt.Errorf("Error updating the Instance to enable public ip: %s", err) + } else { + log.Printf("[DEBUG] Finished updating Instance to enable public ip %q: %#v", d.Id(), updateRes) + } + err = AlloydbOperationWaitTime( + config, updateRes, project, "Updating Instance", userAgent, + d.Timeout(schema.TimeoutUpdate)) + + if err != nil { + return err + } + } + } + log.Printf("[DEBUG] Finished creating Instance %q: %#v", d.Id(), res) return resourceAlloydbInstanceRead(d, meta) @@ -433,12 +542,14 @@ func resourceAlloydbInstanceRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AlloydbInstance %q", d.Id())) 
@@ -495,6 +606,12 @@ func resourceAlloydbInstanceRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("client_connection_config", flattenAlloydbInstanceClientConnectionConfig(res["clientConnectionConfig"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } + if err := d.Set("network_config", flattenAlloydbInstanceNetworkConfig(res["networkConfig"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("public_ip_address", flattenAlloydbInstancePublicIpAddress(res["publicIpAddress"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } if err := d.Set("terraform_labels", flattenAlloydbInstanceTerraformLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } @@ -567,6 +684,12 @@ func resourceAlloydbInstanceUpdate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("client_connection_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, clientConnectionConfigProp)) { obj["clientConnectionConfig"] = clientConnectionConfigProp } + networkConfigProp, err := expandAlloydbInstanceNetworkConfig(d.Get("network_config"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("network_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, networkConfigProp)) { + obj["networkConfig"] = networkConfigProp + } labelsProp, err := expandAlloydbInstanceEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err @@ -586,6 +709,7 @@ func resourceAlloydbInstanceUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -620,6 +744,10 @@ func resourceAlloydbInstanceUpdate(d *schema.ResourceData, meta interface{}) err updateMask = 
append(updateMask, "clientConnectionConfig") } + if d.HasChange("network_config") { + updateMask = append(updateMask, "networkConfig") + } + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } @@ -649,6 +777,7 @@ func resourceAlloydbInstanceUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -691,6 +820,7 @@ func resourceAlloydbInstanceDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) // Read the config and avoid calling the delete API if the instance_type is SECONDARY and instead return nil // Returning nil is equivalent of returning a success message to the users // This is done because deletion of secondary instance is not supported @@ -719,6 +849,7 @@ func resourceAlloydbInstanceDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Instance") @@ -987,6 +1118,51 @@ func flattenAlloydbInstanceClientConnectionConfigSslConfigSslMode(v interface{}, return v } +func flattenAlloydbInstanceNetworkConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["authorized_external_networks"] = + flattenAlloydbInstanceNetworkConfigAuthorizedExternalNetworks(original["authorizedExternalNetworks"], d, config) + transformed["enable_public_ip"] = + flattenAlloydbInstanceNetworkConfigEnablePublicIp(original["enablePublicIp"], d, config) + return []interface{}{transformed} +} +func flattenAlloydbInstanceNetworkConfigAuthorizedExternalNetworks(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) 
interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "cidr_range": flattenAlloydbInstanceNetworkConfigAuthorizedExternalNetworksCidrRange(original["cidrRange"], d, config), + }) + } + return transformed +} +func flattenAlloydbInstanceNetworkConfigAuthorizedExternalNetworksCidrRange(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenAlloydbInstanceNetworkConfigEnablePublicIp(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenAlloydbInstancePublicIpAddress(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenAlloydbInstanceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v @@ -1192,6 +1368,62 @@ func expandAlloydbInstanceClientConnectionConfigSslConfigSslMode(v interface{}, return v, nil } +func expandAlloydbInstanceNetworkConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedAuthorizedExternalNetworks, err := expandAlloydbInstanceNetworkConfigAuthorizedExternalNetworks(original["authorized_external_networks"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedAuthorizedExternalNetworks); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["authorizedExternalNetworks"] = transformedAuthorizedExternalNetworks + } + + 
transformedEnablePublicIp, err := expandAlloydbInstanceNetworkConfigEnablePublicIp(original["enable_public_ip"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedEnablePublicIp); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["enablePublicIp"] = transformedEnablePublicIp + } + + return transformed, nil +} + +func expandAlloydbInstanceNetworkConfigAuthorizedExternalNetworks(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + req := make([]interface{}, 0, len(l)) + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedCidrRange, err := expandAlloydbInstanceNetworkConfigAuthorizedExternalNetworksCidrRange(original["cidr_range"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedCidrRange); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["cidrRange"] = transformedCidrRange + } + + req = append(req, transformed) + } + return req, nil +} + +func expandAlloydbInstanceNetworkConfigAuthorizedExternalNetworksCidrRange(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandAlloydbInstanceNetworkConfigEnablePublicIp(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandAlloydbInstanceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil diff --git a/google-beta/services/alloydb/resource_alloydb_instance_test.go b/google-beta/services/alloydb/resource_alloydb_instance_test.go index a6c624b26a..5448a057cb 100644 --- a/google-beta/services/alloydb/resource_alloydb_instance_test.go +++ 
b/google-beta/services/alloydb/resource_alloydb_instance_test.go @@ -566,3 +566,143 @@ data "google_compute_network" "default" { } `, context) } + +// This test passes if an instance can be created with public IP enabled, +// and update the authorized external networks. +func TestAccAlloydbInstance_networkConfig(t *testing.T) { + t.Parallel() + + suffix := acctest.RandString(t, 10) + networkName := acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydbinstance-networkconfig") + + context1 := map[string]interface{}{ + "random_suffix": suffix, + "network_name": networkName, + "enable_public_ip": true, + "authorized_external_networks": "", + } + context2 := map[string]interface{}{ + "random_suffix": suffix, + "network_name": networkName, + "enable_public_ip": true, + "authorized_external_networks": ` + authorized_external_networks { + cidr_range = "8.8.8.8/30" + } + authorized_external_networks { + cidr_range = "8.8.4.4/30" + } + `, + } + context3 := map[string]interface{}{ + "random_suffix": suffix, + "network_name": networkName, + "enable_public_ip": true, + "cidr_range": "8.8.8.8/30", + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckAlloydbInstanceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccAlloydbInstance_networkConfig(context1), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_alloydb_instance.default", "network_config.0.enable_public_ip", "true"), + ), + }, + { + ResourceName: "google_alloydb_instance.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"cluster", "instance_id", "reconciling", "update_time"}, + }, + { + Config: testAccAlloydbInstance_networkConfig(context2), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_alloydb_instance.default", "network_config.0.enable_public_ip", 
"true"), + resource.TestCheckResourceAttr("google_alloydb_instance.default", "network_config.0.authorized_external_networks.0.cidr_range", "8.8.8.8/30"), + resource.TestCheckResourceAttr("google_alloydb_instance.default", "network_config.0.authorized_external_networks.1.cidr_range", "8.8.4.4/30"), + resource.TestCheckResourceAttr("google_alloydb_instance.default", "network_config.0.authorized_external_networks.#", "2"), + ), + }, + { + ResourceName: "google_alloydb_instance.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"cluster", "instance_id", "reconciling", "update_time"}, + }, + { + Config: testAccAlloydbInstance_networkConfigWithAnAuthNetwork(context3), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_alloydb_instance.default", "network_config.0.enable_public_ip", "true"), + resource.TestCheckResourceAttr("google_alloydb_instance.default", "network_config.0.authorized_external_networks.0.cidr_range", "8.8.8.8/30"), + resource.TestCheckResourceAttr("google_alloydb_instance.default", "network_config.0.authorized_external_networks.#", "1"), + ), + }, + { + ResourceName: "google_alloydb_instance.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"cluster", "instance_id", "reconciling", "update_time"}, + }, + }, + }) +} + +func testAccAlloydbInstance_networkConfig(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_alloydb_instance" "default" { + cluster = google_alloydb_cluster.default.name + instance_id = "tf-test-alloydb-instance%{random_suffix}" + instance_type = "PRIMARY" + + network_config { + enable_public_ip = %{enable_public_ip} + %{authorized_external_networks} + } +} + +resource "google_alloydb_cluster" "default" { + cluster_id = "tf-test-alloydb-cluster%{random_suffix}" + location = "us-central1" + network = data.google_compute_network.default.id +} + +data "google_project" "project" {} + +data 
"google_compute_network" "default" { + name = "%{network_name}" +} +`, context) +} + +func testAccAlloydbInstance_networkConfigWithAnAuthNetwork(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_alloydb_instance" "default" { + cluster = google_alloydb_cluster.default.name + instance_id = "tf-test-alloydb-instance%{random_suffix}" + instance_type = "PRIMARY" + + network_config { + enable_public_ip = %{enable_public_ip} + authorized_external_networks { + cidr_range = "%{cidr_range}" + } + } +} + +resource "google_alloydb_cluster" "default" { + cluster_id = "tf-test-alloydb-cluster%{random_suffix}" + location = "us-central1" + network = data.google_compute_network.default.id +} + +data "google_project" "project" {} + +data "google_compute_network" "default" { + name = "%{network_name}" +} +`, context) +} diff --git a/google-beta/services/alloydb/resource_alloydb_secondary_cluster_test.go b/google-beta/services/alloydb/resource_alloydb_secondary_cluster_test.go index a8b05f17b0..9c263f72c7 100644 --- a/google-beta/services/alloydb/resource_alloydb_secondary_cluster_test.go +++ b/google-beta/services/alloydb/resource_alloydb_secondary_cluster_test.go @@ -357,107 +357,6 @@ data "google_compute_network" "default" { `, context) } -// Test if adding automatedBackupPolicy throws an error as it can not be enabled on secondary cluster -func TestAccAlloydbCluster_secondaryClusterAddAutomatedBackupPolicy(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydbinstance-network-config-1"), - "random_suffix": acctest.RandString(t, 10), - "hour": 23, - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckAlloydbClusterDestroyProducer(t), - Steps: []resource.TestStep{ - { - Config: 
testAccAlloydbCluster_secondaryClusterMandatoryFields(context), - }, - { - ResourceName: "google_alloydb_cluster.secondary", - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"initial_user", "restore_backup_source", "restore_continuous_backup_source", "cluster_id", "location", "labels", "annotations", "terraform_labels", "reconciling"}, - }, - { - // Invalid input check - can not add automated backup policy to a secondary cluster - Config: testAccAlloydbCluster_secondaryClusterAddAutomatedBackupPolicy(context), - ExpectError: regexp.MustCompile("cannot enable automated backups on secondary cluster until it is promoted"), - }, - }, - }) -} - -func testAccAlloydbCluster_secondaryClusterAddAutomatedBackupPolicy(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_alloydb_cluster" "primary" { - cluster_id = "tf-test-alloydb-primary-cluster%{random_suffix}" - location = "us-central1" - network = data.google_compute_network.default.id -} - -resource "google_alloydb_instance" "primary" { - cluster = google_alloydb_cluster.primary.name - instance_id = "tf-test-alloydb-primary-instance%{random_suffix}" - instance_type = "PRIMARY" - - machine_config { - cpu_count = 2 - } -} - -resource "google_alloydb_cluster" "secondary" { - cluster_id = "tf-test-alloydb-secondary-cluster%{random_suffix}" - location = "us-east1" - network = data.google_compute_network.default.id - cluster_type = "SECONDARY" - - continuous_backup_config { - enabled = false - } - - secondary_config { - primary_cluster_name = google_alloydb_cluster.primary.name - } - - automated_backup_policy { - location = "us-central1" - backup_window = "1800s" - enabled = true - - weekly_schedule { - days_of_week = ["MONDAY"] - - start_times { - hours = %{hour} - minutes = 0 - seconds = 0 - nanos = 0 - } - } - - quantity_based_retention { - count = 1 - } - - labels = { - test = "tf-test-alloydb-secondary-cluster%{random_suffix}" - } - } - - depends_on = 
[google_alloydb_instance.primary] -} - -data "google_project" "project" {} - -data "google_compute_network" "default" { - name = "%{network_name}" -} -`, context) -} - func TestAccAlloydbCluster_secondaryClusterUsingCMEK(t *testing.T) { t.Parallel() diff --git a/google-beta/services/alloydb/resource_alloydb_user.go b/google-beta/services/alloydb/resource_alloydb_user.go index 9615d7ba35..fa160daa9a 100644 --- a/google-beta/services/alloydb/resource_alloydb_user.go +++ b/google-beta/services/alloydb/resource_alloydb_user.go @@ -20,6 +20,7 @@ package alloydb import ( "fmt" "log" + "net/http" "reflect" "time" @@ -132,6 +133,7 @@ func resourceAlloydbUserCreate(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -140,6 +142,7 @@ func resourceAlloydbUserCreate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating User: %s", err) @@ -179,12 +182,14 @@ func resourceAlloydbUserRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AlloydbUser %q", d.Id())) @@ -238,6 +243,7 @@ func resourceAlloydbUserUpdate(d *schema.ResourceData, meta interface{}) error { } log.Printf("[DEBUG] Updating User %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -252,6 +258,7 @@ func resourceAlloydbUserUpdate(d *schema.ResourceData, meta interface{}) 
error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -284,6 +291,8 @@ func resourceAlloydbUserDelete(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting User %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -293,6 +302,7 @@ func resourceAlloydbUserDelete(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "User") diff --git a/google-beta/services/apigateway/resource_api_gateway_api.go b/google-beta/services/apigateway/resource_api_gateway_api.go index 23e8e9b1c2..a0a47d9830 100644 --- a/google-beta/services/apigateway/resource_api_gateway_api.go +++ b/google-beta/services/apigateway/resource_api_gateway_api.go @@ -20,6 +20,7 @@ package apigateway import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -164,6 +165,7 @@ func resourceApiGatewayApiCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -172,6 +174,7 @@ func resourceApiGatewayApiCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Api: %s", err) @@ -234,12 +237,14 @@ func resourceApiGatewayApiRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApiGatewayApi %q", d.Id())) @@ -309,6 +314,7 @@ func resourceApiGatewayApiUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Api %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -340,6 +346,7 @@ func resourceApiGatewayApiUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -387,6 +394,8 @@ func resourceApiGatewayApiDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Api %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -396,6 +405,7 @@ func resourceApiGatewayApiDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Api") diff --git a/google-beta/services/apigateway/resource_api_gateway_api_config.go b/google-beta/services/apigateway/resource_api_gateway_api_config.go index 331de5288e..b2c1f77bf5 100644 --- a/google-beta/services/apigateway/resource_api_gateway_api_config.go +++ b/google-beta/services/apigateway/resource_api_gateway_api_config.go @@ -20,6 +20,7 @@ package apigateway import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -341,6 +342,7 @@ func resourceApiGatewayApiConfigCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -349,6 +351,7 @@ func resourceApiGatewayApiConfigCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err 
!= nil { return fmt.Errorf("Error creating ApiConfig: %s", err) @@ -411,12 +414,14 @@ func resourceApiGatewayApiConfigRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApiGatewayApiConfig %q", d.Id())) @@ -512,6 +517,7 @@ func resourceApiGatewayApiConfigUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating ApiConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -555,6 +561,7 @@ func resourceApiGatewayApiConfigUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -602,6 +609,8 @@ func resourceApiGatewayApiConfigDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ApiConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -611,6 +620,7 @@ func resourceApiGatewayApiConfigDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ApiConfig") diff --git a/google-beta/services/apigateway/resource_api_gateway_gateway.go b/google-beta/services/apigateway/resource_api_gateway_gateway.go index e476ce318e..45875c5374 100644 --- a/google-beta/services/apigateway/resource_api_gateway_gateway.go +++ b/google-beta/services/apigateway/resource_api_gateway_gateway.go @@ -20,6 +20,7 @@ package apigateway import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -171,6 +172,7 @@ 
func resourceApiGatewayGatewayCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -179,6 +181,7 @@ func resourceApiGatewayGatewayCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Gateway: %s", err) @@ -241,12 +244,14 @@ func resourceApiGatewayGatewayRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApiGatewayGateway %q", d.Id())) @@ -322,6 +327,7 @@ func resourceApiGatewayGatewayUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Gateway %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -357,6 +363,7 @@ func resourceApiGatewayGatewayUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -404,6 +411,8 @@ func resourceApiGatewayGatewayDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Gateway %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -413,6 +422,7 @@ func resourceApiGatewayGatewayDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Gateway") diff 
--git a/google-beta/services/apigee/resource_apigee_addons_config.go b/google-beta/services/apigee/resource_apigee_addons_config.go index 6c043b6680..309ac510a6 100644 --- a/google-beta/services/apigee/resource_apigee_addons_config.go +++ b/google-beta/services/apigee/resource_apigee_addons_config.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -182,6 +183,7 @@ func resourceApigeeAddonsConfigCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -190,6 +192,7 @@ func resourceApigeeAddonsConfigCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AddonsConfig: %s", err) @@ -236,12 +239,14 @@ func resourceApigeeAddonsConfigRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeAddonsConfig %q", d.Id())) @@ -277,6 +282,7 @@ func resourceApigeeAddonsConfigUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating AddonsConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -291,6 +297,7 @@ func resourceApigeeAddonsConfigUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -331,6 +338,8 @@ func resourceApigeeAddonsConfigDelete(d 
*schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AddonsConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -340,6 +349,7 @@ func resourceApigeeAddonsConfigDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AddonsConfig") diff --git a/google-beta/services/apigee/resource_apigee_endpoint_attachment.go b/google-beta/services/apigee/resource_apigee_endpoint_attachment.go index e914230569..0bd5b8f847 100644 --- a/google-beta/services/apigee/resource_apigee_endpoint_attachment.go +++ b/google-beta/services/apigee/resource_apigee_endpoint_attachment.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -126,6 +127,7 @@ func resourceApigeeEndpointAttachmentCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -134,6 +136,7 @@ func resourceApigeeEndpointAttachmentCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EndpointAttachment: %s", err) @@ -194,12 +197,14 @@ func resourceApigeeEndpointAttachmentRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeEndpointAttachment %q", d.Id())) @@ -245,6 +250,8 @@ func 
resourceApigeeEndpointAttachmentDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EndpointAttachment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -254,6 +261,7 @@ func resourceApigeeEndpointAttachmentDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EndpointAttachment") diff --git a/google-beta/services/apigee/resource_apigee_env_keystore.go b/google-beta/services/apigee/resource_apigee_env_keystore.go index 86cea48045..a8711b8685 100644 --- a/google-beta/services/apigee/resource_apigee_env_keystore.go +++ b/google-beta/services/apigee/resource_apigee_env_keystore.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "time" @@ -99,6 +100,7 @@ func resourceApigeeEnvKeystoreCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -107,6 +109,7 @@ func resourceApigeeEnvKeystoreCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EnvKeystore: %s", err) @@ -143,12 +146,14 @@ func resourceApigeeEnvKeystoreRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeEnvKeystore %q", d.Id())) @@ -185,6 +190,8 @@ func resourceApigeeEnvKeystoreDelete(d 
*schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EnvKeystore %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -194,6 +201,7 @@ func resourceApigeeEnvKeystoreDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EnvKeystore") diff --git a/google-beta/services/apigee/resource_apigee_env_references.go b/google-beta/services/apigee/resource_apigee_env_references.go index e32c8114a8..5dab2eb2e8 100644 --- a/google-beta/services/apigee/resource_apigee_env_references.go +++ b/google-beta/services/apigee/resource_apigee_env_references.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "time" @@ -127,6 +128,7 @@ func resourceApigeeEnvReferencesCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -135,6 +137,7 @@ func resourceApigeeEnvReferencesCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EnvReferences: %s", err) @@ -171,12 +174,14 @@ func resourceApigeeEnvReferencesRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeEnvReferences %q", d.Id())) @@ -219,6 +224,8 @@ func resourceApigeeEnvReferencesDelete(d *schema.ResourceData, meta interface{}) 
billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EnvReferences %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -228,6 +235,7 @@ func resourceApigeeEnvReferencesDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EnvReferences") diff --git a/google-beta/services/apigee/resource_apigee_envgroup.go b/google-beta/services/apigee/resource_apigee_envgroup.go index 30b5a02bfa..8b4c2b412c 100644 --- a/google-beta/services/apigee/resource_apigee_envgroup.go +++ b/google-beta/services/apigee/resource_apigee_envgroup.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -108,6 +109,7 @@ func resourceApigeeEnvgroupCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -116,6 +118,7 @@ func resourceApigeeEnvgroupCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Envgroup: %s", err) @@ -176,12 +179,14 @@ func resourceApigeeEnvgroupRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeEnvgroup %q", d.Id())) @@ -220,6 +225,7 @@ func resourceApigeeEnvgroupUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating Envgroup %q: %#v", d.Id(), obj) 
+ headers := make(http.Header) updateMask := []string{} if d.HasChange("hostnames") { @@ -247,6 +253,7 @@ func resourceApigeeEnvgroupUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -288,6 +295,8 @@ func resourceApigeeEnvgroupDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Envgroup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -297,6 +306,7 @@ func resourceApigeeEnvgroupDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Envgroup") diff --git a/google-beta/services/apigee/resource_apigee_envgroup_attachment.go b/google-beta/services/apigee/resource_apigee_envgroup_attachment.go index a11b62395a..7ac7fc2d5c 100644 --- a/google-beta/services/apigee/resource_apigee_envgroup_attachment.go +++ b/google-beta/services/apigee/resource_apigee_envgroup_attachment.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "time" @@ -96,6 +97,7 @@ func resourceApigeeEnvgroupAttachmentCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -104,6 +106,7 @@ func resourceApigeeEnvgroupAttachmentCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EnvgroupAttachment: %s", err) @@ -164,12 +167,14 @@ func resourceApigeeEnvgroupAttachmentRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeEnvgroupAttachment %q", d.Id())) @@ -206,6 +211,8 @@ func resourceApigeeEnvgroupAttachmentDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EnvgroupAttachment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -215,6 +222,7 @@ func resourceApigeeEnvgroupAttachmentDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EnvgroupAttachment") diff --git a/google-beta/services/apigee/resource_apigee_environment.go b/google-beta/services/apigee/resource_apigee_environment.go index dbba6b1c8a..165cbcb944 100644 --- a/google-beta/services/apigee/resource_apigee_environment.go +++ b/google-beta/services/apigee/resource_apigee_environment.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -97,6 +98,11 @@ Creating, updating, or deleting target servers. Possible values: ["DEPLOYMENT_TY ForceNew: true, Description: `Display name of the environment.`, }, + "forward_proxy_uri": { + Type: schema.TypeString, + Optional: true, + Description: `Optional. URI of the forward proxy to be applied to the runtime instances in this environment. Must be in the format of {scheme}://{hostname}:{port}. 
Note that the scheme must be one of "http" or "https", and the port must be supplied.`, + }, "node_config": { Type: schema.TypeList, Computed: true, @@ -193,6 +199,12 @@ func resourceApigeeEnvironmentCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("type"); !tpgresource.IsEmptyValue(reflect.ValueOf(typeProp)) && (ok || !reflect.DeepEqual(v, typeProp)) { obj["type"] = typeProp } + forwardProxyUriProp, err := expandApigeeEnvironmentForwardProxyUri(d.Get("forward_proxy_uri"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("forward_proxy_uri"); !tpgresource.IsEmptyValue(reflect.ValueOf(forwardProxyUriProp)) && (ok || !reflect.DeepEqual(v, forwardProxyUriProp)) { + obj["forwardProxyUri"] = forwardProxyUriProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ApigeeBasePath}}{{org_id}}/environments") if err != nil { @@ -207,6 +219,7 @@ func resourceApigeeEnvironmentCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -215,6 +228,7 @@ func resourceApigeeEnvironmentCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Environment: %s", err) @@ -275,12 +289,14 @@ func resourceApigeeEnvironmentRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeEnvironment %q", d.Id())) @@ -307,6 +323,9 @@ func resourceApigeeEnvironmentRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("type", 
flattenApigeeEnvironmentType(res["type"], d, config)); err != nil { return fmt.Errorf("Error reading Environment: %s", err) } + if err := d.Set("forward_proxy_uri", flattenApigeeEnvironmentForwardProxyUri(res["forwardProxyUri"], d, config)); err != nil { + return fmt.Errorf("Error reading Environment: %s", err) + } return nil } @@ -333,6 +352,12 @@ func resourceApigeeEnvironmentUpdate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("type"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, typeProp)) { obj["type"] = typeProp } + forwardProxyUriProp, err := expandApigeeEnvironmentForwardProxyUri(d.Get("forward_proxy_uri"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("forward_proxy_uri"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, forwardProxyUriProp)) { + obj["forwardProxyUri"] = forwardProxyUriProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ApigeeBasePath}}{{org_id}}/environments/{{name}}") if err != nil { @@ -340,6 +365,7 @@ func resourceApigeeEnvironmentUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Environment %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("node_config") { @@ -349,6 +375,10 @@ func resourceApigeeEnvironmentUpdate(d *schema.ResourceData, meta interface{}) e if d.HasChange("type") { updateMask = append(updateMask, "type") } + + if d.HasChange("forward_proxy_uri") { + updateMask = append(updateMask, "forwardProxyUri") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -371,6 +401,7 @@ func resourceApigeeEnvironmentUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { 
@@ -412,6 +443,8 @@ func resourceApigeeEnvironmentDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Environment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -421,6 +454,7 @@ func resourceApigeeEnvironmentDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Environment") @@ -536,6 +570,10 @@ func flattenApigeeEnvironmentType(v interface{}, d *schema.ResourceData, config return v } +func flattenApigeeEnvironmentForwardProxyUri(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandApigeeEnvironmentName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -604,3 +642,7 @@ func expandApigeeEnvironmentNodeConfigCurrentAggregateNodeCount(v interface{}, d func expandApigeeEnvironmentType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandApigeeEnvironmentForwardProxyUri(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} diff --git a/google-beta/services/apigee/resource_apigee_environment_generated_test.go b/google-beta/services/apigee/resource_apigee_environment_generated_test.go index c4e858ec62..149415f92f 100644 --- a/google-beta/services/apigee/resource_apigee_environment_generated_test.go +++ b/google-beta/services/apigee/resource_apigee_environment_generated_test.go @@ -223,7 +223,7 @@ resource "google_apigee_environment" "apigee_environment" { `, context) } -func TestAccApigeeEnvironment_apigeeEnvironmentTypeTestExample(t *testing.T) { +func 
TestAccApigeeEnvironment_apigeeEnvironmentPatchUpdateTestExample(t *testing.T) { acctest.SkipIfVcr(t) t.Parallel() @@ -239,7 +239,7 @@ func TestAccApigeeEnvironment_apigeeEnvironmentTypeTestExample(t *testing.T) { CheckDestroy: testAccCheckApigeeEnvironmentDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccApigeeEnvironment_apigeeEnvironmentTypeTestExample(context), + Config: testAccApigeeEnvironment_apigeeEnvironmentPatchUpdateTestExample(context), }, { ResourceName: "google_apigee_environment.apigee_environment", @@ -251,7 +251,7 @@ func TestAccApigeeEnvironment_apigeeEnvironmentTypeTestExample(t *testing.T) { }) } -func testAccApigeeEnvironment_apigeeEnvironmentTypeTestExample(context map[string]interface{}) string { +func testAccApigeeEnvironment_apigeeEnvironmentPatchUpdateTestExample(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_project" "project" { provider = google-beta @@ -374,6 +374,7 @@ resource "google_apigee_environment" "apigee_environment" { description = "Apigee Environment" display_name = "tf-test%{random_suffix}" type = "COMPREHENSIVE" + forward_proxy_uri = "http://test:123" } `, context) } diff --git a/google-beta/services/apigee/resource_apigee_environment_type_test.go b/google-beta/services/apigee/resource_apigee_environment_type_test.go index c1fccbfef0..e979abe9d6 100644 --- a/google-beta/services/apigee/resource_apigee_environment_type_test.go +++ b/google-beta/services/apigee/resource_apigee_environment_type_test.go @@ -10,7 +10,7 @@ import ( "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" ) -func TestAccApigeeEnvironment_apigeeEnvironmentTypeTestExampleUpdate(t *testing.T) { +func TestAccApigeeEnvironment_apigeeEnvironmentPatchUpdateTestExampleUpdate(t *testing.T) { acctest.SkipIfVcr(t) t.Parallel() @@ -26,7 +26,7 @@ func TestAccApigeeEnvironment_apigeeEnvironmentTypeTestExampleUpdate(t *testing. 
CheckDestroy: testAccCheckApigeeEnvironmentDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccApigeeEnvironment_apigeeEnvironmentTypeTestExample(context), + Config: testAccApigeeEnvironment_apigeeEnvironmentPatchUpdateTestExample(context), }, { ResourceName: "google_apigee_environment.apigee_environment", @@ -35,7 +35,7 @@ func TestAccApigeeEnvironment_apigeeEnvironmentTypeTestExampleUpdate(t *testing. ImportStateVerifyIgnore: []string{"org_id"}, }, { - Config: testAccApigeeEnvironment_apigeeEnvironmentTypeTestExampleUpdate(context), + Config: testAccApigeeEnvironment_apigeeEnvironmentPatchUpdateTestExampleUpdate(context), }, { ResourceName: "google_apigee_environment.apigee_environment", @@ -47,7 +47,7 @@ func TestAccApigeeEnvironment_apigeeEnvironmentTypeTestExampleUpdate(t *testing. }) } -func testAccApigeeEnvironment_apigeeEnvironmentTypeTestExampleUpdate(context map[string]interface{}) string { +func testAccApigeeEnvironment_apigeeEnvironmentPatchUpdateTestExampleUpdate(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_project" "project" { provider = google-beta @@ -137,15 +137,13 @@ resource "google_project_service_identity" "apigee_sa" { service = google_project_service.apigee.service } -resource "google_kms_crypto_key_iam_binding" "apigee_sa_keyuser" { +resource "google_kms_crypto_key_iam_member" "apigee_sa_keyuser" { provider = google-beta crypto_key_id = google_kms_crypto_key.apigee_key.id role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" - members = [ - "serviceAccount:${google_project_service_identity.apigee_sa.email}", - ] + member = "serviceAccount:${google_project_service_identity.apigee_sa.email}" } resource "google_apigee_organization" "apigee_org" { @@ -160,7 +158,7 @@ resource "google_apigee_organization" "apigee_org" { depends_on = [ google_service_networking_connection.apigee_vpc_connection, google_project_service.apigee, - google_kms_crypto_key_iam_binding.apigee_sa_keyuser, + 
google_kms_crypto_key_iam_member.apigee_sa_keyuser, ] } @@ -172,6 +170,7 @@ resource "google_apigee_environment" "apigee_environment" { description = "Apigee Environment" display_name = "tf-test%{random_suffix}" type = "INTERMEDIATE" + forward_proxy_uri = "http://test:456" } `, context) } diff --git a/google-beta/services/apigee/resource_apigee_instance.go b/google-beta/services/apigee/resource_apigee_instance.go index a467d741ca..caf99df03e 100644 --- a/google-beta/services/apigee/resource_apigee_instance.go +++ b/google-beta/services/apigee/resource_apigee_instance.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -242,6 +243,7 @@ func resourceApigeeInstanceCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -250,6 +252,7 @@ func resourceApigeeInstanceCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsApigeeRetryableError}, }) if err != nil { @@ -311,12 +314,14 @@ func resourceApigeeInstanceRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsApigeeRetryableError}, }) if err != nil { @@ -385,6 +390,8 @@ func resourceApigeeInstanceDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, 
@@ -394,6 +401,7 @@ func resourceApigeeInstanceDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsApigeeRetryableError}, }) if err != nil { diff --git a/google-beta/services/apigee/resource_apigee_instance_attachment.go b/google-beta/services/apigee/resource_apigee_instance_attachment.go index 38c7a9af22..9518b11077 100644 --- a/google-beta/services/apigee/resource_apigee_instance_attachment.go +++ b/google-beta/services/apigee/resource_apigee_instance_attachment.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "time" @@ -103,6 +104,7 @@ func resourceApigeeInstanceAttachmentCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -111,6 +113,7 @@ func resourceApigeeInstanceAttachmentCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating InstanceAttachment: %s", err) @@ -171,12 +174,14 @@ func resourceApigeeInstanceAttachmentRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeInstanceAttachment %q", d.Id())) @@ -220,6 +225,8 @@ func resourceApigeeInstanceAttachmentDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting InstanceAttachment %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -229,6 +236,7 @@ func resourceApigeeInstanceAttachmentDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "InstanceAttachment") diff --git a/google-beta/services/apigee/resource_apigee_keystores_aliases_self_signed_cert.go b/google-beta/services/apigee/resource_apigee_keystores_aliases_self_signed_cert.go index 262a74e0dd..5469a58ac4 100644 --- a/google-beta/services/apigee/resource_apigee_keystores_aliases_self_signed_cert.go +++ b/google-beta/services/apigee/resource_apigee_keystores_aliases_self_signed_cert.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "time" @@ -304,6 +305,7 @@ func resourceApigeeKeystoresAliasesSelfSignedCertCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -312,6 +314,7 @@ func resourceApigeeKeystoresAliasesSelfSignedCertCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating KeystoresAliasesSelfSignedCert: %s", err) @@ -348,12 +351,14 @@ func resourceApigeeKeystoresAliasesSelfSignedCertRead(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeKeystoresAliasesSelfSignedCert %q", d.Id())) @@ -396,6 +401,8 @@ func resourceApigeeKeystoresAliasesSelfSignedCertDelete(d *schema.ResourceData, billingProject = bp } + headers := 
make(http.Header) + log.Printf("[DEBUG] Deleting KeystoresAliasesSelfSignedCert %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -405,6 +412,7 @@ func resourceApigeeKeystoresAliasesSelfSignedCertDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "KeystoresAliasesSelfSignedCert") diff --git a/google-beta/services/apigee/resource_apigee_nat_address.go b/google-beta/services/apigee/resource_apigee_nat_address.go index 82560c30b8..33041feec9 100644 --- a/google-beta/services/apigee/resource_apigee_nat_address.go +++ b/google-beta/services/apigee/resource_apigee_nat_address.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "time" @@ -101,6 +102,7 @@ func resourceApigeeNatAddressCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -109,6 +111,7 @@ func resourceApigeeNatAddressCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NatAddress: %s", err) @@ -169,12 +172,14 @@ func resourceApigeeNatAddressRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeNatAddress %q", d.Id())) @@ -214,6 +219,8 @@ func resourceApigeeNatAddressDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + 
log.Printf("[DEBUG] Deleting NatAddress %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -223,6 +230,7 @@ func resourceApigeeNatAddressDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NatAddress") diff --git a/google-beta/services/apigee/resource_apigee_organization.go b/google-beta/services/apigee/resource_apigee_organization.go index 46afc933b2..fa5900d0ea 100644 --- a/google-beta/services/apigee/resource_apigee_organization.go +++ b/google-beta/services/apigee/resource_apigee_organization.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -61,6 +62,20 @@ func ResourceApigeeOrganization() *schema.Resource { ForceNew: true, Description: `Primary GCP region for analytics data storage. For valid values, see [Create an Apigee organization](https://cloud.google.com/apigee/docs/api-platform/get-started/create-org).`, }, + "api_consumer_data_encryption_key_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Cloud KMS key name used for encrypting API consumer data.`, + }, + "api_consumer_data_location": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `This field is needed only for customers using non-default data residency regions. +Apigee stores some control plane data only in a single region. +This field determines which single region Apigee should use.`, + }, "authorized_network": { Type: schema.TypeString, Optional: true, @@ -75,6 +90,13 @@ Valid only when 'RuntimeType' is set to CLOUD. The value can be updated only whe ForceNew: true, Description: `Billing type of the Apigee organization.
See [Apigee pricing](https://cloud.google.com/apigee/pricing).`, }, + "control_plane_encryption_key_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Cloud KMS key name used for encrypting control plane data that is stored in a multi region. +Only used for the data residency region "US" or "EU".`, + }, "description": { Type: schema.TypeString, Optional: true, @@ -204,6 +226,24 @@ func resourceApigeeOrganizationCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("analytics_region"); !tpgresource.IsEmptyValue(reflect.ValueOf(analyticsRegionProp)) && (ok || !reflect.DeepEqual(v, analyticsRegionProp)) { obj["analyticsRegion"] = analyticsRegionProp } + apiConsumerDataLocationProp, err := expandApigeeOrganizationApiConsumerDataLocation(d.Get("api_consumer_data_location"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("api_consumer_data_location"); !tpgresource.IsEmptyValue(reflect.ValueOf(apiConsumerDataLocationProp)) && (ok || !reflect.DeepEqual(v, apiConsumerDataLocationProp)) { + obj["apiConsumerDataLocation"] = apiConsumerDataLocationProp + } + apiConsumerDataEncryptionKeyNameProp, err := expandApigeeOrganizationApiConsumerDataEncryptionKeyName(d.Get("api_consumer_data_encryption_key_name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("api_consumer_data_encryption_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(apiConsumerDataEncryptionKeyNameProp)) && (ok || !reflect.DeepEqual(v, apiConsumerDataEncryptionKeyNameProp)) { + obj["apiConsumerDataEncryptionKeyName"] = apiConsumerDataEncryptionKeyNameProp + } + controlPlaneEncryptionKeyNameProp, err := expandApigeeOrganizationControlPlaneEncryptionKeyName(d.Get("control_plane_encryption_key_name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("control_plane_encryption_key_name"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(controlPlaneEncryptionKeyNameProp)) && (ok || !reflect.DeepEqual(v, controlPlaneEncryptionKeyNameProp)) { + obj["controlPlaneEncryptionKeyName"] = controlPlaneEncryptionKeyNameProp + } authorizedNetworkProp, err := expandApigeeOrganizationAuthorizedNetwork(d.Get("authorized_network"), d, config) if err != nil { return err @@ -259,6 +299,7 @@ func resourceApigeeOrganizationCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -267,6 +308,7 @@ func resourceApigeeOrganizationCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Organization: %s", err) @@ -327,12 +369,14 @@ func resourceApigeeOrganizationRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeOrganization %q", d.Id())) @@ -350,6 +394,15 @@ func resourceApigeeOrganizationRead(d *schema.ResourceData, meta interface{}) er if err := d.Set("analytics_region", flattenApigeeOrganizationAnalyticsRegion(res["analyticsRegion"], d, config)); err != nil { return fmt.Errorf("Error reading Organization: %s", err) } + if err := d.Set("api_consumer_data_location", flattenApigeeOrganizationApiConsumerDataLocation(res["apiConsumerDataLocation"], d, config)); err != nil { + return fmt.Errorf("Error reading Organization: %s", err) + } + if err := d.Set("api_consumer_data_encryption_key_name", 
flattenApigeeOrganizationApiConsumerDataEncryptionKeyName(res["apiConsumerDataEncryptionKeyName"], d, config)); err != nil { + return fmt.Errorf("Error reading Organization: %s", err) + } + if err := d.Set("control_plane_encryption_key_name", flattenApigeeOrganizationControlPlaneEncryptionKeyName(res["controlPlaneEncryptionKeyName"], d, config)); err != nil { + return fmt.Errorf("Error reading Organization: %s", err) + } if err := d.Set("authorized_network", flattenApigeeOrganizationAuthorizedNetwork(res["authorizedNetwork"], d, config)); err != nil { return fmt.Errorf("Error reading Organization: %s", err) } @@ -409,6 +462,24 @@ func resourceApigeeOrganizationUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("analytics_region"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, analyticsRegionProp)) { obj["analyticsRegion"] = analyticsRegionProp } + apiConsumerDataLocationProp, err := expandApigeeOrganizationApiConsumerDataLocation(d.Get("api_consumer_data_location"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("api_consumer_data_location"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, apiConsumerDataLocationProp)) { + obj["apiConsumerDataLocation"] = apiConsumerDataLocationProp + } + apiConsumerDataEncryptionKeyNameProp, err := expandApigeeOrganizationApiConsumerDataEncryptionKeyName(d.Get("api_consumer_data_encryption_key_name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("api_consumer_data_encryption_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, apiConsumerDataEncryptionKeyNameProp)) { + obj["apiConsumerDataEncryptionKeyName"] = apiConsumerDataEncryptionKeyNameProp + } + controlPlaneEncryptionKeyNameProp, err := expandApigeeOrganizationControlPlaneEncryptionKeyName(d.Get("control_plane_encryption_key_name"), d, config) + if err != nil { + return err + } else if 
v, ok := d.GetOkExists("control_plane_encryption_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, controlPlaneEncryptionKeyNameProp)) { + obj["controlPlaneEncryptionKeyName"] = controlPlaneEncryptionKeyNameProp + } authorizedNetworkProp, err := expandApigeeOrganizationAuthorizedNetwork(d.Get("authorized_network"), d, config) if err != nil { return err @@ -457,6 +528,7 @@ func resourceApigeeOrganizationUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Organization %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -471,6 +543,7 @@ func resourceApigeeOrganizationUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -511,6 +584,8 @@ func resourceApigeeOrganizationDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Organization %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -520,6 +595,7 @@ func resourceApigeeOrganizationDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Organization") @@ -588,6 +664,18 @@ func flattenApigeeOrganizationAnalyticsRegion(v interface{}, d *schema.ResourceD return v } +func flattenApigeeOrganizationApiConsumerDataLocation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenApigeeOrganizationApiConsumerDataEncryptionKeyName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func 
flattenApigeeOrganizationControlPlaneEncryptionKeyName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenApigeeOrganizationAuthorizedNetwork(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -634,26 +722,28 @@ func flattenApigeeOrganizationPropertiesProperty(v interface{}, d *schema.Resour return v } l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) + apiData := make([]map[string]interface{}, 0, len(l)) for _, raw := range l { original := raw.(map[string]interface{}) if len(original) < 1 { // Do not include empty json objects coming back from the api continue } - transformed = append(transformed, map[string]interface{}{ - "name": flattenApigeeOrganizationPropertiesPropertyName(original["name"], d, config), - "value": flattenApigeeOrganizationPropertiesPropertyValue(original["value"], d, config), + apiData = append(apiData, map[string]interface{}{ + "name": original["name"], + "value": original["value"], }) } - return transformed -} -func flattenApigeeOrganizationPropertiesPropertyName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenApigeeOrganizationPropertiesPropertyValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + configData := []map[string]interface{}{} + for _, item := range d.Get("properties.0.property").([]interface{}) { + configData = append(configData, item.(map[string]interface{})) + } + sorted, err := tpgresource.SortMapsByConfigOrder(configData, apiData, "name") + if err != nil { + log.Printf("[ERROR] Could not sort API response for properties.0.property: %s", err) + return apiData + } + return sorted } func flattenApigeeOrganizationApigeeProjectId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -672,6 +762,18 @@ func expandApigeeOrganizationAnalyticsRegion(v interface{}, d
tpgresource.Terraf return v, nil } +func expandApigeeOrganizationApiConsumerDataLocation(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandApigeeOrganizationApiConsumerDataEncryptionKeyName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandApigeeOrganizationControlPlaneEncryptionKeyName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandApigeeOrganizationAuthorizedNetwork(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google-beta/services/apigee/resource_apigee_organization_generated_test.go b/google-beta/services/apigee/resource_apigee_organization_generated_test.go index 978fbf9f17..ab07e6b4b7 100644 --- a/google-beta/services/apigee/resource_apigee_organization_generated_test.go +++ b/google-beta/services/apigee/resource_apigee_organization_generated_test.go @@ -349,7 +349,7 @@ func TestAccApigeeOrganization_apigeeOrganizationCloudFullDisableVpcPeeringTestE ResourceName: "google_apigee_organization.org", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"project_id", "retention"}, + ImportStateVerifyIgnore: []string{"project_id", "retention", "properties"}, }, }, }) @@ -593,6 +593,157 @@ resource "google_apigee_organization" "org" { `, context) } +func TestAccApigeeOrganization_apigeeOrganizationDrzTestExample(t *testing.T) { + acctest.SkipIfVcr(t) + t.Parallel() + + context := map[string]interface{}{ + "org_id": envvar.GetTestOrgFromEnv(t), + "billing_account": envvar.GetTestBillingAccountFromEnv(t), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderBetaFactories(t), + CheckDestroy: testAccCheckApigeeOrganizationDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccApigeeOrganization_apigeeOrganizationDrzTestExample(context), + }, + { + ResourceName: "google_apigee_organization.org", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"project_id", "retention"}, + }, + }, + }) +} + +func testAccApigeeOrganization_apigeeOrganizationDrzTestExample(context map[string]interface{}) string { + return acctest.Nprintf(` +provider "google-beta" { + apigee_custom_endpoint = "https://eu-apigee.googleapis.com/v1/" +} + +resource "google_project" "project" { + provider = google-beta + + project_id = "tf-test%{random_suffix}" + name = "tf-test%{random_suffix}" + org_id = "%{org_id}" + billing_account = "%{billing_account}" +} + +resource "google_project_service" "apigee" { + provider = google-beta + + project = google_project.project.project_id + service = "apigee.googleapis.com" +} + +resource "google_project_service" "compute" { + provider = google-beta + + project = google_project.project.project_id + service = "compute.googleapis.com" +} + +resource "google_project_service" "servicenetworking" { + provider = google-beta + + project = google_project.project.project_id + service = "servicenetworking.googleapis.com" +} + +resource "google_project_service" "kms" { + provider = google-beta + + project = google_project.project.project_id + service = "cloudkms.googleapis.com" +} + +resource "google_compute_network" "apigee_network" { + provider = google-beta + + name = "apigee-network" + project = google_project.project.project_id + depends_on = [google_project_service.compute] +} + +resource "google_compute_global_address" "apigee_range" { + provider = google-beta + + name = "apigee-range" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 16 + network = google_compute_network.apigee_network.id + project = 
google_project.project.project_id +} + +resource "google_service_networking_connection" "apigee_vpc_connection" { + provider = google-beta + + network = google_compute_network.apigee_network.id + service = "servicenetworking.googleapis.com" + reserved_peering_ranges = [google_compute_global_address.apigee_range.name] + depends_on = [google_project_service.servicenetworking] +} + +resource "google_kms_key_ring" "apigee_keyring" { + provider = google-beta + + name = "apigee-keyring" + location = "europe-central2" + project = google_project.project.project_id + depends_on = [google_project_service.kms] +} + +resource "google_kms_crypto_key" "apigee_key" { + provider = google-beta + + name = "apigee-key" + key_ring = google_kms_key_ring.apigee_keyring.id +} + +resource "google_project_service_identity" "apigee_sa" { + provider = google-beta + + project = google_project.project.project_id + service = google_project_service.apigee.service +} + +resource "google_kms_crypto_key_iam_member" "apigee_sa_keyuser" { + provider = google-beta + + crypto_key_id = google_kms_crypto_key.apigee_key.id + role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" + + member = "serviceAccount:${google_project_service_identity.apigee_sa.email}" +} + +resource "google_apigee_organization" "org" { + provider = google-beta + + api_consumer_data_location = "europe-central2" + project_id = google_project.project.project_id + authorized_network = google_compute_network.apigee_network.id + billing_type = "PAYG" + runtime_database_encryption_key_name = google_kms_crypto_key.apigee_key.id + api_consumer_data_encryption_key_name = google_kms_crypto_key.apigee_key.id + retention = "MINIMUM" + + depends_on = [ + google_service_networking_connection.apigee_vpc_connection, + google_project_service.apigee, + google_kms_crypto_key_iam_member.apigee_sa_keyuser, + ] +} +`, context) +} + func testAccCheckApigeeOrganizationDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s 
*terraform.State) error { for name, rs := range s.RootModule().Resources { diff --git a/google-beta/services/apigee/resource_apigee_sync_authorization.go b/google-beta/services/apigee/resource_apigee_sync_authorization.go index ce4e79e2e8..4050d628bb 100644 --- a/google-beta/services/apigee/resource_apigee_sync_authorization.go +++ b/google-beta/services/apigee/resource_apigee_sync_authorization.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "time" @@ -112,6 +113,7 @@ func resourceApigeeSyncAuthorizationCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -120,6 +122,7 @@ func resourceApigeeSyncAuthorizationCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating SyncAuthorization: %s", err) @@ -156,12 +159,14 @@ func resourceApigeeSyncAuthorizationRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeSyncAuthorization %q", d.Id())) @@ -206,6 +211,7 @@ func resourceApigeeSyncAuthorizationUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating SyncAuthorization %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -220,6 +226,7 @@ func resourceApigeeSyncAuthorizationUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + 
Headers: headers, }) if err != nil { diff --git a/google-beta/services/apigee/resource_apigee_sync_authorization_generated_test.go b/google-beta/services/apigee/resource_apigee_sync_authorization_generated_test.go index fb6c594999..86d167ecbf 100644 --- a/google-beta/services/apigee/resource_apigee_sync_authorization_generated_test.go +++ b/google-beta/services/apigee/resource_apigee_sync_authorization_generated_test.go @@ -79,12 +79,10 @@ resource "google_service_account" "service_account" { display_name = "Service Account" } -resource "google_project_iam_binding" "synchronizer-iam" { +resource "google_project_iam_member" "synchronizer-iam" { project = google_project.project.project_id role = "roles/apigee.synchronizerManager" - members = [ - "serviceAccount:${google_service_account.service_account.email}", - ] + member = "serviceAccount:${google_service_account.service_account.email}" } resource "google_apigee_sync_authorization" "apigee_sync_authorization" { @@ -92,7 +90,7 @@ resource "google_apigee_sync_authorization" "apigee_sync_authorization" { identities = [ "serviceAccount:${google_service_account.service_account.email}", ] - depends_on = [google_project_iam_binding.synchronizer-iam] + depends_on = [google_project_iam_member.synchronizer-iam] } `, context) } diff --git a/google-beta/services/apigee/resource_apigee_sync_authorization_test.go b/google-beta/services/apigee/resource_apigee_sync_authorization_test.go index 0882bb0a0b..b5b2378109 100644 --- a/google-beta/services/apigee/resource_apigee_sync_authorization_test.go +++ b/google-beta/services/apigee/resource_apigee_sync_authorization_test.go @@ -81,12 +81,10 @@ resource "google_service_account" "service_account" { display_name = "Service Account" } -resource "google_project_iam_binding" "synchronizer-iam" { +resource "google_project_iam_member" "synchronizer-iam" { project = google_project.project.project_id role = "roles/apigee.synchronizerManager" - members = [ - 
"serviceAccount:${google_service_account.service_account.email}", - ] + member = "serviceAccount:${google_service_account.service_account.email}" } resource "google_apigee_sync_authorization" "apigee_sync_authorization" { @@ -94,7 +92,7 @@ resource "google_apigee_sync_authorization" "apigee_sync_authorization" { identities = [ "serviceAccount:${google_service_account.service_account.email}", ] - depends_on = [google_project_iam_binding.synchronizer-iam] + depends_on = [google_project_iam_member.synchronizer-iam] } `, context) } diff --git a/google-beta/services/apigee/resource_apigee_target_server.go b/google-beta/services/apigee/resource_apigee_target_server.go index 0c81bf54ef..23ff3c2462 100644 --- a/google-beta/services/apigee/resource_apigee_target_server.go +++ b/google-beta/services/apigee/resource_apigee_target_server.go @@ -20,6 +20,7 @@ package apigee import ( "fmt" "log" + "net/http" "reflect" "time" @@ -235,6 +236,7 @@ func resourceApigeeTargetServerCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -243,6 +245,7 @@ func resourceApigeeTargetServerCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TargetServer: %s", err) @@ -279,12 +282,14 @@ func resourceApigeeTargetServerRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApigeeTargetServer %q", d.Id())) @@ -374,6 +379,7 @@ func resourceApigeeTargetServerUpdate(d *schema.ResourceData, meta 
interface{}) } log.Printf("[DEBUG] Updating TargetServer %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -388,6 +394,7 @@ func resourceApigeeTargetServerUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -420,6 +427,8 @@ func resourceApigeeTargetServerDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TargetServer %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -429,6 +438,7 @@ func resourceApigeeTargetServerDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TargetServer") diff --git a/google-beta/services/appengine/resource_app_engine_application_url_dispatch_rules.go b/google-beta/services/appengine/resource_app_engine_application_url_dispatch_rules.go index 37704315dc..7f5042e8c3 100644 --- a/google-beta/services/appengine/resource_app_engine_application_url_dispatch_rules.go +++ b/google-beta/services/appengine/resource_app_engine_application_url_dispatch_rules.go @@ -20,6 +20,7 @@ package appengine import ( "fmt" "log" + "net/http" "reflect" "time" @@ -132,6 +133,7 @@ func resourceAppEngineApplicationUrlDispatchRulesCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -140,6 +142,7 @@ func resourceAppEngineApplicationUrlDispatchRulesCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, 
ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, }) if err != nil { @@ -193,12 +196,14 @@ func resourceAppEngineApplicationUrlDispatchRulesRead(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, }) if err != nil { @@ -252,6 +257,7 @@ func resourceAppEngineApplicationUrlDispatchRulesUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating ApplicationUrlDispatchRules %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -266,6 +272,7 @@ func resourceAppEngineApplicationUrlDispatchRulesUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, }) @@ -320,6 +327,8 @@ func resourceAppEngineApplicationUrlDispatchRulesDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ApplicationUrlDispatchRules %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -329,6 +338,7 @@ func resourceAppEngineApplicationUrlDispatchRulesDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, }) if err != nil { diff --git a/google-beta/services/appengine/resource_app_engine_domain_mapping.go 
b/google-beta/services/appengine/resource_app_engine_domain_mapping.go index 346c9ea7b0..36e7c3338b 100644 --- a/google-beta/services/appengine/resource_app_engine_domain_mapping.go +++ b/google-beta/services/appengine/resource_app_engine_domain_mapping.go @@ -20,6 +20,7 @@ package appengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -209,6 +210,7 @@ func resourceAppEngineDomainMappingCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -217,6 +219,7 @@ func resourceAppEngineDomainMappingCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DomainMapping: %s", err) @@ -283,12 +286,14 @@ func resourceAppEngineDomainMappingRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AppEngineDomainMapping %q", d.Id())) @@ -350,6 +355,7 @@ func resourceAppEngineDomainMappingUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating DomainMapping %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("ssl_settings") { @@ -378,6 +384,7 @@ func resourceAppEngineDomainMappingUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -432,6 +439,8 @@ func resourceAppEngineDomainMappingDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] 
Deleting DomainMapping %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -441,6 +450,7 @@ func resourceAppEngineDomainMappingDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "DomainMapping") diff --git a/google-beta/services/appengine/resource_app_engine_firewall_rule.go b/google-beta/services/appengine/resource_app_engine_firewall_rule.go index 4273409bfd..9624fb6ba0 100644 --- a/google-beta/services/appengine/resource_app_engine_firewall_rule.go +++ b/google-beta/services/appengine/resource_app_engine_firewall_rule.go @@ -20,6 +20,7 @@ package appengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -150,6 +151,7 @@ func resourceAppEngineFirewallRuleCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -158,6 +160,7 @@ func resourceAppEngineFirewallRuleCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FirewallRule: %s", err) @@ -247,12 +250,14 @@ func resourceAppEngineFirewallRuleRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AppEngineFirewallRule %q", d.Id())) @@ -332,6 +337,7 @@ func resourceAppEngineFirewallRuleUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating FirewallRule %q: %#v", d.Id(), obj) + 
headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -371,6 +377,7 @@ func resourceAppEngineFirewallRuleUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -418,6 +425,8 @@ func resourceAppEngineFirewallRuleDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting FirewallRule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -427,6 +436,7 @@ func resourceAppEngineFirewallRuleDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FirewallRule") diff --git a/google-beta/services/appengine/resource_app_engine_flexible_app_version.go b/google-beta/services/appengine/resource_app_engine_flexible_app_version.go index 4728a465ed..37a43d0530 100644 --- a/google-beta/services/appengine/resource_app_engine_flexible_app_version.go +++ b/google-beta/services/appengine/resource_app_engine_flexible_app_version.go @@ -20,6 +20,7 @@ package appengine import ( "fmt" "log" + "net/http" "reflect" "time" @@ -1067,6 +1068,7 @@ func resourceAppEngineFlexibleAppVersionCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1075,6 +1077,7 @@ func resourceAppEngineFlexibleAppVersionCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, }) if err != nil { @@ -1128,12 +1131,14 @@ func resourceAppEngineFlexibleAppVersionRead(d 
*schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, }) if err != nil { @@ -1209,9 +1214,6 @@ func resourceAppEngineFlexibleAppVersionRead(d *schema.ResourceData, meta interf if err := d.Set("nobuild_files_regex", flattenAppEngineFlexibleAppVersionNobuildFilesRegex(res["nobuildFilesRegex"], d, config)); err != nil { return fmt.Errorf("Error reading FlexibleAppVersion: %s", err) } - if err := d.Set("deployment", flattenAppEngineFlexibleAppVersionDeployment(res["deployment"], d, config)); err != nil { - return fmt.Errorf("Error reading FlexibleAppVersion: %s", err) - } if err := d.Set("endpoints_api_service", flattenAppEngineFlexibleAppVersionEndpointsApiService(res["endpointsApiService"], d, config)); err != nil { return fmt.Errorf("Error reading FlexibleAppVersion: %s", err) } @@ -1413,6 +1415,7 @@ func resourceAppEngineFlexibleAppVersionUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating FlexibleAppVersion %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -1427,6 +1430,7 @@ func resourceAppEngineFlexibleAppVersionUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, }) @@ -2016,61 +2020,6 @@ func flattenAppEngineFlexibleAppVersionNobuildFilesRegex(v interface{}, d *schem return v } -func flattenAppEngineFlexibleAppVersionDeployment(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) 
interface{} { - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - transformed["zip"] = d.Get("deployment.0.zip") - transformed["files"] = d.Get("deployment.0.files") - transformed["container"] = - flattenAppEngineFlexibleAppVersionDeploymentContainer(original["container"], d, config) - transformed["cloud_build_options"] = - flattenAppEngineFlexibleAppVersionDeploymentCloudBuildOptions(original["cloudBuildOptions"], d, config) - - return []interface{}{transformed} -} - -func flattenAppEngineFlexibleAppVersionDeploymentContainer(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["image"] = - flattenAppEngineFlexibleAppVersionDeploymentContainerImage(original["image"], d, config) - return []interface{}{transformed} -} - -func flattenAppEngineFlexibleAppVersionDeploymentContainerImage(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenAppEngineFlexibleAppVersionDeploymentCloudBuildOptions(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["app_yaml_path"] = - flattenAppEngineFlexibleAppVersionDeploymentCloudBuildOptionsAppYamlPath(original["appYamlPath"], d, config) - transformed["cloud_build_timeout"] = - flattenAppEngineFlexibleAppVersionDeploymentCloudBuildOptionsCloudBuildTimeout(original["cloudBuildTimeout"], d, config) - return []interface{}{transformed} -} - -func flattenAppEngineFlexibleAppVersionDeploymentCloudBuildOptionsAppYamlPath(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func 
flattenAppEngineFlexibleAppVersionDeploymentCloudBuildOptionsCloudBuildTimeout(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - func flattenAppEngineFlexibleAppVersionEndpointsApiService(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil diff --git a/google-beta/services/appengine/resource_app_engine_flexible_app_version_generated_test.go b/google-beta/services/appengine/resource_app_engine_flexible_app_version_generated_test.go index c184378b31..5f544055bf 100644 --- a/google-beta/services/appengine/resource_app_engine_flexible_app_version_generated_test.go +++ b/google-beta/services/appengine/resource_app_engine_flexible_app_version_generated_test.go @@ -50,7 +50,7 @@ func TestAccAppEngineFlexibleAppVersion_appEngineFlexibleAppVersionExample(t *te ResourceName: "google_app_engine_flexible_app_version.myapp_v1", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"beta_settings", "env_variables", "entrypoint", "service", "noop_on_destroy", "deployment.0.zip"}, + ImportStateVerifyIgnore: []string{"beta_settings", "env_variables", "deployment", "entrypoint", "service", "noop_on_destroy", "deployment.0.zip"}, }, }, }) diff --git a/google-beta/services/appengine/resource_app_engine_flexible_app_version_test.go b/google-beta/services/appengine/resource_app_engine_flexible_app_version_test.go index 27a7dae69d..6be1b9ba7a 100644 --- a/google-beta/services/appengine/resource_app_engine_flexible_app_version_test.go +++ b/google-beta/services/appengine/resource_app_engine_flexible_app_version_test.go @@ -31,7 +31,7 @@ func TestAccAppEngineFlexibleAppVersion_update(t *testing.T) { ResourceName: "google_app_engine_flexible_app_version.foo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"env_variables", "deployment.0.files", "entrypoint", "service", "noop_on_destroy"}, + ImportStateVerifyIgnore: 
[]string{"env_variables", "deployment", "entrypoint", "service", "noop_on_destroy"}, }, { Config: testAccAppEngineFlexibleAppVersion_pythonUpdate(context), @@ -40,7 +40,7 @@ func TestAccAppEngineFlexibleAppVersion_update(t *testing.T) { ResourceName: "google_app_engine_flexible_app_version.foo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"env_variables", "deployment.0.files", "entrypoint", "service", "noop_on_destroy"}, + ImportStateVerifyIgnore: []string{"env_variables", "deployment", "entrypoint", "service", "noop_on_destroy"}, }, }, }) diff --git a/google-beta/services/appengine/resource_app_engine_service_network_settings.go b/google-beta/services/appengine/resource_app_engine_service_network_settings.go index ec040fdf3f..382855c5c4 100644 --- a/google-beta/services/appengine/resource_app_engine_service_network_settings.go +++ b/google-beta/services/appengine/resource_app_engine_service_network_settings.go @@ -20,6 +20,7 @@ package appengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -134,6 +135,7 @@ func resourceAppEngineServiceNetworkSettingsCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -142,6 +144,7 @@ func resourceAppEngineServiceNetworkSettingsCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServiceNetworkSettings: %s", err) @@ -194,12 +197,14 @@ func resourceAppEngineServiceNetworkSettingsRead(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AppEngineServiceNetworkSettings %q", d.Id())) @@ -261,6 +266,7 @@ func resourceAppEngineServiceNetworkSettingsUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating ServiceNetworkSettings %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("service") { @@ -292,6 +298,7 @@ func resourceAppEngineServiceNetworkSettingsUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/appengine/resource_app_engine_service_split_traffic.go b/google-beta/services/appengine/resource_app_engine_service_split_traffic.go index 2ec2e4c5a5..291896eaf6 100644 --- a/google-beta/services/appengine/resource_app_engine_service_split_traffic.go +++ b/google-beta/services/appengine/resource_app_engine_service_split_traffic.go @@ -20,6 +20,7 @@ package appengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -144,6 +145,7 @@ func resourceAppEngineServiceSplitTrafficCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -152,6 +154,7 @@ func resourceAppEngineServiceSplitTrafficCreate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServiceSplitTraffic: %s", err) @@ -204,12 +207,14 @@ func resourceAppEngineServiceSplitTrafficRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("AppEngineServiceSplitTraffic %q", d.Id())) @@ -268,6 +273,7 @@ func resourceAppEngineServiceSplitTrafficUpdate(d *schema.ResourceData, meta int } log.Printf("[DEBUG] Updating ServiceSplitTraffic %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("service") { @@ -299,6 +305,7 @@ func resourceAppEngineServiceSplitTrafficUpdate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/appengine/resource_app_engine_standard_app_version.go b/google-beta/services/appengine/resource_app_engine_standard_app_version.go index 4fd6fe32b3..6b54b5695e 100644 --- a/google-beta/services/appengine/resource_app_engine_standard_app_version.go +++ b/google-beta/services/appengine/resource_app_engine_standard_app_version.go @@ -20,6 +20,7 @@ package appengine import ( "fmt" "log" + "net/http" "reflect" "time" @@ -613,6 +614,7 @@ func resourceAppEngineStandardAppVersionCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -621,6 +623,7 @@ func resourceAppEngineStandardAppVersionCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, }) if err != nil { @@ -674,12 +677,14 @@ func resourceAppEngineStandardAppVersionRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, 
}) if err != nil { @@ -879,6 +884,7 @@ func resourceAppEngineStandardAppVersionUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating StandardAppVersion %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -893,6 +899,7 @@ func resourceAppEngineStandardAppVersionUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsAppEngineRetryableError}, }) @@ -1247,6 +1254,35 @@ func flattenAppEngineStandardAppVersionAutomaticScaling(v interface{}, d *schema flattenAppEngineStandardAppVersionAutomaticScalingMinPendingLatency(original["minPendingLatency"], d, config) transformed["standard_scheduler_settings"] = flattenAppEngineStandardAppVersionAutomaticScalingStandardSchedulerSettings(original["standardSchedulerSettings"], d, config) + + // begin handwritten code (all other parts of this file are forked from generated code) + // solve for the following diff when no scaling settings are configured: + // + // - automatic_scaling { + // - max_concurrent_requests = 0 -> null + // - max_idle_instances = 0 -> null + // - min_idle_instances = 0 -> null + // } + // + // this happens because the field is returned as: + // + //"automaticScaling": { + // "standardSchedulerSettings": {} + // }, + // + // this is hacky but avoids marking the field as computed, since it's in a oneof + // if any new fields are added to the block or explicit defaults start getting + // returned, it will need to be updated + if transformed["max_concurrent_requests"] == nil && // even primitives are nil at this stage if they're not returned by the API + transformed["max_idle_instances"] == nil && + transformed["max_pending_latency"] == nil && + transformed["min_idle_instances"] == nil && + 
transformed["min_pending_latency"] == nil && + transformed["standard_scheduler_settings"] == nil { + return nil + } + // end handwritten code + return []interface{}{transformed} } func flattenAppEngineStandardAppVersionAutomaticScalingMaxConcurrentRequests(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { diff --git a/google-beta/services/apphub/resource_apphub_application.go b/google-beta/services/apphub/resource_apphub_application.go index bcffc3d88e..d886437232 100644 --- a/google-beta/services/apphub/resource_apphub_application.go +++ b/google-beta/services/apphub/resource_apphub_application.go @@ -20,6 +20,7 @@ package apphub import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -287,6 +288,7 @@ func resourceApphubApplicationCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -295,6 +297,7 @@ func resourceApphubApplicationCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Application: %s", err) @@ -361,12 +364,14 @@ func resourceApphubApplicationRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApphubApplication %q", d.Id())) @@ -454,6 +459,7 @@ func resourceApphubApplicationUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Application %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -493,6 +499,7 @@ func 
resourceApphubApplicationUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -540,6 +547,8 @@ func resourceApphubApplicationDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Application %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -549,6 +558,7 @@ func resourceApphubApplicationDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Application") diff --git a/google-beta/services/apphub/resource_apphub_service.go b/google-beta/services/apphub/resource_apphub_service.go index eab8184b83..cc46121aee 100644 --- a/google-beta/services/apphub/resource_apphub_service.go +++ b/google-beta/services/apphub/resource_apphub_service.go @@ -20,6 +20,7 @@ package apphub import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -317,6 +318,7 @@ func resourceApphubServiceCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -325,6 +327,7 @@ func resourceApphubServiceCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Service: %s", err) @@ -391,12 +394,14 @@ func resourceApphubServiceRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: 
headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApphubService %q", d.Id())) @@ -484,6 +489,7 @@ func resourceApphubServiceUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Service %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -519,6 +525,7 @@ func resourceApphubServiceUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -566,6 +573,8 @@ func resourceApphubServiceDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Service %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -575,6 +584,7 @@ func resourceApphubServiceDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Service") diff --git a/google-beta/services/apphub/resource_apphub_service_project_attachment.go b/google-beta/services/apphub/resource_apphub_service_project_attachment.go index b4fa6593bc..f40a342b6d 100644 --- a/google-beta/services/apphub/resource_apphub_service_project_attachment.go +++ b/google-beta/services/apphub/resource_apphub_service_project_attachment.go @@ -20,6 +20,7 @@ package apphub import ( "fmt" "log" + "net/http" "reflect" "time" @@ -135,6 +136,7 @@ func resourceApphubServiceProjectAttachmentCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -143,6 +145,7 @@ func resourceApphubServiceProjectAttachmentCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServiceProjectAttachment: %s", err) @@ -209,12 +212,14 @@ func resourceApphubServiceProjectAttachmentRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApphubServiceProjectAttachment %q", d.Id())) @@ -270,6 +275,8 @@ func resourceApphubServiceProjectAttachmentDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ServiceProjectAttachment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -279,6 +286,7 @@ func resourceApphubServiceProjectAttachmentDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServiceProjectAttachment") diff --git a/google-beta/services/apphub/resource_apphub_workload.go b/google-beta/services/apphub/resource_apphub_workload.go index d653387a9d..501d705ffa 100644 --- a/google-beta/services/apphub/resource_apphub_workload.go +++ b/google-beta/services/apphub/resource_apphub_workload.go @@ -20,6 +20,7 @@ package apphub import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -314,6 +315,7 @@ func resourceApphubWorkloadCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -322,6 +324,7 @@ func resourceApphubWorkloadCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Workload: %s", err) @@ -388,12 +391,14 @@ func resourceApphubWorkloadRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ApphubWorkload %q", d.Id())) @@ -481,6 +486,7 @@ func resourceApphubWorkloadUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating Workload %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -516,6 +522,7 @@ func resourceApphubWorkloadUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -563,6 +570,8 @@ func resourceApphubWorkloadDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Workload %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -572,6 +581,7 @@ func resourceApphubWorkloadDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Workload") diff --git a/google-beta/services/artifactregistry/resource_artifact_registry_repository.go b/google-beta/services/artifactregistry/resource_artifact_registry_repository.go index fb278d01b6..410955974d 100644 --- a/google-beta/services/artifactregistry/resource_artifact_registry_repository.go +++ b/google-beta/services/artifactregistry/resource_artifact_registry_repository.go @@ 
-20,6 +20,7 @@ package artifactregistry import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -346,6 +347,12 @@ snapshot versions.`, ForceNew: true, Description: `The description of the remote source.`, }, + "disable_upstream_validation": { + Type: schema.TypeBool, + Optional: true, + Description: `If true, the remote repository upstream and upstream credentials will +not be validated.`, + }, "docker_repository": { Type: schema.TypeList, Optional: true, @@ -354,14 +361,32 @@ snapshot versions.`, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "custom_repository": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Settings for a remote repository with a custom uri.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "uri": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Specific uri to the registry, e.g. '"https://registry-1.docker.io"'`, + }, + }, + }, + ConflictsWith: []string{"remote_repository_config.0.docker_repository.0.public_repository"}, + }, "public_repository": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"DOCKER_HUB", ""}), - Description: `Address of the remote repository. Default value: "DOCKER_HUB" Possible values: ["DOCKER_HUB"]`, - Default: "DOCKER_HUB", - ExactlyOneOf: []string{"remote_repository_config.0.docker_repository.0.public_repository"}, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"DOCKER_HUB", ""}), + Description: `Address of the remote repository. 
Default value: "DOCKER_HUB" Possible values: ["DOCKER_HUB"]`, + Default: "DOCKER_HUB", + ConflictsWith: []string{"remote_repository_config.0.docker_repository.0.custom_repository"}, }, }, }, @@ -375,14 +400,32 @@ snapshot versions.`, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "custom_repository": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Settings for a remote repository with a custom uri.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "uri": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Specific uri to the registry, e.g. '"https://repo.maven.apache.org/maven2"'`, + }, + }, + }, + ConflictsWith: []string{"remote_repository_config.0.maven_repository.0.public_repository"}, + }, "public_repository": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"MAVEN_CENTRAL", ""}), - Description: `Address of the remote repository. Default value: "MAVEN_CENTRAL" Possible values: ["MAVEN_CENTRAL"]`, - Default: "MAVEN_CENTRAL", - ExactlyOneOf: []string{"remote_repository_config.0.maven_repository.0.public_repository"}, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"MAVEN_CENTRAL", ""}), + Description: `Address of the remote repository. 
Default value: "MAVEN_CENTRAL" Possible values: ["MAVEN_CENTRAL"]`, + Default: "MAVEN_CENTRAL", + ConflictsWith: []string{"remote_repository_config.0.maven_repository.0.custom_repository"}, }, }, }, @@ -396,14 +439,32 @@ snapshot versions.`, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "custom_repository": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Settings for a remote repository with a custom uri.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "uri": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Specific uri to the registry, e.g. '"https://registry.npmjs.org"'`, + }, + }, + }, + ConflictsWith: []string{"remote_repository_config.0.npm_repository.0.public_repository"}, + }, "public_repository": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"NPMJS", ""}), - Description: `Address of the remote repository. Default value: "NPMJS" Possible values: ["NPMJS"]`, - Default: "NPMJS", - ExactlyOneOf: []string{"remote_repository_config.0.npm_repository.0.public_repository"}, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"NPMJS", ""}), + Description: `Address of the remote repository. Default value: "NPMJS" Possible values: ["NPMJS"]`, + Default: "NPMJS", + ConflictsWith: []string{"remote_repository_config.0.npm_repository.0.custom_repository"}, }, }, }, @@ -417,14 +478,32 @@ snapshot versions.`, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "custom_repository": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Settings for a remote repository with a custom uri.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "uri": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Specific uri to the registry, e.g. 
'"https://pypi.io"'`, + }, + }, + }, + ConflictsWith: []string{"remote_repository_config.0.python_repository.0.public_repository"}, + }, "public_repository": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"PYPI", ""}), - Description: `Address of the remote repository. Default value: "PYPI" Possible values: ["PYPI"]`, - Default: "PYPI", - ExactlyOneOf: []string{"remote_repository_config.0.python_repository.0.public_repository"}, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"PYPI", ""}), + Description: `Address of the remote repository. Default value: "PYPI" Possible values: ["PYPI"]`, + Default: "PYPI", + ConflictsWith: []string{"remote_repository_config.0.python_repository.0.custom_repository"}, }, }, }, @@ -683,6 +762,25 @@ func resourceArtifactRegistryRepositoryCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + // This file should be deleted in the next major terraform release, alongside + // the default values for 'publicRepository'. + + // deletePublicRepoIfCustom deletes the publicRepository key for a given + // pkg type from the remote repository config if customRepository is set. + deletePublicRepoIfCustom := func(pkgType string) { + if _, ok := d.GetOk(fmt.Sprintf("remote_repository_config.0.%s_repository.0.custom_repository", pkgType)); ok { + rrcfg := obj["remoteRepositoryConfig"].(map[string]interface{}) + repo := rrcfg[fmt.Sprintf("%sRepository", pkgType)].(map[string]interface{}) + delete(repo, "publicRepository") + } + } + + // Call above func for all pkg types that support custom remote repos. 
+ deletePublicRepoIfCustom("docker") + deletePublicRepoIfCustom("maven") + deletePublicRepoIfCustom("npm") + deletePublicRepoIfCustom("python") res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -691,6 +789,7 @@ func resourceArtifactRegistryRepositoryCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Repository: %s", err) @@ -757,12 +856,14 @@ func resourceArtifactRegistryRepositoryRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ArtifactRegistryRepository %q", d.Id())) @@ -894,6 +995,7 @@ func resourceArtifactRegistryRepositoryUpdate(d *schema.ResourceData, meta inter } log.Printf("[DEBUG] Updating Repository %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -945,6 +1047,7 @@ func resourceArtifactRegistryRepositoryUpdate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -985,6 +1088,8 @@ func resourceArtifactRegistryRepositoryDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Repository %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -994,6 +1099,7 @@ func resourceArtifactRegistryRepositoryDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, "Repository") @@ -1303,6 +1409,8 @@ func flattenArtifactRegistryRepositoryRemoteRepositoryConfig(v interface{}, d *s flattenArtifactRegistryRepositoryRemoteRepositoryConfigYumRepository(original["yumRepository"], d, config) transformed["upstream_credentials"] = flattenArtifactRegistryRepositoryRemoteRepositoryConfigUpstreamCredentials(original["upstreamCredentials"], d, config) + transformed["disable_upstream_validation"] = + flattenArtifactRegistryRepositoryRemoteRepositoryConfigDisableUpstreamValidation(original["disableUpstreamValidation"], d, config) return []interface{}{transformed} } func flattenArtifactRegistryRepositoryRemoteRepositoryConfigDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1356,9 +1464,32 @@ func flattenArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepository(v i transformed := make(map[string]interface{}) transformed["public_repository"] = flattenArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryPublicRepository(original["publicRepository"], d, config) + transformed["custom_repository"] = + flattenArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryCustomRepository(original["customRepository"], d, config) return []interface{}{transformed} } func flattenArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryPublicRepository(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil || tpgresource.IsEmptyValue(reflect.ValueOf(v)) { + return "DOCKER_HUB" + } + + return v +} + +func flattenArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryCustomRepository(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["uri"] = + 
flattenArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryCustomRepositoryUri(original["uri"], d, config) + return []interface{}{transformed} +} +func flattenArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryCustomRepositoryUri(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -1373,9 +1504,32 @@ func flattenArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepository(v in transformed := make(map[string]interface{}) transformed["public_repository"] = flattenArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryPublicRepository(original["publicRepository"], d, config) + transformed["custom_repository"] = + flattenArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryCustomRepository(original["customRepository"], d, config) return []interface{}{transformed} } func flattenArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryPublicRepository(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil || tpgresource.IsEmptyValue(reflect.ValueOf(v)) { + return "MAVEN_CENTRAL" + } + + return v +} + +func flattenArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryCustomRepository(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["uri"] = + flattenArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryCustomRepositoryUri(original["uri"], d, config) + return []interface{}{transformed} +} +func flattenArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryCustomRepositoryUri(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -1390,9 +1544,32 @@ func flattenArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepository(v inte transformed := 
make(map[string]interface{}) transformed["public_repository"] = flattenArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryPublicRepository(original["publicRepository"], d, config) + transformed["custom_repository"] = + flattenArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryCustomRepository(original["customRepository"], d, config) return []interface{}{transformed} } func flattenArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryPublicRepository(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil || tpgresource.IsEmptyValue(reflect.ValueOf(v)) { + return "NPMJS" + } + + return v +} + +func flattenArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryCustomRepository(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["uri"] = + flattenArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryCustomRepositoryUri(original["uri"], d, config) + return []interface{}{transformed} +} +func flattenArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryCustomRepositoryUri(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -1407,9 +1584,32 @@ func flattenArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepository(v i transformed := make(map[string]interface{}) transformed["public_repository"] = flattenArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryPublicRepository(original["publicRepository"], d, config) + transformed["custom_repository"] = + flattenArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryCustomRepository(original["customRepository"], d, config) return []interface{}{transformed} } func flattenArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryPublicRepository(v interface{}, d 
*schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil || tpgresource.IsEmptyValue(reflect.ValueOf(v)) { + return "PYPI" + } + + return v +} + +func flattenArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryCustomRepository(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["uri"] = + flattenArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryCustomRepositoryUri(original["uri"], d, config) + return []interface{}{transformed} +} +func flattenArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryCustomRepositoryUri(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -1485,6 +1685,10 @@ func flattenArtifactRegistryRepositoryRemoteRepositoryConfigUpstreamCredentialsU return v } +func flattenArtifactRegistryRepositoryRemoteRepositoryConfigDisableUpstreamValidation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return d.Get("remote_repository_config.0.disable_upstream_validation") +} + func flattenArtifactRegistryRepositoryCleanupPolicyDryRun(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -1878,6 +2082,13 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfig(v interface{}, d tpg transformed["upstreamCredentials"] = transformedUpstreamCredentials } + transformedDisableUpstreamValidation, err := expandArtifactRegistryRepositoryRemoteRepositoryConfigDisableUpstreamValidation(original["disable_upstream_validation"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedDisableUpstreamValidation); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["disableUpstreamValidation"] = transformedDisableUpstreamValidation + } + 
return transformed, nil } @@ -1954,6 +2165,13 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepository(v in transformed["publicRepository"] = transformedPublicRepository } + transformedCustomRepository, err := expandArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryCustomRepository(original["custom_repository"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedCustomRepository); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["customRepository"] = transformedCustomRepository + } + return transformed, nil } @@ -1961,6 +2179,29 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryPubli return v, nil } +func expandArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryCustomRepository(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedUri, err := expandArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryCustomRepositoryUri(original["uri"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedUri); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["uri"] = transformedUri + } + + return transformed, nil +} + +func expandArtifactRegistryRepositoryRemoteRepositoryConfigDockerRepositoryCustomRepositoryUri(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepository(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -1977,6 +2218,13 @@ func 
expandArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepository(v int transformed["publicRepository"] = transformedPublicRepository } + transformedCustomRepository, err := expandArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryCustomRepository(original["custom_repository"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedCustomRepository); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["customRepository"] = transformedCustomRepository + } + return transformed, nil } @@ -1984,6 +2232,29 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryPublic return v, nil } +func expandArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryCustomRepository(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedUri, err := expandArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryCustomRepositoryUri(original["uri"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedUri); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["uri"] = transformedUri + } + + return transformed, nil +} + +func expandArtifactRegistryRepositoryRemoteRepositoryConfigMavenRepositoryCustomRepositoryUri(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepository(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -2000,6 +2271,13 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepository(v inter 
transformed["publicRepository"] = transformedPublicRepository } + transformedCustomRepository, err := expandArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryCustomRepository(original["custom_repository"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedCustomRepository); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["customRepository"] = transformedCustomRepository + } + return transformed, nil } @@ -2007,6 +2285,29 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryPublicRe return v, nil } +func expandArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryCustomRepository(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedUri, err := expandArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryCustomRepositoryUri(original["uri"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedUri); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["uri"] = transformedUri + } + + return transformed, nil +} + +func expandArtifactRegistryRepositoryRemoteRepositoryConfigNpmRepositoryCustomRepositoryUri(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepository(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -2023,6 +2324,13 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepository(v in transformed["publicRepository"] = transformedPublicRepository } + transformedCustomRepository, err := 
expandArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryCustomRepository(original["custom_repository"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedCustomRepository); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["customRepository"] = transformedCustomRepository + } + return transformed, nil } @@ -2030,6 +2338,29 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryPubli return v, nil } +func expandArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryCustomRepository(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedUri, err := expandArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryCustomRepositoryUri(original["uri"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedUri); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["uri"] = transformedUri + } + + return transformed, nil +} + +func expandArtifactRegistryRepositoryRemoteRepositoryConfigPythonRepositoryCustomRepositoryUri(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandArtifactRegistryRepositoryRemoteRepositoryConfigYumRepository(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -2136,6 +2467,10 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfigUpstreamCredentialsUs return v, nil } +func expandArtifactRegistryRepositoryRemoteRepositoryConfigDisableUpstreamValidation(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) 
{ + return v, nil +} + func expandArtifactRegistryRepositoryCleanupPolicyDryRun(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google-beta/services/artifactregistry/resource_artifact_registry_repository_generated_test.go b/google-beta/services/artifactregistry/resource_artifact_registry_repository_generated_test.go index 87b67357a6..9e8b84cfd8 100644 --- a/google-beta/services/artifactregistry/resource_artifact_registry_repository_generated_test.go +++ b/google-beta/services/artifactregistry/resource_artifact_registry_repository_generated_test.go @@ -417,7 +417,7 @@ resource "google_artifact_registry_repository" "my-repo" { `, context) } -func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteCustomExample(t *testing.T) { +func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteDockerhubAuthExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ @@ -430,55 +430,344 @@ func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteCustomExa CheckDestroy: testAccCheckArtifactRegistryRepositoryDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteCustomExample(context), + Config: testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteDockerhubAuthExample(context), }, { ResourceName: "google_artifact_registry_repository.my-repo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"repository_id", "location", "labels", "terraform_labels"}, + ImportStateVerifyIgnore: []string{"repository_id", "location", "remote_repository_config.0.disable_upstream_validation", "labels", "terraform_labels"}, }, }, }) } -func testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteCustomExample(context map[string]interface{}) string { +func 
testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteDockerhubAuthExample(context map[string]interface{}) string { return acctest.Nprintf(` data "google_project" "project" {} -resource "google_secret_manager_secret" "tf-test-example-custom-remote-secret%{random_suffix}" { +resource "google_secret_manager_secret" "tf-test-example-remote-secret%{random_suffix}" { secret_id = "tf-test-example-secret%{random_suffix}" replication { auto {} } } -resource "google_secret_manager_secret_version" "tf-test-example-custom-remote-secret%{random_suffix}_version" { - secret = google_secret_manager_secret.tf-test-example-custom-remote-secret%{random_suffix}.id +resource "google_secret_manager_secret_version" "tf-test-example-remote-secret%{random_suffix}_version" { + secret = google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id secret_data = "tf-test-remote-password%{random_suffix}" } resource "google_secret_manager_secret_iam_member" "secret-access" { - secret_id = google_secret_manager_secret.tf-test-example-custom-remote-secret%{random_suffix}.id + secret_id = google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id role = "roles/secretmanager.secretAccessor" member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" } resource "google_artifact_registry_repository" "my-repo" { location = "us-central1" - repository_id = "tf-test-example-custom-remote%{random_suffix}" - description = "example remote docker repository with credentials%{random_suffix}" + repository_id = "tf-test-example-dockerhub-remote%{random_suffix}" + description = "example remote dockerhub repository with credentials%{random_suffix}" format = "DOCKER" mode = "REMOTE_REPOSITORY" remote_repository_config { description = "docker hub with custom credentials" + disable_upstream_validation = true docker_repository { public_repository = "DOCKER_HUB" } upstream_credentials { 
username_password_credentials { username = "tf-test-remote-username%{random_suffix}" - password_secret_version = google_secret_manager_secret_version.tf-test-example-custom-remote-secret%{random_suffix}_version.name + password_secret_version = google_secret_manager_secret_version.tf-test-example-remote-secret%{random_suffix}_version.name + } + } + } +} +`, context) +} + +func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteDockerCustomWithAuthExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckArtifactRegistryRepositoryDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteDockerCustomWithAuthExample(context), + }, + { + ResourceName: "google_artifact_registry_repository.my-repo", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"repository_id", "location", "remote_repository_config.0.disable_upstream_validation", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteDockerCustomWithAuthExample(context map[string]interface{}) string { + return acctest.Nprintf(` +data "google_project" "project" {} + +resource "google_secret_manager_secret" "tf-test-example-remote-secret%{random_suffix}" { + secret_id = "tf-test-example-secret%{random_suffix}" + replication { + auto {} + } +} + +resource "google_secret_manager_secret_version" "tf-test-example-remote-secret%{random_suffix}_version" { + secret = google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id + secret_data = "tf-test-remote-password%{random_suffix}" +} + +resource "google_secret_manager_secret_iam_member" "secret-access" { + secret_id = 
google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id + role = "roles/secretmanager.secretAccessor" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" +} + +resource "google_artifact_registry_repository" "my-repo" { + location = "us-central1" + repository_id = "tf-test-example-docker-custom-remote%{random_suffix}" + description = "example remote custom docker repository with credentia%{random_suffix}" + format = "DOCKER" + mode = "REMOTE_REPOSITORY" + remote_repository_config { + description = "custom docker remote with credentials" + disable_upstream_validation = true + docker_repository { + custom_repository { + uri = "https://registry-1.docker.io" + } + } + upstream_credentials { + username_password_credentials { + username = "tf-test-remote-username%{random_suffix}" + password_secret_version = google_secret_manager_secret_version.tf-test-example-remote-secret%{random_suffix}_version.name + } + } + } +} +`, context) +} + +func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteMavenCustomWithAuthExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckArtifactRegistryRepositoryDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteMavenCustomWithAuthExample(context), + }, + { + ResourceName: "google_artifact_registry_repository.my-repo", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"repository_id", "location", "remote_repository_config.0.disable_upstream_validation", "labels", "terraform_labels"}, + }, + }, + }) +} + +func 
testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteMavenCustomWithAuthExample(context map[string]interface{}) string { + return acctest.Nprintf(` +data "google_project" "project" {} + +resource "google_secret_manager_secret" "tf-test-example-remote-secret%{random_suffix}" { + secret_id = "tf-test-example-secret%{random_suffix}" + replication { + auto {} + } +} + +resource "google_secret_manager_secret_version" "tf-test-example-remote-secret%{random_suffix}_version" { + secret = google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id + secret_data = "tf-test-remote-password%{random_suffix}" +} + +resource "google_secret_manager_secret_iam_member" "secret-access" { + secret_id = google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id + role = "roles/secretmanager.secretAccessor" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" +} + +resource "google_artifact_registry_repository" "my-repo" { + location = "us-central1" + repository_id = "tf-test-example-maven-custom-remote%{random_suffix}" + description = "example remote custom maven repository with credential%{random_suffix}" + format = "MAVEN" + mode = "REMOTE_REPOSITORY" + remote_repository_config { + description = "custom maven remote with credentials" + disable_upstream_validation = true + maven_repository { + custom_repository { + uri = "https://my.maven.registry" + } + } + upstream_credentials { + username_password_credentials { + username = "tf-test-remote-username%{random_suffix}" + password_secret_version = google_secret_manager_secret_version.tf-test-example-remote-secret%{random_suffix}_version.name + } + } + } +} +`, context) +} + +func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteNpmCustomWithAuthExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, 
resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckArtifactRegistryRepositoryDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteNpmCustomWithAuthExample(context), + }, + { + ResourceName: "google_artifact_registry_repository.my-repo", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"repository_id", "location", "remote_repository_config.0.disable_upstream_validation", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteNpmCustomWithAuthExample(context map[string]interface{}) string { + return acctest.Nprintf(` +data "google_project" "project" {} + +resource "google_secret_manager_secret" "tf-test-example-remote-secret%{random_suffix}" { + secret_id = "tf-test-example-secret%{random_suffix}" + replication { + auto {} + } +} + +resource "google_secret_manager_secret_version" "tf-test-example-remote-secret%{random_suffix}_version" { + secret = google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id + secret_data = "tf-test-remote-password%{random_suffix}" +} + +resource "google_secret_manager_secret_iam_member" "secret-access" { + secret_id = google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id + role = "roles/secretmanager.secretAccessor" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" +} + +resource "google_artifact_registry_repository" "my-repo" { + location = "us-central1" + repository_id = "tf-test-example-npm-custom-remote%{random_suffix}" + description = "example remote custom npm repository with credentials%{random_suffix}" + format = "NPM" + mode = "REMOTE_REPOSITORY" + remote_repository_config { + description = "custom npm with credentials" + 
disable_upstream_validation = true + npm_repository { + custom_repository { + uri = "https://my.npm.registry" + } + } + upstream_credentials { + username_password_credentials { + username = "tf-test-remote-username%{random_suffix}" + password_secret_version = google_secret_manager_secret_version.tf-test-example-remote-secret%{random_suffix}_version.name + } + } + } +} +`, context) +} + +func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemotePythonCustomWithAuthExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckArtifactRegistryRepositoryDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccArtifactRegistryRepository_artifactRegistryRepositoryRemotePythonCustomWithAuthExample(context), + }, + { + ResourceName: "google_artifact_registry_repository.my-repo", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"repository_id", "location", "remote_repository_config.0.disable_upstream_validation", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccArtifactRegistryRepository_artifactRegistryRepositoryRemotePythonCustomWithAuthExample(context map[string]interface{}) string { + return acctest.Nprintf(` +data "google_project" "project" {} + +resource "google_secret_manager_secret" "tf-test-example-remote-secret%{random_suffix}" { + secret_id = "tf-test-example-secret%{random_suffix}" + replication { + auto {} + } +} + +resource "google_secret_manager_secret_version" "tf-test-example-remote-secret%{random_suffix}_version" { + secret = google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id + secret_data = "tf-test-remote-password%{random_suffix}" +} + +resource "google_secret_manager_secret_iam_member" "secret-access" { 
+ secret_id = google_secret_manager_secret.tf-test-example-remote-secret%{random_suffix}.id + role = "roles/secretmanager.secretAccessor" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" +} + +resource "google_artifact_registry_repository" "my-repo" { + location = "us-central1" + repository_id = "tf-test-example-python-custom-remote%{random_suffix}" + description = "example remote custom python repository with credentia%{random_suffix}" + format = "PYTHON" + mode = "REMOTE_REPOSITORY" + remote_repository_config { + description = "custom python with credentials" + disable_upstream_validation = true + python_repository { + custom_repository { + uri = "https://my.python.registry" + } + } + upstream_credentials { + username_password_credentials { + username = "tf-test-remote-username%{random_suffix}" + password_secret_version = google_secret_manager_secret_version.tf-test-example-remote-secret%{random_suffix}_version.name } } } diff --git a/google-beta/services/artifactregistry/resource_artifact_registry_vpcsc_config.go b/google-beta/services/artifactregistry/resource_artifact_registry_vpcsc_config.go index dd52524614..95728d4c69 100644 --- a/google-beta/services/artifactregistry/resource_artifact_registry_vpcsc_config.go +++ b/google-beta/services/artifactregistry/resource_artifact_registry_vpcsc_config.go @@ -20,6 +20,7 @@ package artifactregistry import ( "fmt" "log" + "net/http" "reflect" "time" @@ -122,6 +123,7 @@ func resourceArtifactRegistryVPCSCConfigCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -130,6 +132,7 @@ func resourceArtifactRegistryVPCSCConfigCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating
VPCSCConfig: %s", err) @@ -172,12 +175,14 @@ func resourceArtifactRegistryVPCSCConfigRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ArtifactRegistryVPCSCConfig %q", d.Id())) @@ -231,6 +236,7 @@ func resourceArtifactRegistryVPCSCConfigUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating VPCSCConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -245,6 +251,7 @@ func resourceArtifactRegistryVPCSCConfigUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/backupdr/data_source_backup_dr_management_server_test.go b/google-beta/services/backupdr/data_source_backup_dr_management_server_test.go index 620301f019..39bed36676 100644 --- a/google-beta/services/backupdr/data_source_backup_dr_management_server_test.go +++ b/google-beta/services/backupdr/data_source_backup_dr_management_server_test.go @@ -3,10 +3,14 @@ package backupdr_test import ( - "testing" - + "fmt" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + "strings" + "testing" ) func TestAccDataSourceGoogleBackupDRManagementServer_basic(t *testing.T) { @@ -20,6 +24,7 @@ func 
TestAccDataSourceGoogleBackupDRManagementServer_basic(t *testing.T) { acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckBackupDRManagementServerDestroyProducer(t), Steps: []resource.TestStep{ { Config: testAccDataSourceGoogleBackupDRManagementServer_basic(context), @@ -31,6 +36,45 @@ func TestAccDataSourceGoogleBackupDRManagementServer_basic(t *testing.T) { }) } +func testAccCheckBackupDRManagementServerDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for name, rs := range s.RootModule().Resources { + if rs.Type != "google_backup_dr_management_server" { + continue + } + if strings.HasPrefix(name, "data.") { + continue + } + + config := acctest.GoogleProviderConfig(t) + + url, err := tpgresource.ReplaceVarsForTest(config, rs, "{{BackupDRBasePath}}projects/{{project}}/locations/{{location}}/managementServers/{{name}}") + if err != nil { + return err + } + + billingProject := "" + + if config.BillingProject != "" { + billingProject = config.BillingProject + } + + _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: billingProject, + RawURL: url, + UserAgent: config.UserAgent, + }) + if err == nil { + return fmt.Errorf("BackupDRManagementServer still exists at %s", url) + } + } + + return nil + } +} + func testAccDataSourceGoogleBackupDRManagementServer_basic(context map[string]interface{}) string { return acctest.Nprintf(` data "google_compute_network" "default" { diff --git a/google-beta/services/backupdr/resource_backup_dr_management_server.go b/google-beta/services/backupdr/resource_backup_dr_management_server.go index 6d28b7910a..9681d72225 100644 --- a/google-beta/services/backupdr/resource_backup_dr_management_server.go +++ b/google-beta/services/backupdr/resource_backup_dr_management_server.go @@ -20,6 +20,7 @@ 
package backupdr import ( "fmt" "log" + "net/http" "reflect" "time" @@ -170,6 +171,7 @@ func resourceBackupDRManagementServerCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -178,6 +180,7 @@ func resourceBackupDRManagementServerCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ManagementServer: %s", err) @@ -240,12 +243,14 @@ func resourceBackupDRManagementServerRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("BackupDRManagementServer %q", d.Id())) @@ -298,6 +303,8 @@ func resourceBackupDRManagementServerDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ManagementServer %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -307,6 +314,7 @@ func resourceBackupDRManagementServerDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ManagementServer") diff --git a/google-beta/services/beyondcorp/resource_beyondcorp_app_connection.go b/google-beta/services/beyondcorp/resource_beyondcorp_app_connection.go index 8ef84e5bdd..4220e219cf 100644 --- a/google-beta/services/beyondcorp/resource_beyondcorp_app_connection.go +++ b/google-beta/services/beyondcorp/resource_beyondcorp_app_connection.go @@ 
-20,6 +20,7 @@ package beyondcorp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -238,6 +239,7 @@ func resourceBeyondcorpAppConnectionCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -246,6 +248,7 @@ func resourceBeyondcorpAppConnectionCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AppConnection: %s", err) @@ -308,12 +311,14 @@ func resourceBeyondcorpAppConnectionRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("BeyondcorpAppConnection %q", d.Id())) @@ -404,6 +409,7 @@ func resourceBeyondcorpAppConnectionUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating AppConnection %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -447,6 +453,7 @@ func resourceBeyondcorpAppConnectionUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -494,6 +501,8 @@ func resourceBeyondcorpAppConnectionDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AppConnection %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -503,6 +512,7 @@ func resourceBeyondcorpAppConnectionDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, 
Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AppConnection") diff --git a/google-beta/services/beyondcorp/resource_beyondcorp_app_connector.go b/google-beta/services/beyondcorp/resource_beyondcorp_app_connector.go index d54e3e49dd..a61e7153ed 100644 --- a/google-beta/services/beyondcorp/resource_beyondcorp_app_connector.go +++ b/google-beta/services/beyondcorp/resource_beyondcorp_app_connector.go @@ -20,6 +20,7 @@ package beyondcorp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -181,6 +182,7 @@ func resourceBeyondcorpAppConnectorCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -189,6 +191,7 @@ func resourceBeyondcorpAppConnectorCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AppConnector: %s", err) @@ -251,12 +254,14 @@ func resourceBeyondcorpAppConnectorRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("BeyondcorpAppConnector %q", d.Id())) @@ -329,6 +334,7 @@ func resourceBeyondcorpAppConnectorUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating AppConnector %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -364,6 +370,7 @@ func resourceBeyondcorpAppConnectorUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -411,6 +418,8 @@ func resourceBeyondcorpAppConnectorDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AppConnector %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -420,6 +429,7 @@ func resourceBeyondcorpAppConnectorDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AppConnector") diff --git a/google-beta/services/beyondcorp/resource_beyondcorp_app_gateway.go b/google-beta/services/beyondcorp/resource_beyondcorp_app_gateway.go index 7f47a64cb9..288e65191f 100644 --- a/google-beta/services/beyondcorp/resource_beyondcorp_app_gateway.go +++ b/google-beta/services/beyondcorp/resource_beyondcorp_app_gateway.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "time" @@ -214,6 +215,7 @@ func resourceBeyondcorpAppGatewayCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -222,6 +224,7 @@ func resourceBeyondcorpAppGatewayCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AppGateway: %s", err) @@ -284,12 +287,14 @@ func resourceBeyondcorpAppGatewayRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("BeyondcorpAppGateway %q", d.Id())) @@ -362,6 +367,8 @@ func resourceBeyondcorpAppGatewayDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AppGateway %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -371,6 +378,7 @@ func resourceBeyondcorpAppGatewayDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AppGateway") diff --git a/google-beta/services/biglake/resource_biglake_catalog.go b/google-beta/services/biglake/resource_biglake_catalog.go index aed91aa9b8..922fc03295 100644 --- a/google-beta/services/biglake/resource_biglake_catalog.go +++ b/google-beta/services/biglake/resource_biglake_catalog.go @@ -20,6 +20,7 @@ package biglake import ( "fmt" "log" + "net/http" "time" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" @@ -130,6 +131,7 @@ func resourceBiglakeCatalogCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -138,6 +140,7 @@ func resourceBiglakeCatalogCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Catalog: %s", err) @@ -180,12 +183,14 @@ func resourceBiglakeCatalogRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("BiglakeCatalog %q", d.Id())) @@ -238,6 +243,8 @@ func resourceBiglakeCatalogDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Catalog %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -247,6 +254,7 @@ func resourceBiglakeCatalogDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Catalog") diff --git a/google-beta/services/biglake/resource_biglake_database.go b/google-beta/services/biglake/resource_biglake_database.go index 048dbb566c..a042f5550f 100644 --- a/google-beta/services/biglake/resource_biglake_database.go +++ b/google-beta/services/biglake/resource_biglake_database.go @@ -20,6 +20,7 @@ package biglake import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -159,6 +160,7 @@ func resourceBiglakeDatabaseCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -167,6 +169,7 @@ func resourceBiglakeDatabaseCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Database: %s", err) @@ -203,12 +206,14 @@ func resourceBiglakeDatabaseRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("BiglakeDatabase %q", d.Id())) @@ -265,6 +270,7 @@ 
func resourceBiglakeDatabaseUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Database %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("type") { @@ -296,6 +302,7 @@ func resourceBiglakeDatabaseUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -330,6 +337,8 @@ func resourceBiglakeDatabaseDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Database %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -339,6 +348,7 @@ func resourceBiglakeDatabaseDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Database") diff --git a/google-beta/services/biglake/resource_biglake_table.go b/google-beta/services/biglake/resource_biglake_table.go index 184a640a1c..1cd9016a95 100644 --- a/google-beta/services/biglake/resource_biglake_table.go +++ b/google-beta/services/biglake/resource_biglake_table.go @@ -20,6 +20,7 @@ package biglake import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -195,6 +196,7 @@ func resourceBiglakeTableCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -203,6 +205,7 @@ func resourceBiglakeTableCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Table: %s", err) @@ -239,12 +242,14 @@ func resourceBiglakeTableRead(d *schema.ResourceData, meta 
interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("BiglakeTable %q", d.Id())) @@ -304,6 +309,7 @@ func resourceBiglakeTableUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Table %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("type") { @@ -335,6 +341,7 @@ func resourceBiglakeTableUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -369,6 +376,8 @@ func resourceBiglakeTableDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Table %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -378,6 +387,7 @@ func resourceBiglakeTableDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Table") diff --git a/google-beta/services/bigquery/resource_bigquery_dataset.go b/google-beta/services/bigquery/resource_bigquery_dataset.go index 74d859fd8d..2596f80b00 100644 --- a/google-beta/services/bigquery/resource_bigquery_dataset.go +++ b/google-beta/services/bigquery/resource_bigquery_dataset.go @@ -20,6 +20,7 @@ package bigquery import ( "fmt" "log" + "net/http" "reflect" "regexp" "time" @@ -581,6 +582,7 @@ func resourceBigQueryDatasetCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -589,6 +591,7 @@ func resourceBigQueryDatasetCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Dataset: %s", err) @@ -631,12 +634,14 @@ func resourceBigQueryDatasetRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("BigQueryDataset %q", d.Id())) @@ -835,6 +840,7 @@ func resourceBigQueryDatasetUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Dataset %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -849,6 +855,7 @@ func resourceBigQueryDatasetUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -887,6 +894,8 @@ func resourceBigQueryDatasetDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Dataset %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -896,6 +905,7 @@ func resourceBigQueryDatasetDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Dataset") diff --git 
a/google-beta/services/bigquery/resource_bigquery_dataset_access.go b/google-beta/services/bigquery/resource_bigquery_dataset_access.go index f30b0f3845..48012f9820 100644 --- a/google-beta/services/bigquery/resource_bigquery_dataset_access.go +++ b/google-beta/services/bigquery/resource_bigquery_dataset_access.go @@ -20,6 +20,7 @@ package bigquery import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -470,6 +471,7 @@ func resourceBigQueryDatasetAccessCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -478,6 +480,7 @@ func resourceBigQueryDatasetAccessCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsBigqueryIAMQuotaError}, }) if err != nil { @@ -549,12 +552,14 @@ func resourceBigQueryDatasetAccessRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsBigqueryIAMQuotaError}, }) if err != nil { @@ -647,6 +652,8 @@ func resourceBigQueryDatasetAccessDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting DatasetAccess %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -656,6 +663,7 @@ func resourceBigQueryDatasetAccessDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: 
[]transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsBigqueryIAMQuotaError}, }) if err != nil { diff --git a/google-beta/services/bigquery/resource_bigquery_job.go b/google-beta/services/bigquery/resource_bigquery_job.go index 5e8d9c7d20..259fbad329 100644 --- a/google-beta/services/bigquery/resource_bigquery_job.go +++ b/google-beta/services/bigquery/resource_bigquery_job.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "regexp" "time" @@ -1058,6 +1059,7 @@ func resourceBigQueryJobCreate(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1066,6 +1068,7 @@ func resourceBigQueryJobCreate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Job: %s", err) @@ -1155,12 +1158,14 @@ func resourceBigQueryJobRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("BigQueryJob %q", d.Id())) diff --git a/google-beta/services/bigquery/resource_bigquery_routine.go b/google-beta/services/bigquery/resource_bigquery_routine.go index f15397ce77..9f37337181 100644 --- a/google-beta/services/bigquery/resource_bigquery_routine.go +++ b/google-beta/services/bigquery/resource_bigquery_routine.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" "reflect" "time" @@ -436,6 +437,7 @@ func resourceBigQueryRoutineCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -444,6 +446,7 @@ func resourceBigQueryRoutineCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Routine: %s", err) @@ -486,12 +489,14 @@ func resourceBigQueryRoutineRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("BigQueryRoutine %q", d.Id())) @@ -663,6 +668,7 @@ func resourceBigQueryRoutineUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Routine %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -677,6 +683,7 @@ func resourceBigQueryRoutineUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -715,6 +722,8 @@ func resourceBigQueryRoutineDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Routine %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -724,6 +733,7 @@ func resourceBigQueryRoutineDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Routine") diff --git a/google-beta/services/bigquery/resource_bigquery_table.go 
b/google-beta/services/bigquery/resource_bigquery_table.go index b1a4a5f412..3e6e2a9b01 100644 --- a/google-beta/services/bigquery/resource_bigquery_table.go +++ b/google-beta/services/bigquery/resource_bigquery_table.go @@ -258,19 +258,17 @@ func bigQueryTableNormalizePolicyTags(val interface{}) interface{} { // Compares two existing schema implementations and decides if // it is changeable.. pairs with a force new on not changeable -func resourceBigQueryTableSchemaIsChangeable(old, new interface{}) (bool, error) { +func resourceBigQueryTableSchemaIsChangeable(old, new interface{}, isExternalTable bool, topLevel bool) (bool, error) { switch old.(type) { case []interface{}: arrayOld := old.([]interface{}) arrayNew, ok := new.([]interface{}) + sameNameColumns := 0 + droppedColumns := 0 if !ok { // if not both arrays not changeable return false, nil } - if len(arrayOld) > len(arrayNew) { - // if not growing not changeable - return false, nil - } if err := bigQueryTablecheckNameExists(arrayOld); err != nil { return false, err } @@ -291,16 +289,28 @@ func resourceBigQueryTableSchemaIsChangeable(old, new interface{}) (bool, error) } } for key := range mapOld { - // all old keys should be represented in the new config + // dropping top level columns can happen in-place + // but this doesn't apply to external tables if _, ok := mapNew[key]; !ok { - return false, nil + if !topLevel || isExternalTable { + return false, nil + } + droppedColumns += 1 + continue } - if isChangable, err := - resourceBigQueryTableSchemaIsChangeable(mapOld[key], mapNew[key]); err != nil || !isChangable { + + isChangable, err := resourceBigQueryTableSchemaIsChangeable(mapOld[key], mapNew[key], isExternalTable, false) + if err != nil || !isChangable { return false, err + } else if isChangable && topLevel { + // top level column that exists in the new schema + sameNameColumns += 1 } } - return true, nil + // in-place column dropping alongside column additions is not allowed + // as of now because 
user intention can be ambiguous (e.g. column renaming) + newColumns := len(arrayNew) - sameNameColumns + return (droppedColumns == 0) || (newColumns == 0), nil case map[string]interface{}: objectOld := old.(map[string]interface{}) objectNew, ok := new.(map[string]interface{}) @@ -339,7 +349,7 @@ func resourceBigQueryTableSchemaIsChangeable(old, new interface{}) (bool, error) return false, nil } case "fields": - return resourceBigQueryTableSchemaIsChangeable(valOld, valNew) + return resourceBigQueryTableSchemaIsChangeable(valOld, valNew, isExternalTable, false) // other parameters: description, policyTags and // policyTags.names[] are changeable @@ -378,7 +388,8 @@ func resourceBigQueryTableSchemaCustomizeDiffFunc(d tpgresource.TerraformResourc // same as above log.Printf("[DEBUG] unable to unmarshal json customized diff - %v", err) } - isChangeable, err := resourceBigQueryTableSchemaIsChangeable(old, new) + _, isExternalTable := d.GetOk("external_data_configuration") + isChangeable, err := resourceBigQueryTableSchemaIsChangeable(old, new, isExternalTable, true) if err != nil { return err } @@ -1296,6 +1307,12 @@ func ResourceBigQueryTable() *schema.Resource { }, }, }, + "resource_tags": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The tags attached to this table. Tag keys are globally unique. Tag key is expected to be in the namespaced format, for example "123456789012/environment" where 123456789012 is the ID of the parent organization or project resource for this tag key. 
Tag value is expected to be the short name, for example "Production".`, + }, }, UseJSONNumber: true, } @@ -1410,6 +1427,8 @@ func resourceTable(d *schema.ResourceData, meta interface{}) (*bigquery.Table, e table.TableConstraints = tableConstraints } + table.ResourceTags = tpgresource.ExpandStringMap(d, "resource_tags") + return table, nil } @@ -1683,6 +1702,10 @@ func resourceBigQueryTableRead(d *schema.ResourceData, meta interface{}) error { } } + if err := d.Set("resource_tags", res.ResourceTags); err != nil { + return fmt.Errorf("Error setting resource tags: %s", err) + } + // TODO: Update when the Get API fields for TableReplicationInfo are available in the client library. url, err := tpgresource.ReplaceVars(d, config, "{{BigQueryBasePath}}projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}}") if err != nil { @@ -1712,6 +1735,12 @@ func resourceBigQueryTableRead(d *schema.ResourceData, meta interface{}) error { return nil } +type TableReference struct { + project string + datasetID string + tableID string +} + func resourceBigQueryTableUpdate(d *schema.ResourceData, meta interface{}) error { config := meta.(*transport_tpg.Config) userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) @@ -1734,6 +1763,16 @@ func resourceBigQueryTableUpdate(d *schema.ResourceData, meta interface{}) error datasetID := d.Get("dataset_id").(string) tableID := d.Get("table_id").(string) + tableReference := &TableReference{ + project: project, + datasetID: datasetID, + tableID: tableID, + } + + if err = resourceBigQueryTableColumnDrop(config, userAgent, table, tableReference); err != nil { + return err + } + if _, err = config.NewBigQueryClient(userAgent).Tables.Update(project, datasetID, tableID, table).Do(); err != nil { return err } @@ -1741,10 +1780,63 @@ func resourceBigQueryTableUpdate(d *schema.ResourceData, meta interface{}) error return resourceBigQueryTableRead(d, meta) } +func resourceBigQueryTableColumnDrop(config *transport_tpg.Config, 
userAgent string, table *bigquery.Table, tableReference *TableReference) error { + oldTable, err := config.NewBigQueryClient(userAgent).Tables.Get(tableReference.project, tableReference.datasetID, tableReference.tableID).Do() + if err != nil { + return err + } + + if table.Schema == nil { + return nil + } + + newTableFields := map[string]bool{} + for _, field := range table.Schema.Fields { + newTableFields[field.Name] = true + } + + droppedColumns := []string{} + for _, field := range oldTable.Schema.Fields { + if !newTableFields[field.Name] { + droppedColumns = append(droppedColumns, field.Name) + } + } + + if len(droppedColumns) > 0 { + droppedColumnsString := strings.Join(droppedColumns, ", DROP COLUMN ") + + dropColumnsDDL := fmt.Sprintf("ALTER TABLE `%s.%s.%s` DROP COLUMN %s", tableReference.project, tableReference.datasetID, tableReference.tableID, droppedColumnsString) + log.Printf("[INFO] Dropping columns in-place: %s", dropColumnsDDL) + + useLegacySQL := false + req := &bigquery.QueryRequest{ + Query: dropColumnsDDL, + UseLegacySql: &useLegacySQL, + } + + _, err = config.NewBigQueryClient(userAgent).Jobs.Query(tableReference.project, req).Do() + if err != nil { + return err + } + } + + return nil +} + func resourceBigQueryTableDelete(d *schema.ResourceData, meta interface{}) error { if d.Get("deletion_protection").(bool) { - return fmt.Errorf("cannot destroy instance without setting deletion_protection=false and running `terraform apply`") + return fmt.Errorf("cannot destroy table %v without setting deletion_protection=false and running `terraform apply`", d.Id()) } + if v, ok := d.GetOk("resource_tags"); ok { + var resourceTags []string + + for k, v := range v.(map[string]interface{}) { + resourceTags = append(resourceTags, fmt.Sprintf("%s:%s", k, v.(string))) + } + + return fmt.Errorf("cannot destroy table %v without clearing the following resource tags: %v", d.Id(), resourceTags) + } + config := meta.(*transport_tpg.Config) userAgent, err := 
tpgresource.GenerateUserAgentString(d, config.UserAgent) if err != nil { @@ -2357,6 +2449,11 @@ func expandPrimaryKey(configured interface{}) *bigquery.TableConstraintsPrimaryK columns := []string{} for _, rawColumn := range raw["columns"].([]interface{}) { + if rawColumn == nil { + // Terraform reads "" as nil, which ends up crashing when we cast below. + // Sending "" to the API triggers a 400, which is okay. + rawColumn = "" + } columns = append(columns, rawColumn.(string)) } if len(columns) > 0 { diff --git a/google-beta/services/bigquery/resource_bigquery_table_internal_test.go b/google-beta/services/bigquery/resource_bigquery_table_internal_test.go index 256aa0501f..1c481ef831 100644 --- a/google-beta/services/bigquery/resource_bigquery_table_internal_test.go +++ b/google-beta/services/bigquery/resource_bigquery_table_internal_test.go @@ -391,10 +391,11 @@ func TestBigQueryTableSchemaDiffSuppress(t *testing.T) { } type testUnitBigQueryDataTableJSONChangeableTestCase struct { - name string - jsonOld string - jsonNew string - changeable bool + name string + jsonOld string + jsonNew string + isExternalTable bool + changeable bool } func (testcase *testUnitBigQueryDataTableJSONChangeableTestCase) check(t *testing.T) { @@ -405,7 +406,7 @@ func (testcase *testUnitBigQueryDataTableJSONChangeableTestCase) check(t *testin if err := json.Unmarshal([]byte(testcase.jsonNew), &new); err != nil { t.Fatalf("unable to unmarshal json - %v", err) } - changeable, err := resourceBigQueryTableSchemaIsChangeable(old, new) + changeable, err := resourceBigQueryTableSchemaIsChangeable(old, new, testcase.isExternalTable, true) if err != nil { t.Errorf("%s failed unexpectedly: %s", testcase.name, err) } @@ -421,6 +422,11 @@ func (testcase *testUnitBigQueryDataTableJSONChangeableTestCase) check(t *testin d.Before["schema"] = testcase.jsonOld d.After["schema"] = testcase.jsonNew + if testcase.isExternalTable { + d.Before["external_data_configuration"] = "" + 
d.After["external_data_configuration"] = "" + } + err = resourceBigQueryTableSchemaCustomizeDiffFunc(d) if err != nil { t.Errorf("error on testcase %s - %v", testcase.name, err) @@ -430,7 +436,7 @@ func (testcase *testUnitBigQueryDataTableJSONChangeableTestCase) check(t *testin } } -var testUnitBigQueryDataTableIsChangableTestCases = []testUnitBigQueryDataTableJSONChangeableTestCase{ +var testUnitBigQueryDataTableIsChangeableTestCases = []testUnitBigQueryDataTableJSONChangeableTestCase{ { name: "defaultEquality", jsonOld: "[{\"name\": \"someValue\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }]", @@ -447,7 +453,14 @@ var testUnitBigQueryDataTableIsChangableTestCases = []testUnitBigQueryDataTableJ name: "arraySizeDecreases", jsonOld: "[{\"name\": \"someValue\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }, {\"name\": \"asomeValue\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }]", jsonNew: "[{\"name\": \"someValue\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }]", - changeable: false, + changeable: true, + }, + { + name: "externalArraySizeDecreases", + jsonOld: "[{\"name\": \"someValue\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }, {\"name\": \"asomeValue\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }]", + jsonNew: "[{\"name\": \"someValue\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }]", + isExternalTable: true, + changeable: false, }, { name: "descriptionChanges", @@ -525,6 +538,24 @@ var testUnitBigQueryDataTableIsChangableTestCases = []testUnitBigQueryDataTableJ jsonNew: "[{\"name\": \"value3\", \"type\" : \"BOOLEAN\", \"mode\" : \"NULLABLE\", \"description\" : \"newVal\" }, {\"name\": \"value1\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }]", changeable: false, }, + { + 
name: "renameRequiredColumn", + jsonOld: "[{\"name\": \"value1\", \"type\" : \"INTEGER\", \"mode\" : \"REQUIRED\", \"description\" : \"someVal\" }]", + jsonNew: "[{\"name\": \"value3\", \"type\" : \"INTEGER\", \"mode\" : \"REQUIRED\", \"description\" : \"someVal\" }]", + changeable: false, + }, + { + name: "renameNullableColumn", + jsonOld: "[{\"name\": \"value1\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }]", + jsonNew: "[{\"name\": \"value3\", \"type\" : \"INTEGER\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }]", + changeable: false, + }, + { + name: "typeModeReqToNullAndColumnDropped", + jsonOld: "[{\"name\": \"someValue\", \"type\" : \"BOOLEAN\", \"mode\" : \"REQUIRED\", \"description\" : \"someVal\" }, {\"name\": \"someValue2\", \"type\" : \"BOOLEAN\", \"mode\" : \"NULLABLE\", \"description\" : \"someVal\" }]", + jsonNew: "[{\"name\": \"someValue\", \"type\" : \"BOOLEAN\", \"mode\" : \"NULLABLE\", \"description\" : \"some new value\" }]", + changeable: true, + }, { name: "policyTags", jsonOld: `[ @@ -550,15 +581,29 @@ var testUnitBigQueryDataTableIsChangableTestCases = []testUnitBigQueryDataTableJ }, } -func TestUnitBigQueryDataTable_schemaIsChangable(t *testing.T) { +func TestUnitBigQueryDataTable_schemaIsChangeable(t *testing.T) { t.Parallel() - for _, testcase := range testUnitBigQueryDataTableIsChangableTestCases { + for _, testcase := range testUnitBigQueryDataTableIsChangeableTestCases { testcase.check(t) + } +} + +func TestUnitBigQueryDataTable_schemaIsChangeableNested(t *testing.T) { + t.Parallel() + // Only top level column drops are changeable + customNestedValues := map[string]bool{"arraySizeDecreases": false, "typeModeReqToNullAndColumnDropped": false} + for _, testcase := range testUnitBigQueryDataTableIsChangeableTestCases { + changeable := testcase.changeable + if overrideValue, ok := customNestedValues[testcase.name]; ok { + changeable = overrideValue + } + testcaseNested := 
&testUnitBigQueryDataTableJSONChangeableTestCase{ testcase.name + "Nested", fmt.Sprintf("[{\"name\": \"someValue\", \"type\" : \"INTEGER\", \"fields\" : %s }]", testcase.jsonOld), fmt.Sprintf("[{\"name\": \"someValue\", \"type\" : \"INT64\", \"fields\" : %s }]", testcase.jsonNew), - testcase.changeable, + testcase.isExternalTable, + changeable, } testcaseNested.check(t) } diff --git a/google-beta/services/bigquery/resource_bigquery_table_test.go b/google-beta/services/bigquery/resource_bigquery_table_test.go index 3ccd89ce02..37a2c5a76a 100644 --- a/google-beta/services/bigquery/resource_bigquery_table_test.go +++ b/google-beta/services/bigquery/resource_bigquery_table_test.go @@ -47,6 +47,39 @@ func TestAccBigQueryTable_Basic(t *testing.T) { }) } +func TestAccBigQueryTable_DropColumns(t *testing.T) { + t.Parallel() + + datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableTimePartitioningDropColumns(datasetID, tableID), + }, + { + ResourceName: "google_bigquery_table.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + { + Config: testAccBigQueryTableTimePartitioningDropColumnsUpdate(datasetID, tableID), + }, + { + ResourceName: "google_bigquery_table.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + }, + }) +} + func TestAccBigQueryTable_Kms(t *testing.T) { t.Parallel() resourceName := "google_bigquery_table.test" @@ -1521,6 +1554,56 @@ func TestAccBigQueryTable_TableReplicationInfo_WithReplicationInterval(t *testin }) } +func TestAccBigQueryTable_ResourceTags(t 
*testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project_id": envvar.GetTestProjectFromEnv(), + "dataset_id": fmt.Sprintf("tf_test_dataset_%s", acctest.RandString(t, 10)), + "table_id": fmt.Sprintf("tf_test_table_%s", acctest.RandString(t, 10)), + "tag_key_name1": fmt.Sprintf("tf_test_tag_key1_%s", acctest.RandString(t, 10)), + "tag_value_name1": fmt.Sprintf("tf_test_tag_value1_%s", acctest.RandString(t, 10)), + "tag_key_name2": fmt.Sprintf("tf_test_tag_key2_%s", acctest.RandString(t, 10)), + "tag_value_name2": fmt.Sprintf("tf_test_tag_value2_%s", acctest.RandString(t, 10)), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableWithResourceTags(context), + }, + { + ResourceName: "google_bigquery_table.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + { + Config: testAccBigQueryTableWithResourceTagsUpdate(context), + }, + { + ResourceName: "google_bigquery_table.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + // testAccBigQueryTableWithResourceTagsDestroy must be called at the end of this test to clear the resource tag bindings of the table before deletion. 
+ { + Config: testAccBigQueryTableWithResourceTagsDestroy(context), + }, + { + ResourceName: "google_bigquery_table.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + }, + }) +} + func testAccCheckBigQueryExtData(t *testing.T, expectedQuoteChar string) resource.TestCheckFunc { return func(s *terraform.State) error { for _, rs := range s.RootModule().Resources { @@ -1761,6 +1844,62 @@ EOH `, datasetID, tableID, partitioningType) } +func testAccBigQueryTableTimePartitioningDropColumns(datasetID, tableID string) string { + return fmt.Sprintf(` +resource "google_bigquery_dataset" "test" { + dataset_id = "%s" +} + +resource "google_bigquery_table" "test" { + deletion_protection = false + table_id = "%s" + dataset_id = google_bigquery_dataset.test.dataset_id + + schema = <[^/]+)/locations/(?P<region>[^/]+)/environments/(?P<environment>[^/]+)/userWorkloadsSecrets/(?P<name>[^/]+)", "(?P<project>[^/]+)/(?P<region>[^/]+)/(?P<environment>[^/]+)/(?P<name>[^/]+)", "(?P<name>[^/]+)"}, d, config); err != nil { + return nil, err + } + + // Replace import id for the resource id + id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{region}}/environments/{{environment}}/userWorkloadsSecrets/{{name}}") + if err != nil { + return nil, fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + // retrieve "data" in advance, because the Read function won't do it. 
+ userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return nil, err + } + + res, err := config.NewComposerClient(userAgent).Projects.Locations.Environments.UserWorkloadsSecrets.Get(id).Do() + if err != nil { + return nil, err + } + + if err := d.Set("data", res.Data); err != nil { + return nil, fmt.Errorf("Error setting UserWorkloadsSecret Data: %s", err) + } + + return []*schema.ResourceData{d}, nil +} + +func resourceComposerUserWorkloadsSecretName(d *schema.ResourceData, config *transport_tpg.Config) (*UserWorkloadsSecretsName, error) { + project, err := tpgresource.GetProject(d, config) + if err != nil { + return nil, err + } + + region, err := tpgresource.GetRegion(d, config) + if err != nil { + return nil, err + } + + return &UserWorkloadsSecretsName{ + Project: project, + Region: region, + Environment: d.Get("environment").(string), + Secret: d.Get("name").(string), + }, nil +} + +type UserWorkloadsSecretsName struct { + Project string + Region string + Environment string + Secret string +} + +func (n *UserWorkloadsSecretsName) ResourceName() string { + return fmt.Sprintf("projects/%s/locations/%s/environments/%s/userWorkloadsSecrets/%s", n.Project, n.Region, n.Environment, n.Secret) +} + +func (n *UserWorkloadsSecretsName) ParentName() string { + return fmt.Sprintf("projects/%s/locations/%s/environments/%s", n.Project, n.Region, n.Environment) +} diff --git a/google-beta/services/composer/resource_composer_user_workloads_secret_test.go b/google-beta/services/composer/resource_composer_user_workloads_secret_test.go new file mode 100644 index 0000000000..8e1b2d5ad5 --- /dev/null +++ b/google-beta/services/composer/resource_composer_user_workloads_secret_test.go @@ -0,0 +1,179 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 +package composer_test + +import ( + "fmt" + "strings" + "testing" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/composer" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" +) + +const testComposerUserWorkloadsSecretPrefix = "tf-test-composer-secret" + +func TestAccComposerUserWorkloadsSecret_basic(t *testing.T) { + t.Parallel() + + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, acctest.RandInt(t)) + secretName := fmt.Sprintf("%s-%d", testComposerUserWorkloadsSecretPrefix, acctest.RandInt(t)) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + // CheckDestroy: testAccComposerUserWorkloadsSecretDestroy(t), + Steps: []resource.TestStep{ + { + Config: testAccComposerUserWorkloadsSecret_basic(envName, secretName, envvar.GetTestProjectFromEnv(), envvar.GetTestRegionFromEnv()), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet("google_composer_user_workloads_secret.test", "data.username"), + resource.TestCheckResourceAttrSet("google_composer_user_workloads_secret.test", "data.password"), + ), + }, + { + ResourceName: "google_composer_user_workloads_secret.test", + ImportState: true, + }, + }, + }) +} + +func TestAccComposerUserWorkloadsSecret_update(t *testing.T) { + t.Parallel() + + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, acctest.RandInt(t)) + secretName := fmt.Sprintf("%s-%d", testComposerUserWorkloadsSecretPrefix, acctest.RandInt(t)) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Steps: 
[]resource.TestStep{ + { + Config: testAccComposerUserWorkloadsSecret_basic(envName, secretName, envvar.GetTestProjectFromEnv(), envvar.GetTestRegionFromEnv()), + }, + { + Config: testAccComposerUserWorkloadsSecret_update(envName, secretName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet("google_composer_user_workloads_secret.test", "data.email"), + resource.TestCheckResourceAttrSet("google_composer_user_workloads_secret.test", "data.password"), + resource.TestCheckNoResourceAttr("google_composer_user_workloads_secret.test", "data.username"), + ), + }, + }, + }) +} + +func TestAccComposerUserWorkloadsSecret_delete(t *testing.T) { + t.Parallel() + + envName := fmt.Sprintf("%s-%d", testComposerEnvironmentPrefix, acctest.RandInt(t)) + secretName := fmt.Sprintf("%s-%d", testComposerUserWorkloadsSecretPrefix, acctest.RandInt(t)) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Steps: []resource.TestStep{ + { + Config: testAccComposerUserWorkloadsSecret_basic(envName, secretName, envvar.GetTestProjectFromEnv(), envvar.GetTestRegionFromEnv()), + }, + { + Config: testAccComposerUserWorkloadsSecret_delete(envName), + Check: resource.ComposeTestCheckFunc( + testAccComposerUserWorkloadsSecretDestroyed(t), + ), + }, + }, + }) +} + +func testAccComposerUserWorkloadsSecret_basic(envName, secretName, project, region string) string { + return fmt.Sprintf(` +resource "google_composer_environment" "test" { + name = "%s" + config { + software_config { + image_version = "composer-3-airflow-2" + } + } +} +resource "google_composer_user_workloads_secret" "test" { + environment = google_composer_environment.test.name + name = "%s" + project = "%s" + region = "%s" + data = { + username: base64encode("username"), + password: base64encode("password"), + } +} +`, envName, secretName, project, region) +} + +func 
testAccComposerUserWorkloadsSecret_update(envName, secretName string) string { + return fmt.Sprintf(` +resource "google_composer_environment" "test" { + name = "%s" + config { + software_config { + image_version = "composer-3-airflow-2" + } + } +} +resource "google_composer_user_workloads_secret" "test" { + environment = google_composer_environment.test.name + name = "%s" + data = { + email: base64encode("email"), + password: base64encode("password"), + } +} +`, envName, secretName) +} + +func testAccComposerUserWorkloadsSecret_delete(envName string) string { + return fmt.Sprintf(` +resource "google_composer_environment" "test" { + name = "%s" + config { + software_config { + image_version = "composer-3-airflow-2" + } + } +} +`, envName) +} + +func testAccComposerUserWorkloadsSecretDestroyed(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + config := acctest.GoogleProviderConfig(t) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "google_composer_user_workloads_secret" { + continue + } + + idTokens := strings.Split(rs.Primary.ID, "/") + if len(idTokens) != 8 { + return fmt.Errorf("Invalid ID %q, expected format projects/{project}/regions/{region}/environments/{environment}/userWorkloadsSecrets/{name}", rs.Primary.ID) + } + secretName := &composer.UserWorkloadsSecretsName{ + Project: idTokens[1], + Region: idTokens[3], + Environment: idTokens[5], + Secret: idTokens[7], + } + + _, err := config.NewComposerClient(config.UserAgent).Projects.Locations.Environments.UserWorkloadsSecrets.Get(secretName.ResourceName()).Do() + if err == nil { + return fmt.Errorf("secret %s still exists", secretName.ResourceName()) + } + } + + return nil + } +} diff --git a/google-beta/services/compute/compute_instance_helpers.go b/google-beta/services/compute/compute_instance_helpers.go index cbf8a09885..5c753a2e6b 100644 --- a/google-beta/services/compute/compute_instance_helpers.go +++ 
b/google-beta/services/compute/compute_instance_helpers.go @@ -544,8 +544,7 @@ func expandConfidentialInstanceConfig(d tpgresource.TerraformResourceData) *comp prefix := "confidential_instance_config.0" return &compute.ConfidentialInstanceConfig{ EnableConfidentialCompute: d.Get(prefix + ".enable_confidential_compute").(bool), - - ConfidentialInstanceType: d.Get(prefix + ".confidential_instance_type").(string), + ConfidentialInstanceType: d.Get(prefix + ".confidential_instance_type").(string), } } @@ -556,8 +555,7 @@ func flattenConfidentialInstanceConfig(ConfidentialInstanceConfig *compute.Confi return []map[string]interface{}{{ "enable_confidential_compute": ConfidentialInstanceConfig.EnableConfidentialCompute, - - "confidential_instance_type": ConfidentialInstanceConfig.ConfidentialInstanceType, + "confidential_instance_type": ConfidentialInstanceConfig.ConfidentialInstanceType, }} } diff --git a/google-beta/services/compute/resource_compute_address.go b/google-beta/services/compute/resource_compute_address.go index fbb0735a4f..af782bfe73 100644 --- a/google-beta/services/compute/resource_compute_address.go +++ b/google-beta/services/compute/resource_compute_address.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -348,6 +349,7 @@ func resourceComputeAddressCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -356,6 +358,7 @@ func resourceComputeAddressCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Address: %s", err) @@ -468,12 +471,14 @@ func resourceComputeAddressRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeAddress %q", d.Id())) @@ -582,6 +587,8 @@ func resourceComputeAddressUpdate(d *schema.ResourceData, meta interface{}) erro return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -595,6 +602,7 @@ func resourceComputeAddressUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Address %q: %s", d.Id(), err) @@ -642,6 +650,8 @@ func resourceComputeAddressDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Address %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -651,6 +661,7 @@ func resourceComputeAddressDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Address") diff --git a/google-beta/services/compute/resource_compute_autoscaler.go b/google-beta/services/compute/resource_compute_autoscaler.go index e29463818a..68339ad6a1 100644 --- a/google-beta/services/compute/resource_compute_autoscaler.go +++ b/google-beta/services/compute/resource_compute_autoscaler.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -497,6 +498,7 @@ func resourceComputeAutoscalerCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -505,6 +507,7 @@ func resourceComputeAutoscalerCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Autoscaler: %s", err) @@ -557,12 +560,14 @@ func resourceComputeAutoscalerRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeAutoscaler %q", d.Id())) @@ -655,6 +660,7 @@ func resourceComputeAutoscalerUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Autoscaler %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -669,6 +675,7 @@ func resourceComputeAutoscalerUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -715,6 +722,8 @@ func resourceComputeAutoscalerDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Autoscaler %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -724,6 +733,7 @@ func resourceComputeAutoscalerDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Autoscaler") diff --git 
a/google-beta/services/compute/resource_compute_backend_bucket.go b/google-beta/services/compute/resource_compute_backend_bucket.go index c50053fc75..e2b62c740e 100644 --- a/google-beta/services/compute/resource_compute_backend_bucket.go +++ b/google-beta/services/compute/resource_compute_backend_bucket.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -339,6 +340,7 @@ func resourceComputeBackendBucketCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -347,6 +349,7 @@ func resourceComputeBackendBucketCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating BackendBucket: %s", err) @@ -419,12 +422,14 @@ func resourceComputeBackendBucketRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeBackendBucket %q", d.Id())) @@ -544,6 +549,7 @@ func resourceComputeBackendBucketUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating BackendBucket %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -558,6 +564,7 @@ func resourceComputeBackendBucketUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -623,6 +630,8 @@ func resourceComputeBackendBucketDelete(d 
*schema.ResourceData, meta interface{}
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting BackendBucket %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -632,6 +641,7 @@ func resourceComputeBackendBucketDelete(d *schema.ResourceData, meta interface{}
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "BackendBucket")
diff --git a/google-beta/services/compute/resource_compute_backend_bucket_signed_url_key.go b/google-beta/services/compute/resource_compute_backend_bucket_signed_url_key.go
index dc734c8c49..d28e5d10f7 100644
--- a/google-beta/services/compute/resource_compute_backend_bucket_signed_url_key.go
+++ b/google-beta/services/compute/resource_compute_backend_bucket_signed_url_key.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -133,6 +134,7 @@ func resourceComputeBackendBucketSignedUrlKeyCreate(d *schema.ResourceData, meta
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -141,6 +143,7 @@ func resourceComputeBackendBucketSignedUrlKeyCreate(d *schema.ResourceData, meta
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating BackendBucketSignedUrlKey: %s", err)
@@ -193,12 +196,14 @@ func resourceComputeBackendBucketSignedUrlKeyRead(d *schema.ResourceData, meta i
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeBackendBucketSignedUrlKey %q", d.Id()))
@@ -261,6 +266,8 @@ func resourceComputeBackendBucketSignedUrlKeyDelete(d *schema.ResourceData, meta
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting BackendBucketSignedUrlKey %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -270,6 +277,7 @@ func resourceComputeBackendBucketSignedUrlKeyDelete(d *schema.ResourceData, meta
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "BackendBucketSignedUrlKey")
diff --git a/google-beta/services/compute/resource_compute_backend_service.go b/google-beta/services/compute/resource_compute_backend_service.go
index a248e131bd..f871a150ae 100644
--- a/google-beta/services/compute/resource_compute_backend_service.go
+++ b/google-beta/services/compute/resource_compute_backend_service.go
@@ -21,6 +21,7 @@ import (
 	"bytes"
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"regexp"
 	"time"
@@ -1458,6 +1459,7 @@ func resourceComputeBackendServiceCreate(d *schema.ResourceData, meta interface{
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -1466,6 +1468,7 @@ func resourceComputeBackendServiceCreate(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating BackendService: %s", err)
@@ -1557,12 +1560,14 @@ func resourceComputeBackendServiceRead(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeBackendService %q", d.Id()))
@@ -1880,6 +1885,7 @@ func resourceComputeBackendServiceUpdate(d *schema.ResourceData, meta interface{
 	}
 	log.Printf("[DEBUG] Updating BackendService %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
@@ -1894,6 +1900,7 @@ func resourceComputeBackendServiceUpdate(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 
 	if err != nil {
@@ -1978,6 +1985,8 @@ func resourceComputeBackendServiceDelete(d *schema.ResourceData, meta interface{
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting BackendService %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -1987,6 +1996,7 @@ func resourceComputeBackendServiceDelete(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "BackendService")
diff --git a/google-beta/services/compute/resource_compute_backend_service_signed_url_key.go b/google-beta/services/compute/resource_compute_backend_service_signed_url_key.go
index 698ad2ede7..ee7f573f58 100644
--- a/google-beta/services/compute/resource_compute_backend_service_signed_url_key.go
+++ b/google-beta/services/compute/resource_compute_backend_service_signed_url_key.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -133,6 +134,7 @@ func resourceComputeBackendServiceSignedUrlKeyCreate(d *schema.ResourceData, met
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -141,6 +143,7 @@ func resourceComputeBackendServiceSignedUrlKeyCreate(d *schema.ResourceData, met
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating BackendServiceSignedUrlKey: %s", err)
@@ -193,12 +196,14 @@ func resourceComputeBackendServiceSignedUrlKeyRead(d *schema.ResourceData, meta
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeBackendServiceSignedUrlKey %q", d.Id()))
@@ -261,6 +266,8 @@ func resourceComputeBackendServiceSignedUrlKeyDelete(d *schema.ResourceData, met
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting BackendServiceSignedUrlKey %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -270,6 +277,7 @@ func resourceComputeBackendServiceSignedUrlKeyDelete(d *schema.ResourceData, met
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "BackendServiceSignedUrlKey")
diff --git a/google-beta/services/compute/resource_compute_disk.go b/google-beta/services/compute/resource_compute_disk.go
index c83c0347e5..1ebc8b8262 100644
--- a/google-beta/services/compute/resource_compute_disk.go
+++ b/google-beta/services/compute/resource_compute_disk.go
@@ -22,6 +22,7 @@ import (
 	"errors"
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -754,8 +755,7 @@ used.`,
 				Description: `Links to the users of the disk (attached instances) in form:
 project/zones/zone/instances/instance`,
 				Elem: &schema.Schema{
-					Type: schema.TypeString,
-					DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName,
+					Type: schema.TypeString,
 				},
 			},
 			"project": {
@@ -952,6 +952,7 @@ func resourceComputeDiskCreate(d *schema.ResourceData, meta interface{}) error {
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -960,6 +961,7 @@ func resourceComputeDiskCreate(d *schema.ResourceData, meta interface{}) error {
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Disk: %s", err)
@@ -1012,12 +1014,14 @@ func resourceComputeDiskRead(d *schema.ResourceData, meta interface{}) error {
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeDisk %q", d.Id()))
@@ -1185,6 +1189,8 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error {
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -1198,6 +1204,7 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error {
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating Disk %q: %s", d.Id(), err)
@@ -1232,6 +1239,8 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error {
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -1245,6 +1254,7 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error {
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating Disk %q: %s", d.Id(), err)
@@ -1279,6 +1289,8 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error {
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -1292,6 +1304,7 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error {
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating Disk %q: %s", d.Id(), err)
@@ -1326,6 +1339,8 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error {
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -1339,6 +1354,7 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error {
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating Disk %q: %s", d.Id(), err)
@@ -1386,6 +1402,7 @@ func resourceComputeDiskDelete(d *schema.ResourceData, meta interface{}) error {
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	readRes, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
@@ -1457,6 +1474,7 @@ func resourceComputeDiskDelete(d *schema.ResourceData, meta interface{}) error {
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Disk")
diff --git a/google-beta/services/compute/resource_compute_disk_resource_policy_attachment.go b/google-beta/services/compute/resource_compute_disk_resource_policy_attachment.go
index 5c031adeba..74332e5692 100644
--- a/google-beta/services/compute/resource_compute_disk_resource_policy_attachment.go
+++ b/google-beta/services/compute/resource_compute_disk_resource_policy_attachment.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -123,6 +124,7 @@ func resourceComputeDiskResourcePolicyAttachmentCreate(d *schema.ResourceData, m
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -131,6 +133,7 @@ func resourceComputeDiskResourcePolicyAttachmentCreate(d *schema.ResourceData, m
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating DiskResourcePolicyAttachment: %s", err)
@@ -183,12 +186,14 @@ func resourceComputeDiskResourcePolicyAttachmentRead(d *schema.ResourceData, met
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeDiskResourcePolicyAttachment %q", d.Id()))
@@ -264,6 +269,7 @@ func resourceComputeDiskResourcePolicyAttachmentDelete(d *schema.ResourceData, m
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	obj = make(map[string]interface{})
 
 	zone, err := tpgresource.GetZone(d, config)
@@ -299,6 +305,7 @@ func resourceComputeDiskResourcePolicyAttachmentDelete(d *schema.ResourceData, m
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "DiskResourcePolicyAttachment")
diff --git a/google-beta/services/compute/resource_compute_external_vpn_gateway.go b/google-beta/services/compute/resource_compute_external_vpn_gateway.go
index a68de6bea7..ebd02c8eaf 100644
--- a/google-beta/services/compute/resource_compute_external_vpn_gateway.go
+++ b/google-beta/services/compute/resource_compute_external_vpn_gateway.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -215,6 +216,7 @@ func resourceComputeExternalVpnGatewayCreate(d *schema.ResourceData, meta interf
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -223,6 +225,7 @@ func resourceComputeExternalVpnGatewayCreate(d *schema.ResourceData, meta interf
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating ExternalVpnGateway: %s", err)
@@ -275,12 +278,14 @@ func resourceComputeExternalVpnGatewayRead(d *schema.ResourceData, meta interfac
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeExternalVpnGateway %q", d.Id()))
@@ -359,6 +364,8 @@ func resourceComputeExternalVpnGatewayUpdate(d *schema.ResourceData, meta interf
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -372,6 +379,7 @@ func resourceComputeExternalVpnGatewayUpdate(d *schema.ResourceData, meta interf
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating ExternalVpnGateway %q: %s", d.Id(), err)
@@ -419,6 +427,8 @@ func resourceComputeExternalVpnGatewayDelete(d *schema.ResourceData, meta interf
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting ExternalVpnGateway %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -428,6 +438,7 @@ func resourceComputeExternalVpnGatewayDelete(d *schema.ResourceData, meta interf
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "ExternalVpnGateway")
diff --git a/google-beta/services/compute/resource_compute_firewall.go b/google-beta/services/compute/resource_compute_firewall.go
index 35cba0018a..3769f21d68 100644
--- a/google-beta/services/compute/resource_compute_firewall.go
+++ b/google-beta/services/compute/resource_compute_firewall.go
@@ -22,6 +22,7 @@ import (
 	"context"
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"sort"
 	"strings"
@@ -560,6 +561,7 @@ func resourceComputeFirewallCreate(d *schema.ResourceData, meta interface{}) err
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -568,6 +570,7 @@ func resourceComputeFirewallCreate(d *schema.ResourceData, meta interface{}) err
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Firewall: %s", err)
@@ -620,12 +623,14 @@ func resourceComputeFirewallRead(d *schema.ResourceData, meta interface{}) error
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeFirewall %q", d.Id()))
@@ -791,6 +796,7 @@ func resourceComputeFirewallUpdate(d *schema.ResourceData, meta interface{}) err
 	}
 	log.Printf("[DEBUG] Updating Firewall %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
@@ -805,6 +811,7 @@ func resourceComputeFirewallUpdate(d *schema.ResourceData, meta interface{}) err
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 
 	if err != nil {
@@ -851,6 +858,8 @@ func resourceComputeFirewallDelete(d *schema.ResourceData, meta interface{}) err
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Firewall %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -860,6 +869,7 @@ func resourceComputeFirewallDelete(d *schema.ResourceData, meta interface{}) err
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Firewall")
diff --git a/google-beta/services/compute/resource_compute_forwarding_rule.go b/google-beta/services/compute/resource_compute_forwarding_rule.go
index 59d74a4aef..0bbead5354 100644
--- a/google-beta/services/compute/resource_compute_forwarding_rule.go
+++ b/google-beta/services/compute/resource_compute_forwarding_rule.go
@@ -21,6 +21,7 @@ import (
 	"context"
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -700,6 +701,7 @@ func resourceComputeForwardingRuleCreate(d *schema.ResourceData, meta interface{
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -708,6 +710,7 @@ func resourceComputeForwardingRuleCreate(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating ForwardingRule: %s", err)
@@ -820,12 +823,14 @@ func resourceComputeForwardingRuleRead(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeForwardingRule %q", d.Id()))
@@ -970,6 +975,8 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -983,6 +990,7 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating ForwardingRule %q: %s", d.Id(), err)
@@ -1012,6 +1020,8 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -1025,6 +1035,7 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating ForwardingRule %q: %s", d.Id(), err)
@@ -1060,6 +1071,8 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -1073,6 +1086,7 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating ForwardingRule %q: %s", d.Id(), err)
@@ -1125,6 +1139,8 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -1138,6 +1154,7 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating ForwardingRule %q: %s", d.Id(), err)
@@ -1185,6 +1202,8 @@ func resourceComputeForwardingRuleDelete(d *schema.ResourceData, meta interface{
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting ForwardingRule %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -1194,6 +1213,7 @@ func resourceComputeForwardingRuleDelete(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "ForwardingRule")
diff --git a/google-beta/services/compute/resource_compute_global_address.go b/google-beta/services/compute/resource_compute_global_address.go
index e9d40e5c7f..c381a85149 100644
--- a/google-beta/services/compute/resource_compute_global_address.go
+++ b/google-beta/services/compute/resource_compute_global_address.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -270,6 +271,7 @@ func resourceComputeGlobalAddressCreate(d *schema.ResourceData, meta interface{}
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	// Note: Global external IP addresses and internal IP addresses are always Premium Tier.
 	// An address with type INTERNAL cannot have a network tier
 	if addressTypeProp != "INTERNAL" {
@@ -283,6 +285,7 @@ func resourceComputeGlobalAddressCreate(d *schema.ResourceData, meta interface{}
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating GlobalAddress: %s", err)
@@ -395,12 +398,14 @@ func resourceComputeGlobalAddressRead(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeGlobalAddress %q", d.Id()))
@@ -494,6 +499,8 @@ func resourceComputeGlobalAddressUpdate(d *schema.ResourceData, meta interface{}
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -507,6 +514,7 @@ func resourceComputeGlobalAddressUpdate(d *schema.ResourceData, meta interface{}
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating GlobalAddress %q: %s", d.Id(), err)
@@ -554,6 +562,8 @@ func resourceComputeGlobalAddressDelete(d *schema.ResourceData, meta interface{}
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting GlobalAddress %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -563,6 +573,7 @@ func resourceComputeGlobalAddressDelete(d *schema.ResourceData, meta interface{}
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "GlobalAddress")
diff --git a/google-beta/services/compute/resource_compute_global_forwarding_rule.go b/google-beta/services/compute/resource_compute_global_forwarding_rule.go
index a232c81ed3..bf6e04d677 100644
--- a/google-beta/services/compute/resource_compute_global_forwarding_rule.go
+++ b/google-beta/services/compute/resource_compute_global_forwarding_rule.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -547,6 +548,7 @@ func resourceComputeGlobalForwardingRuleCreate(d *schema.ResourceData, meta inte
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -555,6 +557,7 @@ func resourceComputeGlobalForwardingRuleCreate(d *schema.ResourceData, meta inte
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating GlobalForwardingRule: %s", err)
@@ -667,12 +670,14 @@ func resourceComputeGlobalForwardingRuleRead(d *schema.ResourceData, meta interf
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeGlobalForwardingRule %q", d.Id()))
@@ -790,6 +795,8 @@ func resourceComputeGlobalForwardingRuleUpdate(d *schema.ResourceData, meta inte
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -803,6 +810,7 @@ func resourceComputeGlobalForwardingRuleUpdate(d *schema.ResourceData, meta inte
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating GlobalForwardingRule %q: %s", d.Id(), err)
@@ -832,6 +840,8 @@ func resourceComputeGlobalForwardingRuleUpdate(d *schema.ResourceData, meta inte
 		return err
 	}
 
+	headers := make(http.Header)
+
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
 		billingProject = bp
@@ -845,6 +855,7 @@ func resourceComputeGlobalForwardingRuleUpdate(d *schema.ResourceData, meta inte
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error updating GlobalForwardingRule %q: %s", d.Id(), err)
@@ -892,6 +903,8 @@ func resourceComputeGlobalForwardingRuleDelete(d *schema.ResourceData, meta inte
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting GlobalForwardingRule %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -901,6 +914,7 @@ func resourceComputeGlobalForwardingRuleDelete(d *schema.ResourceData, meta inte
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "GlobalForwardingRule")
diff --git a/google-beta/services/compute/resource_compute_global_network_endpoint.go b/google-beta/services/compute/resource_compute_global_network_endpoint.go
index c5b5c75923..37c20ca12f 100644
--- a/google-beta/services/compute/resource_compute_global_network_endpoint.go
+++ b/google-beta/services/compute/resource_compute_global_network_endpoint.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -148,6 +149,7 @@ func resourceComputeGlobalNetworkEndpointCreate(d *schema.ResourceData, meta int
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -156,6 +158,7 @@ func resourceComputeGlobalNetworkEndpointCreate(d *schema.ResourceData, meta int
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating GlobalNetworkEndpoint: %s", err)
@@ -208,12 +211,14 @@ func resourceComputeGlobalNetworkEndpointRead(d *schema.ResourceData, meta inter
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeGlobalNetworkEndpoint %q", d.Id()))
@@ -294,6 +299,7 @@ func resourceComputeGlobalNetworkEndpointDelete(d *schema.ResourceData, meta int
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	toDelete := make(map[string]interface{})
 	portProp, err := expandNestedComputeGlobalNetworkEndpointPort(d.Get("port"), d, config)
 	if err != nil {
@@ -332,6 +338,7 @@ func resourceComputeGlobalNetworkEndpointDelete(d *schema.ResourceData, meta int
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "GlobalNetworkEndpoint")
diff --git a/google-beta/services/compute/resource_compute_global_network_endpoint_group.go b/google-beta/services/compute/resource_compute_global_network_endpoint_group.go
index dfc7e6b214..7b96619e49 100644
--- a/google-beta/services/compute/resource_compute_global_network_endpoint_group.go
+++ b/google-beta/services/compute/resource_compute_global_network_endpoint_group.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -152,6 +153,7 @@ func resourceComputeGlobalNetworkEndpointGroupCreate(d *schema.ResourceData, met
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -160,6 +162,7 @@ func resourceComputeGlobalNetworkEndpointGroupCreate(d *schema.ResourceData, met
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating GlobalNetworkEndpointGroup: %s", err)
@@ -212,12 +215,14 @@ func resourceComputeGlobalNetworkEndpointGroupRead(d *schema.ResourceData, meta
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeGlobalNetworkEndpointGroup %q", d.Id()))
@@ -273,6 +278,8 @@ func resourceComputeGlobalNetworkEndpointGroupDelete(d *schema.ResourceData, met
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting GlobalNetworkEndpointGroup %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -282,6 +289,7 @@ func resourceComputeGlobalNetworkEndpointGroupDelete(d *schema.ResourceData, met
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "GlobalNetworkEndpointGroup")
diff --git a/google-beta/services/compute/resource_compute_ha_vpn_gateway.go b/google-beta/services/compute/resource_compute_ha_vpn_gateway.go
index 1e44d5c085..00e34a28f5 100644
--- a/google-beta/services/compute/resource_compute_ha_vpn_gateway.go
+++ b/google-beta/services/compute/resource_compute_ha_vpn_gateway.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -208,6 +209,7 @@ func resourceComputeHaVpnGatewayCreate(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -216,6 +218,7 @@ func resourceComputeHaVpnGatewayCreate(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating HaVpnGateway: %s", err)
@@ -268,12 +271,14 @@ func resourceComputeHaVpnGatewayRead(d *schema.ResourceData, meta interface{}) e
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeHaVpnGateway %q", d.Id()))
@@ -335,6 +340,8 @@ func resourceComputeHaVpnGatewayDelete(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting HaVpnGateway %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -344,6 +351,7 @@ func resourceComputeHaVpnGatewayDelete(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "HaVpnGateway")
diff --git a/google-beta/services/compute/resource_compute_health_check.go b/google-beta/services/compute/resource_compute_health_check.go
index 9fdcb584d7..1687184946 100644
--- a/google-beta/services/compute/resource_compute_health_check.go
+++ b/google-beta/services/compute/resource_compute_health_check.go
@@ -21,6 +21,7 @@ import (
 	"context"
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strconv"
 	"strings"
@@ -779,6 +780,7 @@ func resourceComputeHealthCheckCreate(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -787,6 +789,7 @@ func resourceComputeHealthCheckCreate(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating HealthCheck: %s", err)
@@ -839,12 +842,14 @@ func resourceComputeHealthCheckRead(d *schema.ResourceData, meta interface{}) er
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeHealthCheck %q", d.Id()))
@@ -1012,6 +1017,7 @@ func resourceComputeHealthCheckUpdate(d *schema.ResourceData, meta interface{})
 	}
 	log.Printf("[DEBUG] Updating HealthCheck %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
@@ -1026,6 +1032,7 @@ func resourceComputeHealthCheckUpdate(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 
 	if err != nil {
@@ -1072,6 +1079,8 @@ func resourceComputeHealthCheckDelete(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting HealthCheck %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -1081,6 +1090,7 @@ func resourceComputeHealthCheckDelete(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "HealthCheck")
diff --git a/google-beta/services/compute/resource_compute_http_health_check.go b/google-beta/services/compute/resource_compute_http_health_check.go
index 75856df762..86aa023f11 100644
--- a/google-beta/services/compute/resource_compute_http_health_check.go
+++ b/google-beta/services/compute/resource_compute_http_health_check.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -222,6 +223,7 @@ func resourceComputeHttpHealthCheckCreate(d *schema.ResourceData, meta interface
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -230,6 +232,7 @@ func resourceComputeHttpHealthCheckCreate(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating HttpHealthCheck: %s", err)
@@ -282,12 +285,14 @@ func resourceComputeHttpHealthCheckRead(d *schema.ResourceData, meta interface{}
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeHttpHealthCheck %q", d.Id()))
@@ -411,6 +416,7 @@ func resourceComputeHttpHealthCheckUpdate(d *schema.ResourceData, meta interface
 	}
 	log.Printf("[DEBUG] Updating HttpHealthCheck %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
@@ -425,6 +431,7 @@ func resourceComputeHttpHealthCheckUpdate(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutUpdate),
+		Headers: headers,
 	})
 
 	if err != nil {
@@ -471,6 +478,8 @@ func resourceComputeHttpHealthCheckDelete(d *schema.ResourceData, meta interface
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting HttpHealthCheck %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
@@ -480,6 +489,7 @@ func resourceComputeHttpHealthCheckDelete(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutDelete),
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "HttpHealthCheck")
diff --git a/google-beta/services/compute/resource_compute_https_health_check.go b/google-beta/services/compute/resource_compute_https_health_check.go
index b1aeed6747..a93a3df48a 100644
--- a/google-beta/services/compute/resource_compute_https_health_check.go
+++ b/google-beta/services/compute/resource_compute_https_health_check.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -222,6 +223,7 @@ func resourceComputeHttpsHealthCheckCreate(d *schema.ResourceData, meta interfac
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "POST",
@@ -230,6 +232,7 @@ func resourceComputeHttpsHealthCheckCreate(d *schema.ResourceData, meta interfac
 		UserAgent: userAgent,
 		Body: obj,
 		Timeout: d.Timeout(schema.TimeoutCreate),
+		Headers: headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating HttpsHealthCheck: %s", err)
@@ -282,12 +285,14 @@ func resourceComputeHttpsHealthCheckRead(d *schema.ResourceData, meta interface{
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config: config,
 		Method: "GET",
 		Project: billingProject,
 		RawURL: url,
 		UserAgent: userAgent,
+		Headers: headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeHttpsHealthCheck %q", d.Id()))
@@ -411,6 +416,7 @@ func resourceComputeHttpsHealthCheckUpdate(d *schema.ResourceData, meta interfac
 	}
 	log.Printf("[DEBUG] Updating HttpsHealthCheck %q: %#v", d.Id(), obj)
+	headers :=
make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -425,6 +431,7 @@ func resourceComputeHttpsHealthCheckUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -471,6 +478,8 @@ func resourceComputeHttpsHealthCheckDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting HttpsHealthCheck %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -480,6 +489,7 @@ func resourceComputeHttpsHealthCheckDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "HttpsHealthCheck") diff --git a/google-beta/services/compute/resource_compute_image.go b/google-beta/services/compute/resource_compute_image.go index 41d1140b06..a8930813c1 100644 --- a/google-beta/services/compute/resource_compute_image.go +++ b/google-beta/services/compute/resource_compute_image.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -409,6 +410,7 @@ func resourceComputeImageCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -417,6 +419,7 @@ func resourceComputeImageCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Image: %s", err) @@ -469,12 +472,14 @@ func resourceComputeImageRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := 
make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeImage %q", d.Id())) @@ -580,6 +585,8 @@ func resourceComputeImageUpdate(d *schema.ResourceData, meta interface{}) error return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -593,6 +600,7 @@ func resourceComputeImageUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Image %q: %s", d.Id(), err) @@ -640,6 +648,8 @@ func resourceComputeImageDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Image %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -649,6 +659,7 @@ func resourceComputeImageDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Image") diff --git a/google-beta/services/compute/resource_compute_instance.go b/google-beta/services/compute/resource_compute_instance.go index 6223a6463b..39b6ba85fb 100644 --- a/google-beta/services/compute/resource_compute_instance.go +++ b/google-beta/services/compute/resource_compute_instance.go @@ -979,7 +979,6 @@ be from 0 to 999,999,999 inclusive.`, Description: `The Confidential VM config being used by the instance. 
on_host_maintenance has to be set to TERMINATE or this will fail to create.`,
 				Elem: &schema.Resource{
 					Schema: map[string]*schema.Schema{
-
 						"enable_confidential_compute": {
 							Type:     schema.TypeBool,
 							Optional: true,
diff --git a/google-beta/services/compute/resource_compute_instance_group_manager.go b/google-beta/services/compute/resource_compute_instance_group_manager.go
index eee7296776..afa023a780 100644
--- a/google-beta/services/compute/resource_compute_instance_group_manager.go
+++ b/google-beta/services/compute/resource_compute_instance_group_manager.go
@@ -5,7 +5,6 @@ package compute
 import (
 	"fmt"
 	"log"
-	"sort"
 	"strings"
 	"time"
@@ -342,6 +341,24 @@ func ResourceComputeInstanceGroupManager() *schema.Resource {
 					},
 				},
 			},
+			"params": {
+				Type:        schema.TypeList,
+				MaxItems:    1,
+				Optional:    true,
+				ForceNew:    true,
+				Description: `Input only additional params for instance group manager creation.`,
+				Elem: &schema.Resource{
+					Schema: map[string]*schema.Schema{
+						"resource_manager_tags": {
+							Type:     schema.TypeMap,
+							Optional: true,
+							// This field is intentionally not updatable. The API overrides all existing tags on the field when updated.
+							ForceNew:    true,
+							Description: `Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456.`,
+						},
+					},
+				},
+			},
 			"wait_for_instances": {
 				Type:     schema.TypeBool,
 				Optional: true,
@@ -589,6 +606,7 @@ func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta inte
 		InstanceLifecyclePolicy: expandInstanceLifecyclePolicy(d.Get("instance_lifecycle_policy").([]interface{})),
 		AllInstancesConfig:      expandAllInstancesConfig(nil, d.Get("all_instances_config").([]interface{})),
 		StatefulPolicy:          expandStatefulPolicy(d),
+		Params:                  expandInstanceGroupManagerParams(d),

 		// Force send TargetSize to allow a value of 0.
 		ForceSendFields: []string{"TargetSize"},
@@ -1251,6 +1269,16 @@ func expandUpdatePolicy(configured []interface{}) *compute.InstanceGroupManagerU
 	return updatePolicy
 }

+func expandInstanceGroupManagerParams(d *schema.ResourceData) *compute.InstanceGroupManagerParams {
+	params := &compute.InstanceGroupManagerParams{}
+
+	if _, ok := d.GetOk("params.0.resource_manager_tags"); ok {
+		params.ResourceManagerTags = tpgresource.ExpandStringMap(d, "params.0.resource_manager_tags")
+	}
+
+	return params
+}
+
 func flattenAutoHealingPolicies(autoHealingPolicies []*compute.InstanceGroupManagerAutoHealingPolicy) []map[string]interface{} {
 	autoHealingPoliciesSchema := make([]map[string]interface{}, 0, len(autoHealingPolicies))
 	for _, autoHealingPolicy := range autoHealingPolicies {
@@ -1297,54 +1325,29 @@ func flattenStatefulPolicyStatefulExternalIps(d *schema.ResourceData, statefulPo
 }

 func flattenStatefulPolicyStatefulIps(d *schema.ResourceData, ipfieldName string, ips map[string]compute.StatefulPolicyPreservedStateNetworkIp) []map[string]interface{} {
-
 	// statefulPolicy.PreservedState.ExternalIPs and statefulPolicy.PreservedState.InternalIPs are affected by API-side reordering
 	// of external/internal IPs, where ordering is done by the interface_name value.
-	// Below we intend to reorder the IPs to match the order in the config.
+	// Below we reorder the IPs to match the order in the config.
 	// Also, data is converted from a map (client library's statefulPolicy.PreservedState.ExternalIPs, or .InternalIPs) to a slice (stored in state).
 	// Any IPs found from the API response that aren't in the config are appended to the end of the slice.
-
-	configIpOrder := d.Get(ipfieldName).([]interface{})
-	order := map[string]int{} // record map of interface name to index
-	for i, el := range configIpOrder {
-		ip := el.(map[string]interface{})
-		interfaceName := ip["interface_name"].(string)
-		order[interfaceName] = i
+	configData := []map[string]interface{}{}
+	for _, item := range d.Get(ipfieldName).([]interface{}) {
+		configData = append(configData, item.(map[string]interface{}))
 	}
-
-	orderedResult := make([]map[string]interface{}, len(configIpOrder))
-	unexpectedIps := []map[string]interface{}{}
+	apiData := []map[string]interface{}{}
 	for interfaceName, ip := range ips {
 		data := map[string]interface{}{
 			"interface_name": interfaceName,
 			"delete_rule":    ip.AutoDelete,
 		}
-
-		index, found := order[interfaceName]
-		if !found {
-			unexpectedIps = append(unexpectedIps, data)
-			continue
-		}
-		orderedResult[index] = data // Put elements from API response in order that matches the config
+		apiData = append(apiData, data)
 	}
-	sort.Slice(unexpectedIps, func(i, j int) bool {
-		return unexpectedIps[i]["interface_name"].(string) < unexpectedIps[j]["interface_name"].(string)
-	})
-
-	// Remove any nils from the ordered list. This can occur if the API doesn't include an interface present in the config.
-	finalResult := []map[string]interface{}{}
-	for _, item := range orderedResult {
-		if item != nil {
-			finalResult = append(finalResult, item)
-		}
-	}
-
-	if len(unexpectedIps) > 0 {
-		// Additional IPs returned from API but not in the config are appended to the end of the slice
-		finalResult = append(finalResult, unexpectedIps...)
+	sorted, err := tpgresource.SortMapsByConfigOrder(configData, apiData, "interface_name")
+	if err != nil {
+		log.Printf("[ERROR] Could not sort API response for %s: %s", ipfieldName, err)
+		return apiData
 	}
-
-	return finalResult
+	return sorted
 }

 func flattenUpdatePolicy(updatePolicy *compute.InstanceGroupManagerUpdatePolicy) []map[string]interface{} {
diff --git a/google-beta/services/compute/resource_compute_instance_group_manager_test.go b/google-beta/services/compute/resource_compute_instance_group_manager_test.go
index 892b29360f..629f8621b2 100644
--- a/google-beta/services/compute/resource_compute_instance_group_manager_test.go
+++ b/google-beta/services/compute/resource_compute_instance_group_manager_test.go
@@ -9,6 +9,7 @@ import (
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
 	"github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest"
+	"github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar"
 )

 func TestAccInstanceGroupManager_basic(t *testing.T) {
@@ -425,6 +426,32 @@ func TestAccInstanceGroupManager_waitForStatus(t *testing.T) {
 	})
 }

+func TestAccInstanceGroupManager_resourceManagerTags(t *testing.T) {
+	t.Parallel()
+
+	tag_name := fmt.Sprintf("tf-test-igm-%s", acctest.RandString(t, 10))
+	template_name := fmt.Sprintf("tf-test-igm-%s", acctest.RandString(t, 10))
+	igm_name := fmt.Sprintf("tf-test-igm-%s", acctest.RandString(t, 10))
+	project_id := envvar.GetTestProjectFromEnv()
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckInstanceGroupManagerDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccInstanceGroupManager_resourceManagerTags(template_name, tag_name, igm_name, project_id),
+			},
+			{
+				ResourceName:            "google_compute_instance_group_manager.igm-tags",
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"status", "params"},
+			},
+		},
+	})
+}
+
 func testAccCheckInstanceGroupManagerDestroyProducer(t *testing.T) func(s *terraform.State) error {
 	return func(s *terraform.State) error {
 		config := acctest.GoogleProviderConfig(t)
@@ -1815,3 +1842,57 @@ resource "google_compute_per_instance_config" "per-instance" {
 }
 `, template, target, igm, perInstanceConfig)
 }
+
+func testAccInstanceGroupManager_resourceManagerTags(template_name, tag_name, igm_name, project_id string) string {
+	return fmt.Sprintf(`
+data "google_compute_image" "my_image" {
+  family  = "debian-11"
+  project = "debian-cloud"
+}
+
+resource "google_compute_instance_template" "igm-tags" {
+  name         = "%s"
+  description  = "Terraform test instance template."
+  machine_type = "e2-medium"
+
+  disk {
+    source_image = data.google_compute_image.my_image.self_link
+  }
+
+  network_interface {
+    network = "default"
+  }
+}
+
+resource "google_tags_tag_key" "igm-key" {
+  description = "Terraform test tag key."
+  parent      = "projects/%s"
+  short_name  = "%s"
+}
+
+resource "google_tags_tag_value" "igm-value" {
+  description = "Terraform test tag value."
+  parent      = "tagKeys/${google_tags_tag_key.igm-key.name}"
+  short_name  = "%s"
+}
+
+resource "google_compute_instance_group_manager" "igm-tags" {
+  description        = "Terraform test instance group manager."
+  name               = "%s"
+  base_instance_name = "tf-igm-tags-test"
+  zone               = "us-central1-a"
+  target_size        = 0
+
+  version {
+    name              = "prod"
+    instance_template = google_compute_instance_template.igm-tags.self_link
+  }
+
+  params {
+    resource_manager_tags = {
+      "tagKeys/${google_tags_tag_key.igm-key.name}" = "tagValues/${google_tags_tag_value.igm-value.name}"
+    }
+  }
+}
+`, template_name, project_id, tag_name, tag_name, igm_name)
+}
diff --git a/google-beta/services/compute/resource_compute_instance_group_membership.go b/google-beta/services/compute/resource_compute_instance_group_membership.go
index bebea020bb..d41c7ac412 100644
--- a/google-beta/services/compute/resource_compute_instance_group_membership.go
+++ b/google-beta/services/compute/resource_compute_instance_group_membership.go
@@ -20,6 +20,7 @@ package compute
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
@@ -128,6 +129,7 @@ func resourceComputeInstanceGroupMembershipCreate(d *schema.ResourceData, meta i
 		billingProject = bp
 	}

+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -136,6 +138,7 @@ func resourceComputeInstanceGroupMembershipCreate(d *schema.ResourceData, meta i
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating InstanceGroupMembership: %s", err)
@@ -188,12 +191,14 @@ func resourceComputeInstanceGroupMembershipRead(d *schema.ResourceData, meta int
 		billingProject = bp
 	}

+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeInstanceGroupMembership %q", d.Id()))
@@ -256,6 +261,7 @@ func resourceComputeInstanceGroupMembershipDelete(d *schema.ResourceData, meta i
 		billingProject =
bp } + headers := make(http.Header) toDelete := make(map[string]interface{}) // Instance @@ -278,6 +284,7 @@ func resourceComputeInstanceGroupMembershipDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "InstanceGroupMembership") diff --git a/google-beta/services/compute/resource_compute_instance_group_named_port.go b/google-beta/services/compute/resource_compute_instance_group_named_port.go index 0bbe06e880..13055cde47 100644 --- a/google-beta/services/compute/resource_compute_instance_group_named_port.go +++ b/google-beta/services/compute/resource_compute_instance_group_named_port.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -147,6 +148,7 @@ func resourceComputeInstanceGroupNamedPortCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -155,6 +157,7 @@ func resourceComputeInstanceGroupNamedPortCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating InstanceGroupNamedPort: %s", err) @@ -207,12 +210,14 @@ func resourceComputeInstanceGroupNamedPortRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeInstanceGroupNamedPort %q", d.Id())) @@ -291,6 +296,8 @@ func resourceComputeInstanceGroupNamedPortDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) + 
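The refactor above replaces the hand-rolled interface-name ordering in `flattenStatefulPolicyStatefulIps` with a call to `tpgresource.SortMapsByConfigOrder`. The sketch below is a standalone approximation of what that removed code did (and what the shared helper appears to provide): API entries are reordered to match the config's order, entries the config does not mention are appended at the end sorted by key, and config entries missing from the API response are dropped. The function name and signature here are illustrative, not the provider's actual helper.

```go
package main

import (
	"fmt"
	"sort"
)

// sortMapsByConfigOrder reorders apiData so that entries whose keyName value
// appears in configData come first, in config order; entries the config does
// not mention are appended at the end, sorted by their key value.
func sortMapsByConfigOrder(configData, apiData []map[string]interface{}, keyName string) []map[string]interface{} {
	order := map[string]int{}
	for i, m := range configData {
		order[m[keyName].(string)] = i
	}

	known := make([]map[string]interface{}, len(configData))
	var extra []map[string]interface{}
	for _, m := range apiData {
		if i, ok := order[m[keyName].(string)]; ok {
			known[i] = m // slot into the position the config dictates
		} else {
			extra = append(extra, m)
		}
	}
	sort.Slice(extra, func(i, j int) bool {
		return extra[i][keyName].(string) < extra[j][keyName].(string)
	})

	// Drop config entries the API did not return, then append the extras.
	result := []map[string]interface{}{}
	for _, m := range known {
		if m != nil {
			result = append(result, m)
		}
	}
	return append(result, extra...)
}

func main() {
	config := []map[string]interface{}{{"interface_name": "nic0"}, {"interface_name": "nic1"}}
	api := []map[string]interface{}{
		{"interface_name": "nic1"},
		{"interface_name": "nic9"}, // not in config: goes last
		{"interface_name": "nic0"},
	}
	for _, m := range sortMapsByConfigOrder(config, api, "interface_name") {
		fmt.Println(m["interface_name"])
	}
	// nic0, nic1, nic9
}
```

Centralizing this in one helper keeps the stateful external- and internal-IP flatteners from drifting apart, which is presumably why the PR deletes the local `sort` import.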
log.Printf("[DEBUG] Deleting InstanceGroupNamedPort %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -300,6 +307,7 @@ func resourceComputeInstanceGroupNamedPortDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "InstanceGroupNamedPort") diff --git a/google-beta/services/compute/resource_compute_instance_settings.go b/google-beta/services/compute/resource_compute_instance_settings.go index efd9a5a2f3..e735889243 100644 --- a/google-beta/services/compute/resource_compute_instance_settings.go +++ b/google-beta/services/compute/resource_compute_instance_settings.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -137,6 +138,7 @@ func resourceComputeInstanceSettingsCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -145,6 +147,7 @@ func resourceComputeInstanceSettingsCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating InstanceSettings: %s", err) @@ -197,12 +200,14 @@ func resourceComputeInstanceSettingsRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeInstanceSettings %q", d.Id())) @@ -266,6 +271,7 @@ func resourceComputeInstanceSettingsUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating 
InstanceSettings %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -280,6 +286,7 @@ func resourceComputeInstanceSettingsUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/compute/resource_compute_instance_settings_generated_test.go b/google-beta/services/compute/resource_compute_instance_settings_generated_test.go index 968d7823de..b1d4c3b6c2 100644 --- a/google-beta/services/compute/resource_compute_instance_settings_generated_test.go +++ b/google-beta/services/compute/resource_compute_instance_settings_generated_test.go @@ -37,7 +37,7 @@ func TestAccComputeInstanceSettings_instanceSettingsBasicExample(t *testing.T) { acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), CheckDestroy: testAccCheckComputeInstanceSettingsDestroyProducer(t), Steps: []resource.TestStep{ { @@ -57,7 +57,6 @@ func testAccComputeInstanceSettings_instanceSettingsBasicExample(context map[str return acctest.Nprintf(` resource "google_compute_instance_settings" "gce_instance_settings" { - provider = google-beta zone = "us-east7-b" metadata { items = { diff --git a/google-beta/services/compute/resource_compute_instance_settings_test.go b/google-beta/services/compute/resource_compute_instance_settings_test.go index c93afeb968..3117175b32 100644 --- a/google-beta/services/compute/resource_compute_instance_settings_test.go +++ b/google-beta/services/compute/resource_compute_instance_settings_test.go @@ -19,7 +19,7 @@ func TestAccComputeInstanceSettings_update(t *testing.T) { acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { 
acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), CheckDestroy: testAccCheckComputeInstanceSettingsDestroyProducer(t), Steps: []resource.TestStep{ { @@ -57,7 +57,6 @@ func testAccComputeInstanceSettings_basic(context map[string]interface{}) string return acctest.Nprintf(` resource "google_compute_instance_settings" "gce_instance_settings" { - provider = google-beta zone = "us-east7-b" metadata { items = { @@ -73,7 +72,6 @@ func testAccComputeInstanceSettings_update(context map[string]interface{}) strin return acctest.Nprintf(` resource "google_compute_instance_settings" "gce_instance_settings" { - provider = google-beta zone = "us-east7-b" metadata { items = { @@ -90,7 +88,6 @@ func testAccComputeInstanceSettings_delete(context map[string]interface{}) strin return acctest.Nprintf(` resource "google_compute_instance_settings" "gce_instance_settings" { - provider = google-beta zone = "us-east7-b" metadata { items = { diff --git a/google-beta/services/compute/resource_compute_instance_template.go b/google-beta/services/compute/resource_compute_instance_template.go index 85a871d50b..cab14563a8 100644 --- a/google-beta/services/compute/resource_compute_instance_template.go +++ b/google-beta/services/compute/resource_compute_instance_template.go @@ -858,7 +858,6 @@ be from 0 to 999,999,999 inclusive.`, Description: `The Confidential VM config being used by the instance. 
on_host_maintenance has to be set to TERMINATE or this will fail to create.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "enable_confidential_compute": { Type: schema.TypeBool, Optional: true, diff --git a/google-beta/services/compute/resource_compute_instance_template_test.go b/google-beta/services/compute/resource_compute_instance_template_test.go index 7686010022..0ae42398d0 100644 --- a/google-beta/services/compute/resource_compute_instance_template_test.go +++ b/google-beta/services/compute/resource_compute_instance_template_test.go @@ -766,7 +766,6 @@ func TestAccComputeInstanceTemplate_ConfidentialInstanceConfigMain(t *testing.T) t.Parallel() var instanceTemplate compute.InstanceTemplate - var instanceTemplate2 compute.InstanceTemplate acctest.VcrTest(t, resource.TestCase{ @@ -779,12 +778,10 @@ func TestAccComputeInstanceTemplate_ConfidentialInstanceConfigMain(t *testing.T) Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar", &instanceTemplate), testAccCheckComputeInstanceTemplateHasConfidentialInstanceConfig(&instanceTemplate, true, "SEV"), - testAccCheckComputeInstanceTemplateExists(t, "google_compute_instance_template.foobar2", &instanceTemplate2), testAccCheckComputeInstanceTemplateHasConfidentialInstanceConfig(&instanceTemplate2, true, ""), ), }, - { Config: testAccComputeInstanceTemplateConfidentialInstanceConfigNoEnable(acctest.RandString(t, 10), "AMD Milan", "SEV_SNP"), Check: resource.ComposeTestCheckFunc( @@ -1775,7 +1772,6 @@ func testAccCheckComputeInstanceTemplateHasConfidentialInstanceConfig(instanceTe if instanceTemplate.Properties.ConfidentialInstanceConfig.EnableConfidentialCompute != EnableConfidentialCompute { return fmt.Errorf("Wrong ConfidentialInstanceConfig EnableConfidentialCompute: expected %t, got, %t", EnableConfidentialCompute, instanceTemplate.Properties.ConfidentialInstanceConfig.EnableConfidentialCompute) } - if 
instanceTemplate.Properties.ConfidentialInstanceConfig.ConfidentialInstanceType != ConfidentialInstanceType { return fmt.Errorf("Wrong ConfidentialInstanceConfig ConfidentialInstanceType: expected %s, got, %s", ConfidentialInstanceType, instanceTemplate.Properties.ConfidentialInstanceConfig.ConfidentialInstanceType) } @@ -3097,9 +3093,7 @@ resource "google_compute_instance_template" "foobar" { confidential_instance_config { enable_confidential_compute = true - confidential_instance_type = %q - } scheduling { @@ -3131,10 +3125,7 @@ resource "google_compute_instance_template" "foobar2" { } } - - `, suffix, confidentialInstanceType, suffix) - } func testAccComputeInstanceTemplateConfidentialInstanceConfigNoEnable(suffix string, minCpuPlatform, confidentialInstanceType string) string { diff --git a/google-beta/services/compute/resource_compute_instance_test.go b/google-beta/services/compute/resource_compute_instance_test.go index 7ff4a50517..ec35c47e87 100644 --- a/google-beta/services/compute/resource_compute_instance_test.go +++ b/google-beta/services/compute/resource_compute_instance_test.go @@ -1801,9 +1801,7 @@ func TestAccComputeInstanceConfidentialInstanceConfigMain(t *testing.T) { t.Parallel() var instance compute.Instance - var instance2 compute.Instance - instanceName := fmt.Sprintf("tf-test-%s", acctest.RandString(t, 10)) acctest.VcrTest(t, resource.TestCase{ @@ -1816,12 +1814,10 @@ func TestAccComputeInstanceConfidentialInstanceConfigMain(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar", &instance), testAccCheckComputeInstanceHasConfidentialInstanceConfig(&instance, true, "SEV"), - testAccCheckComputeInstanceExists(t, "google_compute_instance.foobar2", &instance2), testAccCheckComputeInstanceHasConfidentialInstanceConfig(&instance2, true, ""), ), }, - { Config: testAccComputeInstanceConfidentialInstanceConfigNoEnable(instanceName, "AMD Milan", "SEV_SNP"), Check: 
resource.ComposeTestCheckFunc( @@ -1845,6 +1841,7 @@ func TestAccComputeInstance_confidentialHyperDiskBootDisk(t *testing.T) { "key_ring": kms.KeyRing.Name, "key_name": kms.CryptoKey.Name, "zone": "us-central1-a", + "machine_type": "n2-standard-16", } context_2 := map[string]interface{}{ @@ -1853,6 +1850,7 @@ func TestAccComputeInstance_confidentialHyperDiskBootDisk(t *testing.T) { "key_ring": context_1["key_ring"], "key_name": context_1["key_name"], "zone": context_1["zone"], + "machine_type": "c3d-standard-16", } acctest.VcrTest(t, resource.TestCase{ @@ -4003,7 +4001,6 @@ func testAccCheckComputeInstanceHasConfidentialInstanceConfig(instance *compute. if instance.ConfidentialInstanceConfig.EnableConfidentialCompute != EnableConfidentialCompute { return fmt.Errorf("Wrong ConfidentialInstanceConfig EnableConfidentialCompute: expected %t, got, %t", EnableConfidentialCompute, instance.ConfidentialInstanceConfig.EnableConfidentialCompute) } - if instance.ConfidentialInstanceConfig.ConfidentialInstanceType != ConfidentialInstanceType { return fmt.Errorf("Wrong ConfidentialInstanceConfig ConfidentialInstanceType: expected %s, got, %s", ConfidentialInstanceType, instance.ConfidentialInstanceConfig.ConfidentialInstanceType) } @@ -7196,9 +7193,7 @@ resource "google_compute_instance" "foobar" { confidential_instance_config { enable_confidential_compute = true - confidential_instance_type = %q - } scheduling { @@ -7231,10 +7226,7 @@ resource "google_compute_instance" "foobar2" { } } - - `, instance, confidentialInstanceType, instance) - } func testAccComputeInstanceConfidentialInstanceConfigNoEnable(instance string, minCpuPlatform, confidentialInstanceType string) string { @@ -7391,7 +7383,7 @@ resource "google_kms_crypto_key_iam_member" "crypto_key" { resource "google_compute_instance" "foobar" { name = "%{instance_name}" - machine_type = "h3-standard-88" + machine_type = "%{machine_type}" zone = "%{zone}" boot_disk { diff --git 
a/google-beta/services/compute/resource_compute_interconnect_attachment.go b/google-beta/services/compute/resource_compute_interconnect_attachment.go index 47f6d544a1..8da27ebda1 100644 --- a/google-beta/services/compute/resource_compute_interconnect_attachment.go +++ b/google-beta/services/compute/resource_compute_interconnect_attachment.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -446,6 +447,7 @@ func resourceComputeInterconnectAttachmentCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -454,6 +456,7 @@ func resourceComputeInterconnectAttachmentCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating InterconnectAttachment: %s", err) @@ -510,12 +513,14 @@ func resourceComputeInterconnectAttachmentRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeInterconnectAttachment %q", d.Id())) @@ -663,6 +668,7 @@ func resourceComputeInterconnectAttachmentUpdate(d *schema.ResourceData, meta in } log.Printf("[DEBUG] Updating InterconnectAttachment %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -677,6 +683,7 @@ func resourceComputeInterconnectAttachmentUpdate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ 
-723,6 +730,7 @@ func resourceComputeInterconnectAttachmentDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) if err := waitForAttachmentToBeProvisioned(d, config, d.Timeout(schema.TimeoutCreate)); err != nil { return fmt.Errorf("Error waiting for InterconnectAttachment %q to be provisioned: %q", d.Get("name").(string), err) } @@ -736,6 +744,7 @@ func resourceComputeInterconnectAttachmentDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "InterconnectAttachment") diff --git a/google-beta/services/compute/resource_compute_machine_image.go b/google-beta/services/compute/resource_compute_machine_image.go index e64c592b03..be37d2a894 100644 --- a/google-beta/services/compute/resource_compute_machine_image.go +++ b/google-beta/services/compute/resource_compute_machine_image.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -199,6 +200,7 @@ func resourceComputeMachineImageCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -207,6 +209,7 @@ func resourceComputeMachineImageCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating MachineImage: %s", err) @@ -259,12 +262,14 @@ func resourceComputeMachineImageRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("ComputeMachineImage %q", d.Id())) @@ -326,6 +331,8 @@ func resourceComputeMachineImageDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting MachineImage %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -335,6 +342,7 @@ func resourceComputeMachineImageDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "MachineImage") diff --git a/google-beta/services/compute/resource_compute_managed_ssl_certificate.go b/google-beta/services/compute/resource_compute_managed_ssl_certificate.go index 8ea1807034..4bfe4a436b 100644 --- a/google-beta/services/compute/resource_compute_managed_ssl_certificate.go +++ b/google-beta/services/compute/resource_compute_managed_ssl_certificate.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -196,6 +197,7 @@ func resourceComputeManagedSslCertificateCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -204,6 +206,7 @@ func resourceComputeManagedSslCertificateCreate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ManagedSslCertificate: %s", err) @@ -256,12 +259,14 @@ func resourceComputeManagedSslCertificateRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeManagedSslCertificate %q", d.Id())) @@ -329,6 +334,8 @@ func resourceComputeManagedSslCertificateDelete(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ManagedSslCertificate %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -338,6 +345,7 @@ func resourceComputeManagedSslCertificateDelete(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ManagedSslCertificate") diff --git a/google-beta/services/compute/resource_compute_network.go b/google-beta/services/compute/resource_compute_network.go index 4e88d9525a..1cadb0f4aa 100644 --- a/google-beta/services/compute/resource_compute_network.go +++ b/google-beta/services/compute/resource_compute_network.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -249,6 +250,7 @@ func resourceComputeNetworkCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -257,6 +259,7 @@ func resourceComputeNetworkCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Network: %s", err) @@ -341,12 +344,14 @@ func resourceComputeNetworkRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, 
d, fmt.Sprintf("ComputeNetwork %q", d.Id())) @@ -466,6 +471,8 @@ func resourceComputeNetworkUpdate(d *schema.ResourceData, meta interface{}) erro return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -479,6 +486,7 @@ func resourceComputeNetworkUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Network %q: %s", d.Id(), err) @@ -526,6 +534,8 @@ func resourceComputeNetworkDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Network %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -535,6 +545,7 @@ func resourceComputeNetworkDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Network") diff --git a/google-beta/services/compute/resource_compute_network_attachment.go b/google-beta/services/compute/resource_compute_network_attachment.go index e043297075..fd84c27aca 100644 --- a/google-beta/services/compute/resource_compute_network_attachment.go +++ b/google-beta/services/compute/resource_compute_network_attachment.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -264,6 +265,7 @@ func resourceComputeNetworkAttachmentCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -272,6 +274,7 @@ func resourceComputeNetworkAttachmentCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, 
Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NetworkAttachment: %s", err) @@ -324,12 +327,14 @@ func resourceComputeNetworkAttachmentRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeNetworkAttachment %q", d.Id())) @@ -415,6 +420,8 @@ func resourceComputeNetworkAttachmentDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NetworkAttachment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -424,6 +431,7 @@ func resourceComputeNetworkAttachmentDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NetworkAttachment") diff --git a/google-beta/services/compute/resource_compute_network_edge_security_service.go b/google-beta/services/compute/resource_compute_network_edge_security_service.go index 85637881f4..6c597a1935 100644 --- a/google-beta/services/compute/resource_compute_network_edge_security_service.go +++ b/google-beta/services/compute/resource_compute_network_edge_security_service.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -165,6 +166,7 @@ func resourceComputeNetworkEdgeSecurityServiceCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -173,6 +175,7 @@ func 
resourceComputeNetworkEdgeSecurityServiceCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NetworkEdgeSecurityService: %s", err) @@ -225,12 +228,14 @@ func resourceComputeNetworkEdgeSecurityServiceRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeNetworkEdgeSecurityService %q", d.Id())) @@ -309,6 +314,7 @@ func resourceComputeNetworkEdgeSecurityServiceUpdate(d *schema.ResourceData, met } log.Printf("[DEBUG] Updating NetworkEdgeSecurityService %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -344,6 +350,7 @@ func resourceComputeNetworkEdgeSecurityServiceUpdate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -391,6 +398,8 @@ func resourceComputeNetworkEdgeSecurityServiceDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NetworkEdgeSecurityService %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -400,6 +409,7 @@ func resourceComputeNetworkEdgeSecurityServiceDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NetworkEdgeSecurityService") diff --git a/google-beta/services/compute/resource_compute_network_edge_security_service_sweeper.go 
b/google-beta/services/compute/resource_compute_network_edge_security_service_sweeper.go index a587e55c35..2376ee2035 100644 --- a/google-beta/services/compute/resource_compute_network_edge_security_service_sweeper.go +++ b/google-beta/services/compute/resource_compute_network_edge_security_service_sweeper.go @@ -1,20 +1,5 @@ // Copyright (c) HashiCorp, Inc. // SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - package compute import ( @@ -53,86 +38,92 @@ func testSweepComputeNetworkEdgeSecurityService(region string) error { t := &testing.T{} billingId := envvar.GetTestBillingAccountFromEnv(t) - // Setup variables to replace in list template - d := &tpgresource.ResourceDataMock{ - FieldsInSchema: map[string]interface{}{ - "project": config.Project, - "region": region, - "location": region, - "zone": "-", - "billing_account": billingId, - }, - } - - listTemplate := strings.Split("https://compute.googleapis.com/compute/beta/projects/{{project}}/regions/{{region}}/networkEdgeSecurityServices", "?")[0] - listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) - return nil - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: config.Project, - RawURL: listUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, 
err) - return nil - } - - resourceList, ok := res["networkEdgeSecurityServices"] - if !ok { - log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") - return nil - } - - rl := resourceList.([]interface{}) - - log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) - // Keep count of items that aren't sweepable for logging. - nonPrefixCount := 0 - for _, ri := range rl { - obj := ri.(map[string]interface{}) - if obj["name"] == nil { - log.Printf("[INFO][SWEEPER_LOG] %s resource name was nil", resourceName) - return nil - } - - name := tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) - // Skip resources that shouldn't be sweeped - if !sweeper.IsSweepableTestResource(name) { - nonPrefixCount++ - continue + regions := []string{"us-central1", "us-west2", "us-south1", "southamerica-west1", "europe-west1"} + for _, r := range regions { + log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s in %s", resourceName, r) + + // Setup variables to replace in list template + d := &tpgresource.ResourceDataMock{ + FieldsInSchema: map[string]interface{}{ + "project": config.Project, + "region": r, + "location": r, + "zone": "-", + "billing_account": billingId, + }, } - deleteTemplate := "https://compute.googleapis.com/compute/beta/projects/{{project}}/regions/{{region}}/networkEdgeSecurityServices/{{name}}" - deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) + listTemplate := strings.Split("https://compute.googleapis.com/compute/beta/projects/{{project}}/regions/{{region}}/networkEdgeSecurityServices", "?")[0] + listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) + log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) return nil } - deleteUrl = deleteUrl + name - // Don't wait on operations as we may have a lot to delete - _, err = 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, - Method: "DELETE", + Method: "GET", Project: config.Project, - RawURL: deleteUrl, + RawURL: listUrl, UserAgent: config.UserAgent, }) if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) - } else { - log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) + log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) + return nil + } + + resourceList, ok := res["networkEdgeSecurityServices"] + if !ok { + log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") + return nil + } + + rl := resourceList.([]interface{}) + + log.Printf("[INFO][SWEEPER_LOG] Found %d items in region %s in %s list response.", len(rl), r, resourceName) + // Keep count of items that aren't sweepable for logging. + nonPrefixCount := 0 + for _, ri := range rl { + obj := ri.(map[string]interface{}) + if obj["name"] == nil { + log.Printf("[INFO][SWEEPER_LOG] %s resource name was nil", resourceName) + return nil + } + + name := tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) + // Skip resources that shouldn't be swept + if !sweeper.IsSweepableTestResource(name) { + nonPrefixCount++ + continue + } + + deleteTemplate := "https://compute.googleapis.com/compute/beta/projects/{{project}}/regions/{{region}}/networkEdgeSecurityServices/{{name}}" + deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) + return nil + } + deleteUrl = deleteUrl + name + + // Don't wait on operations as we may have a lot to delete + _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "DELETE", + Project: config.Project, + RawURL: deleteUrl, + UserAgent: config.UserAgent, + }) + if err != nil { + 
log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) + } else { + log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) + } + } + + if nonPrefixCount > 0 { + log.Printf("[INFO][SWEEPER_LOG] %d items in %s were non-sweepable and skipped.", nonPrefixCount, r) } - } - if nonPrefixCount > 0 { - log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) } return nil diff --git a/google-beta/services/compute/resource_compute_network_endpoint.go b/google-beta/services/compute/resource_compute_network_endpoint.go index 31aa28b80d..4c5b4639b0 100644 --- a/google-beta/services/compute/resource_compute_network_endpoint.go +++ b/google-beta/services/compute/resource_compute_network_endpoint.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -160,6 +161,7 @@ func resourceComputeNetworkEndpointCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -168,6 +170,7 @@ func resourceComputeNetworkEndpointCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NetworkEndpoint: %s", err) @@ -220,12 +223,14 @@ func resourceComputeNetworkEndpointRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeNetworkEndpoint %q", d.Id())) @@ -314,6 +319,7 @@ func resourceComputeNetworkEndpointDelete(d *schema.ResourceData, meta interface billingProject = bp } + 
headers := make(http.Header) toDelete := make(map[string]interface{}) instanceProp, err := expandNestedComputeNetworkEndpointInstance(d.Get("instance"), d, config) if err != nil { @@ -350,6 +356,7 @@ func resourceComputeNetworkEndpointDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NetworkEndpoint") diff --git a/google-beta/services/compute/resource_compute_network_endpoint_group.go b/google-beta/services/compute/resource_compute_network_endpoint_group.go index be953d2db4..54cc72f94c 100644 --- a/google-beta/services/compute/resource_compute_network_endpoint_group.go +++ b/google-beta/services/compute/resource_compute_network_endpoint_group.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -208,6 +209,7 @@ func resourceComputeNetworkEndpointGroupCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -216,6 +218,7 @@ func resourceComputeNetworkEndpointGroupCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NetworkEndpointGroup: %s", err) @@ -268,12 +271,14 @@ func resourceComputeNetworkEndpointGroupRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeNetworkEndpointGroup %q", d.Id())) @@ -346,6 +351,8 @@ func resourceComputeNetworkEndpointGroupDelete(d *schema.ResourceData, meta 
inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NetworkEndpointGroup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -355,6 +362,7 @@ func resourceComputeNetworkEndpointGroupDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NetworkEndpointGroup") diff --git a/google-beta/services/compute/resource_compute_network_endpoints.go b/google-beta/services/compute/resource_compute_network_endpoints.go index b85dbd738b..792e477648 100644 --- a/google-beta/services/compute/resource_compute_network_endpoints.go +++ b/google-beta/services/compute/resource_compute_network_endpoints.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -261,6 +262,7 @@ func resourceComputeNetworkEndpointsCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) chunkSize := 500 // API only accepts 500 endpoints at a time lastPage, err := networkEndpointsPaginatedMutate(d, obj["networkEndpoints"].([]interface{}), config, userAgent, url, project, billingProject, chunkSize, true) if err != nil { @@ -276,6 +278,7 @@ func resourceComputeNetworkEndpointsCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NetworkEndpoints: %s", err) @@ -328,12 +331,14 @@ func resourceComputeNetworkEndpointsRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("ComputeNetworkEndpoints %q", d.Id())) @@ -411,6 +416,7 @@ func resourceComputeNetworkEndpointsUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating NetworkEndpoints %q: %#v", d.Id(), obj) + headers := make(http.Header) detachUrl, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/zones/{{zone}}/networkEndpointGroups/{{network_endpoint_group}}/detachNetworkEndpoints") o, n := d.GetChange("network_endpoints") @@ -486,6 +492,7 @@ func resourceComputeNetworkEndpointsUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -539,6 +546,7 @@ func resourceComputeNetworkEndpointsDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) var endpointsToDelete []interface{} endpoints := d.Get("network_endpoints").(*schema.Set).List() @@ -590,6 +598,7 @@ func resourceComputeNetworkEndpointsDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NetworkEndpoints") diff --git a/google-beta/services/compute/resource_compute_network_firewall_policy.go b/google-beta/services/compute/resource_compute_network_firewall_policy.go index b5db81ee27..0bc2ff1ae4 100644 --- a/google-beta/services/compute/resource_compute_network_firewall_policy.go +++ b/google-beta/services/compute/resource_compute_network_firewall_policy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -151,6 +152,7 @@ func resourceComputeNetworkFirewallPolicyCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -159,6 +161,7 @@ func 
resourceComputeNetworkFirewallPolicyCreate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NetworkFirewallPolicy: %s", err) @@ -211,12 +214,14 @@ func resourceComputeNetworkFirewallPolicyRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeNetworkFirewallPolicy %q", d.Id())) @@ -289,6 +294,7 @@ func resourceComputeNetworkFirewallPolicyUpdate(d *schema.ResourceData, meta int } log.Printf("[DEBUG] Updating NetworkFirewallPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -303,6 +309,7 @@ func resourceComputeNetworkFirewallPolicyUpdate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -349,6 +356,8 @@ func resourceComputeNetworkFirewallPolicyDelete(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NetworkFirewallPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -358,6 +367,7 @@ func resourceComputeNetworkFirewallPolicyDelete(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NetworkFirewallPolicy") diff --git a/google-beta/services/compute/resource_compute_network_peering_routes_config.go 
b/google-beta/services/compute/resource_compute_network_peering_routes_config.go index a7cb64f023..5587f3d42b 100644 --- a/google-beta/services/compute/resource_compute_network_peering_routes_config.go +++ b/google-beta/services/compute/resource_compute_network_peering_routes_config.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -142,6 +143,7 @@ func resourceComputeNetworkPeeringRoutesConfigCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -150,6 +152,7 @@ func resourceComputeNetworkPeeringRoutesConfigCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NetworkPeeringRoutesConfig: %s", err) @@ -202,12 +205,14 @@ func resourceComputeNetworkPeeringRoutesConfigRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeNetworkPeeringRoutesConfig %q", d.Id())) @@ -295,6 +300,7 @@ func resourceComputeNetworkPeeringRoutesConfigUpdate(d *schema.ResourceData, met } log.Printf("[DEBUG] Updating NetworkPeeringRoutesConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -309,6 +315,7 @@ func resourceComputeNetworkPeeringRoutesConfigUpdate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git 
a/google-beta/services/compute/resource_compute_node_group.go b/google-beta/services/compute/resource_compute_node_group.go index 44ef51c21c..55ca1a033c 100644 --- a/google-beta/services/compute/resource_compute_node_group.go +++ b/google-beta/services/compute/resource_compute_node_group.go @@ -21,6 +21,7 @@ import ( "errors" "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -296,6 +297,7 @@ func resourceComputeNodeGroupCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) var sizeParam string if v, ok := d.GetOkExists("initial_size"); ok { sizeParam = fmt.Sprintf("%v", v) @@ -316,6 +318,7 @@ func resourceComputeNodeGroupCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NodeGroup: %s", err) @@ -368,12 +371,14 @@ func resourceComputeNodeGroupRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeNodeGroup %q", d.Id())) @@ -494,6 +499,7 @@ func resourceComputeNodeGroupUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating NodeGroup %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -549,6 +555,7 @@ func resourceComputeNodeGroupUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -582,6 +589,8 @@ func resourceComputeNodeGroupUpdate(d *schema.ResourceData, meta interface{}) er return err } + headers := make(http.Header) + // err == nil indicates 
that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -595,6 +604,7 @@ func resourceComputeNodeGroupUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating NodeGroup %q: %s", d.Id(), err) @@ -642,6 +652,8 @@ func resourceComputeNodeGroupDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NodeGroup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -651,6 +663,7 @@ func resourceComputeNodeGroupDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NodeGroup") diff --git a/google-beta/services/compute/resource_compute_node_template.go b/google-beta/services/compute/resource_compute_node_template.go index 952a55d831..e6e084cccf 100644 --- a/google-beta/services/compute/resource_compute_node_template.go +++ b/google-beta/services/compute/resource_compute_node_template.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -257,6 +258,7 @@ func resourceComputeNodeTemplateCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -265,6 +267,7 @@ func resourceComputeNodeTemplateCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NodeTemplate: %s", err) @@ -317,12 +320,14 @@ func resourceComputeNodeTemplateRead(d *schema.ResourceData, 
meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeNodeTemplate %q", d.Id())) @@ -393,6 +398,8 @@ func resourceComputeNodeTemplateDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NodeTemplate %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -402,6 +409,7 @@ func resourceComputeNodeTemplateDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NodeTemplate") diff --git a/google-beta/services/compute/resource_compute_organization_security_policy.go b/google-beta/services/compute/resource_compute_organization_security_policy.go index a55ea7b5d4..9517966494 100644 --- a/google-beta/services/compute/resource_compute_organization_security_policy.go +++ b/google-beta/services/compute/resource_compute_organization_security_policy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -144,6 +145,7 @@ func resourceComputeOrganizationSecurityPolicyCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -152,6 +154,7 @@ func resourceComputeOrganizationSecurityPolicyCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating OrganizationSecurityPolicy: %s", err) @@ -215,12 +218,14 @@ func 
resourceComputeOrganizationSecurityPolicyRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeOrganizationSecurityPolicy %q", d.Id())) @@ -277,6 +282,7 @@ func resourceComputeOrganizationSecurityPolicyUpdate(d *schema.ResourceData, met } log.Printf("[DEBUG] Updating OrganizationSecurityPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -291,6 +297,7 @@ func resourceComputeOrganizationSecurityPolicyUpdate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -333,6 +340,8 @@ func resourceComputeOrganizationSecurityPolicyDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting OrganizationSecurityPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -342,6 +351,7 @@ func resourceComputeOrganizationSecurityPolicyDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "OrganizationSecurityPolicy") diff --git a/google-beta/services/compute/resource_compute_organization_security_policy_association.go b/google-beta/services/compute/resource_compute_organization_security_policy_association.go index 10973c0672..c53005fda3 100644 --- a/google-beta/services/compute/resource_compute_organization_security_policy_association.go +++ 
b/google-beta/services/compute/resource_compute_organization_security_policy_association.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -107,6 +108,7 @@ func resourceComputeOrganizationSecurityPolicyAssociationCreate(d *schema.Resour billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -115,6 +117,7 @@ func resourceComputeOrganizationSecurityPolicyAssociationCreate(d *schema.Resour UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating OrganizationSecurityPolicyAssociation: %s", err) @@ -181,12 +184,14 @@ func resourceComputeOrganizationSecurityPolicyAssociationRead(d *schema.Resource billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(transformSecurityPolicyAssociationReadError(err), d, fmt.Sprintf("ComputeOrganizationSecurityPolicyAssociation %q", d.Id())) @@ -226,6 +231,8 @@ func resourceComputeOrganizationSecurityPolicyAssociationDelete(d *schema.Resour billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting OrganizationSecurityPolicyAssociation %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -235,6 +242,7 @@ func resourceComputeOrganizationSecurityPolicyAssociationDelete(d *schema.Resour UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "OrganizationSecurityPolicyAssociation") diff --git a/google-beta/services/compute/resource_compute_organization_security_policy_rule.go 
b/google-beta/services/compute/resource_compute_organization_security_policy_rule.go index d15b04e40f..2d4457aedf 100644 --- a/google-beta/services/compute/resource_compute_organization_security_policy_rule.go +++ b/google-beta/services/compute/resource_compute_organization_security_policy_rule.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -277,6 +278,7 @@ func resourceComputeOrganizationSecurityPolicyRuleCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -285,6 +287,7 @@ func resourceComputeOrganizationSecurityPolicyRuleCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating OrganizationSecurityPolicyRule: %s", err) @@ -351,12 +354,14 @@ func resourceComputeOrganizationSecurityPolicyRuleRead(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeOrganizationSecurityPolicyRule %q", d.Id())) @@ -464,6 +469,7 @@ func resourceComputeOrganizationSecurityPolicyRuleUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating OrganizationSecurityPolicyRule %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -478,6 +484,7 @@ func resourceComputeOrganizationSecurityPolicyRuleUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -539,6 +546,8 @@ func 
resourceComputeOrganizationSecurityPolicyRuleDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting OrganizationSecurityPolicyRule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -548,6 +557,7 @@ func resourceComputeOrganizationSecurityPolicyRuleDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "OrganizationSecurityPolicyRule") diff --git a/google-beta/services/compute/resource_compute_packet_mirroring.go b/google-beta/services/compute/resource_compute_packet_mirroring.go index 96f3e3f7b7..a4b9371b5d 100644 --- a/google-beta/services/compute/resource_compute_packet_mirroring.go +++ b/google-beta/services/compute/resource_compute_packet_mirroring.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -291,6 +292,7 @@ func resourceComputePacketMirroringCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -299,6 +301,7 @@ func resourceComputePacketMirroringCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PacketMirroring: %s", err) @@ -351,12 +354,14 @@ func resourceComputePacketMirroringRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputePacketMirroring %q", d.Id())) @@ 
-453,6 +458,7 @@ func resourceComputePacketMirroringUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating PacketMirroring %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -467,6 +473,7 @@ func resourceComputePacketMirroringUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -513,6 +520,8 @@ func resourceComputePacketMirroringDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PacketMirroring %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -522,6 +531,7 @@ func resourceComputePacketMirroringDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "PacketMirroring") diff --git a/google-beta/services/compute/resource_compute_per_instance_config.go b/google-beta/services/compute/resource_compute_per_instance_config.go index 3ad6bd5419..6bee3eb9fd 100644 --- a/google-beta/services/compute/resource_compute_per_instance_config.go +++ b/google-beta/services/compute/resource_compute_per_instance_config.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -306,6 +307,7 @@ func resourceComputePerInstanceConfigCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -314,6 +316,7 @@ func resourceComputePerInstanceConfigCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + 
Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PerInstanceConfig: %s", err) @@ -366,12 +369,14 @@ func resourceComputePerInstanceConfigRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputePerInstanceConfig %q", d.Id())) @@ -479,6 +484,7 @@ func resourceComputePerInstanceConfigUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating PerInstanceConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -493,6 +499,7 @@ func resourceComputePerInstanceConfigUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/compute/resource_compute_public_advertised_prefix.go b/google-beta/services/compute/resource_compute_public_advertised_prefix.go index a347b3d33c..4484a46224 100644 --- a/google-beta/services/compute/resource_compute_public_advertised_prefix.go +++ b/google-beta/services/compute/resource_compute_public_advertised_prefix.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -146,6 +147,7 @@ func resourceComputePublicAdvertisedPrefixCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -154,6 +156,7 @@ func resourceComputePublicAdvertisedPrefixCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + 
Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PublicAdvertisedPrefix: %s", err) @@ -206,12 +209,14 @@ func resourceComputePublicAdvertisedPrefixRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputePublicAdvertisedPrefix %q", d.Id())) @@ -267,6 +272,8 @@ func resourceComputePublicAdvertisedPrefixDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PublicAdvertisedPrefix %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -276,6 +283,7 @@ func resourceComputePublicAdvertisedPrefixDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "PublicAdvertisedPrefix") diff --git a/google-beta/services/compute/resource_compute_public_delegated_prefix.go b/google-beta/services/compute/resource_compute_public_delegated_prefix.go index ae0c762ba5..ce29948cf3 100644 --- a/google-beta/services/compute/resource_compute_public_delegated_prefix.go +++ b/google-beta/services/compute/resource_compute_public_delegated_prefix.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -165,6 +166,7 @@ func resourceComputePublicDelegatedPrefixCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -173,6 +175,7 @@ func resourceComputePublicDelegatedPrefixCreate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, 
Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PublicDelegatedPrefix: %s", err) @@ -225,12 +228,14 @@ func resourceComputePublicDelegatedPrefixRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputePublicDelegatedPrefix %q", d.Id())) @@ -289,6 +294,8 @@ func resourceComputePublicDelegatedPrefixDelete(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PublicDelegatedPrefix %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -298,6 +305,7 @@ func resourceComputePublicDelegatedPrefixDelete(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "PublicDelegatedPrefix") diff --git a/google-beta/services/compute/resource_compute_region_autoscaler.go b/google-beta/services/compute/resource_compute_region_autoscaler.go index 60838bb253..33a80f5602 100644 --- a/google-beta/services/compute/resource_compute_region_autoscaler.go +++ b/google-beta/services/compute/resource_compute_region_autoscaler.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -495,6 +496,7 @@ func resourceComputeRegionAutoscalerCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -503,6 +505,7 @@ func resourceComputeRegionAutoscalerCreate(d *schema.ResourceData, meta interfac UserAgent: 
userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionAutoscaler: %s", err) @@ -555,12 +558,14 @@ func resourceComputeRegionAutoscalerRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionAutoscaler %q", d.Id())) @@ -653,6 +658,7 @@ func resourceComputeRegionAutoscalerUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating RegionAutoscaler %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -667,6 +673,7 @@ func resourceComputeRegionAutoscalerUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -713,6 +720,8 @@ func resourceComputeRegionAutoscalerDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionAutoscaler %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -722,6 +731,7 @@ func resourceComputeRegionAutoscalerDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionAutoscaler") diff --git a/google-beta/services/compute/resource_compute_region_backend_service.go b/google-beta/services/compute/resource_compute_region_backend_service.go index a14afc57e7..afa71ed122 100644 --- 
a/google-beta/services/compute/resource_compute_region_backend_service.go +++ b/google-beta/services/compute/resource_compute_region_backend_service.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "time" @@ -1360,6 +1361,7 @@ func resourceComputeRegionBackendServiceCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1368,6 +1370,7 @@ func resourceComputeRegionBackendServiceCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionBackendService: %s", err) @@ -1458,12 +1461,14 @@ func resourceComputeRegionBackendServiceRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionBackendService %q", d.Id())) @@ -1766,6 +1771,7 @@ func resourceComputeRegionBackendServiceUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating RegionBackendService %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -1780,6 +1786,7 @@ func resourceComputeRegionBackendServiceUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1812,6 +1819,8 @@ func resourceComputeRegionBackendServiceUpdate(d *schema.ResourceData, meta inte return err } + headers := make(http.Header) + // err == nil indicates that the billing_project 
value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -1825,6 +1834,7 @@ func resourceComputeRegionBackendServiceUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating RegionBackendService %q: %s", d.Id(), err) @@ -1872,6 +1882,8 @@ func resourceComputeRegionBackendServiceDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionBackendService %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1881,6 +1893,7 @@ func resourceComputeRegionBackendServiceDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionBackendService") diff --git a/google-beta/services/compute/resource_compute_region_commitment.go b/google-beta/services/compute/resource_compute_region_commitment.go index 885e826356..57a6539193 100644 --- a/google-beta/services/compute/resource_compute_region_commitment.go +++ b/google-beta/services/compute/resource_compute_region_commitment.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -306,6 +307,7 @@ func resourceComputeRegionCommitmentCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -314,6 +316,7 @@ func resourceComputeRegionCommitmentCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionCommitment: %s", err) @@ -366,12 +369,14 @@ func 
resourceComputeRegionCommitmentRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionCommitment %q", d.Id())) diff --git a/google-beta/services/compute/resource_compute_region_disk.go b/google-beta/services/compute/resource_compute_region_disk.go index 838baf8ad8..f82513bd37 100644 --- a/google-beta/services/compute/resource_compute_region_disk.go +++ b/google-beta/services/compute/resource_compute_region_disk.go @@ -21,6 +21,7 @@ import ( "errors" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -349,8 +350,7 @@ used.`, Description: `Links to the users of the disk (attached instances) in form: project/zones/zone/instances/instance`, Elem: &schema.Schema{ - Type: schema.TypeString, - DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Type: schema.TypeString, }, }, "project": { @@ -511,6 +511,7 @@ func resourceComputeRegionDiskCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -519,6 +520,7 @@ func resourceComputeRegionDiskCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionDisk: %s", err) @@ -571,12 +573,14 @@ func resourceComputeRegionDiskRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) 
if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionDisk %q", d.Id())) @@ -715,6 +719,8 @@ func resourceComputeRegionDiskUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -728,6 +734,7 @@ func resourceComputeRegionDiskUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating RegionDisk %q: %s", d.Id(), err) @@ -757,6 +764,8 @@ func resourceComputeRegionDiskUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -770,6 +779,7 @@ func resourceComputeRegionDiskUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating RegionDisk %q: %s", d.Id(), err) @@ -817,6 +827,7 @@ func resourceComputeRegionDiskDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) readRes, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", @@ -888,6 +899,7 @@ func resourceComputeRegionDiskDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionDisk") diff --git a/google-beta/services/compute/resource_compute_region_disk_resource_policy_attachment.go 
b/google-beta/services/compute/resource_compute_region_disk_resource_policy_attachment.go index c5ee424730..d6b8471597 100644 --- a/google-beta/services/compute/resource_compute_region_disk_resource_policy_attachment.go +++ b/google-beta/services/compute/resource_compute_region_disk_resource_policy_attachment.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -122,6 +123,7 @@ func resourceComputeRegionDiskResourcePolicyAttachmentCreate(d *schema.ResourceD billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -130,6 +132,7 @@ func resourceComputeRegionDiskResourcePolicyAttachmentCreate(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionDiskResourcePolicyAttachment: %s", err) @@ -182,12 +185,14 @@ func resourceComputeRegionDiskResourcePolicyAttachmentRead(d *schema.ResourceDat billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionDiskResourcePolicyAttachment %q", d.Id())) @@ -255,6 +260,7 @@ func resourceComputeRegionDiskResourcePolicyAttachmentDelete(d *schema.ResourceD billingProject = bp } + headers := make(http.Header) obj = make(map[string]interface{}) region, err := tpgresource.GetRegion(d, config) @@ -281,6 +287,7 @@ func resourceComputeRegionDiskResourcePolicyAttachmentDelete(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionDiskResourcePolicyAttachment") diff --git 
a/google-beta/services/compute/resource_compute_region_health_check.go b/google-beta/services/compute/resource_compute_region_health_check.go index 912ca424e5..eb16825882 100644 --- a/google-beta/services/compute/resource_compute_region_health_check.go +++ b/google-beta/services/compute/resource_compute_region_health_check.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -710,6 +711,7 @@ func resourceComputeRegionHealthCheckCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -718,6 +720,7 @@ func resourceComputeRegionHealthCheckCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionHealthCheck: %s", err) @@ -770,12 +773,14 @@ func resourceComputeRegionHealthCheckRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionHealthCheck %q", d.Id())) @@ -952,6 +957,7 @@ func resourceComputeRegionHealthCheckUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating RegionHealthCheck %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -966,6 +972,7 @@ func resourceComputeRegionHealthCheckUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1012,6 +1019,8 @@ func 
resourceComputeRegionHealthCheckDelete(d *schema.ResourceData, meta interfa
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting RegionHealthCheck %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -1021,6 +1030,7 @@ func resourceComputeRegionHealthCheckDelete(d *schema.ResourceData, meta interfa
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "RegionHealthCheck")
diff --git a/google-beta/services/compute/resource_compute_region_instance_group_manager.go b/google-beta/services/compute/resource_compute_region_instance_group_manager.go
index 410206b709..49ef72ca46 100644
--- a/google-beta/services/compute/resource_compute_region_instance_group_manager.go
+++ b/google-beta/services/compute/resource_compute_region_instance_group_manager.go
@@ -526,6 +526,24 @@ func ResourceComputeRegionInstanceGroupManager() *schema.Resource {
 					},
 				},
 			},
+			"params": {
+				Type:        schema.TypeList,
+				MaxItems:    1,
+				Optional:    true,
+				ForceNew:    true,
+				Description: `Input only additional params for instance group manager creation.`,
+				Elem: &schema.Resource{
+					Schema: map[string]*schema.Schema{
+						"resource_manager_tags": {
+							Type:     schema.TypeMap,
+							Optional: true,
+							// This field is intentionally not updatable. The API overrides all existing tags on the field when updated.
+							ForceNew: true,
+							Description: `Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456.`,
+						},
+					},
+				},
+			},
 		},
 		UseJSONNumber: true,
 	}
@@ -563,6 +581,7 @@ func resourceComputeRegionInstanceGroupManagerCreate(d *schema.ResourceData, met
 		AllInstancesConfig: expandAllInstancesConfig(nil, d.Get("all_instances_config").([]interface{})),
 		DistributionPolicy: expandDistributionPolicy(d),
 		StatefulPolicy:     expandStatefulPolicy(d),
+		Params:             expandInstanceGroupManagerParams(d),
 		// Force send TargetSize to allow size of 0.
 		ForceSendFields: []string{"TargetSize"},
 	}
diff --git a/google-beta/services/compute/resource_compute_region_instance_group_manager_test.go b/google-beta/services/compute/resource_compute_region_instance_group_manager_test.go
index ea4d5ecc67..d1f53d768e 100644
--- a/google-beta/services/compute/resource_compute_region_instance_group_manager_test.go
+++ b/google-beta/services/compute/resource_compute_region_instance_group_manager_test.go
@@ -10,6 +10,7 @@ import (
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
 	"github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest"
+	"github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar"
 )
 
 func TestAccRegionInstanceGroupManager_basic(t *testing.T) {
@@ -394,6 +395,32 @@ func TestAccRegionInstanceGroupManager_APISideListRecordering(t *testing.T) {
 	})
 }
 
+func TestAccRegionInstanceGroupManager_resourceManagerTags(t *testing.T) {
+	t.Parallel()
+
+	tag_name := fmt.Sprintf("tf-test-igm-%s", acctest.RandString(t, 10))
+	template_name := fmt.Sprintf("tf-test-igm-%s", acctest.RandString(t, 10))
+	igm_name := fmt.Sprintf("tf-test-igm-%s", acctest.RandString(t, 10))
+	project_id := envvar.GetTestProjectFromEnv()
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckInstanceGroupManagerDestroyProducer(t),
+
Steps: []resource.TestStep{ + { + Config: testAccRegionInstanceGroupManager_resourceManagerTags(template_name, tag_name, igm_name, project_id), + }, + { + ResourceName: "google_compute_region_instance_group_manager.rigm-tags", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"status", "params"}, + }, + }, + }) +} + func testAccCheckRegionInstanceGroupManagerDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s *terraform.State) error { config := acctest.GoogleProviderConfig(t) @@ -1682,3 +1709,57 @@ resource "google_compute_region_instance_group_manager" "igm-basic" { } `, context) } + +func testAccRegionInstanceGroupManager_resourceManagerTags(template_name, tag_name, igm_name, project_id string) string { + return fmt.Sprintf(` +data "google_compute_image" "my_image" { + family = "debian-11" + project = "debian-cloud" +} + +resource "google_compute_instance_template" "rigm-tags" { + name = "%s" + description = "Terraform test instance template." + machine_type = "e2-medium" + + disk { + source_image = data.google_compute_image.my_image.self_link + } + + network_interface { + network = "default" + } +} + +resource "google_tags_tag_key" "rigm-key" { + description = "Terraform test tag key." + parent = "projects/%s" + short_name = "%s" +} + +resource "google_tags_tag_value" "rigm-value" { + description = "Terraform test tag value." + parent = "tagKeys/${google_tags_tag_key.rigm-key.name}" + short_name = "%s" +} + +resource "google_compute_region_instance_group_manager" "rigm-tags" { + description = "Terraform test instance group manager." 
+ name = "%s" + base_instance_name = "tf-rigm-tags-test" + region = "us-central1" + target_size = 0 + + version { + name = "prod" + instance_template = google_compute_instance_template.rigm-tags.self_link + } + + params { + resource_manager_tags = { + "tagKeys/${google_tags_tag_key.rigm-key.name}" = "tagValues/${google_tags_tag_value.rigm-value.name}" + } + } +} +`, template_name, project_id, tag_name, tag_name, igm_name) +} diff --git a/google-beta/services/compute/resource_compute_region_instance_template.go b/google-beta/services/compute/resource_compute_region_instance_template.go index c20f8c3fd2..1c61d40b58 100644 --- a/google-beta/services/compute/resource_compute_region_instance_template.go +++ b/google-beta/services/compute/resource_compute_region_instance_template.go @@ -815,7 +815,6 @@ be from 0 to 999,999,999 inclusive.`, Description: `The Confidential VM config being used by the instance. on_host_maintenance has to be set to TERMINATE or this will fail to create.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "enable_confidential_compute": { Type: schema.TypeBool, Optional: true, diff --git a/google-beta/services/compute/resource_compute_region_instance_template_test.go b/google-beta/services/compute/resource_compute_region_instance_template_test.go index 53132fb1ae..deed117f26 100644 --- a/google-beta/services/compute/resource_compute_region_instance_template_test.go +++ b/google-beta/services/compute/resource_compute_region_instance_template_test.go @@ -682,7 +682,6 @@ func TestAccComputeRegionInstanceTemplate_ConfidentialInstanceConfigMain(t *test t.Parallel() var instanceTemplate compute.InstanceTemplate - var instanceTemplate2 compute.InstanceTemplate acctest.VcrTest(t, resource.TestCase{ @@ -695,12 +694,10 @@ func TestAccComputeRegionInstanceTemplate_ConfidentialInstanceConfigMain(t *test Check: resource.ComposeTestCheckFunc( testAccCheckComputeRegionInstanceTemplateExists(t, "google_compute_region_instance_template.foobar", 
&instanceTemplate), testAccCheckComputeRegionInstanceTemplateHasConfidentialInstanceConfig(&instanceTemplate, true, "SEV"), - testAccCheckComputeRegionInstanceTemplateExists(t, "google_compute_region_instance_template.foobar2", &instanceTemplate2), testAccCheckComputeRegionInstanceTemplateHasConfidentialInstanceConfig(&instanceTemplate2, true, ""), ), }, - { Config: testAccComputeRegionInstanceTemplateConfidentialInstanceConfigNoEnable(acctest.RandString(t, 10), "AMD Milan", "SEV_SNP"), Check: resource.ComposeTestCheckFunc( @@ -1606,7 +1603,6 @@ func testAccCheckComputeRegionInstanceTemplateHasConfidentialInstanceConfig(inst if instanceTemplate.Properties.ConfidentialInstanceConfig.EnableConfidentialCompute != EnableConfidentialCompute { return fmt.Errorf("Wrong ConfidentialInstanceConfig EnableConfidentialCompute: expected %t, got, %t", EnableConfidentialCompute, instanceTemplate.Properties.ConfidentialInstanceConfig.EnableConfidentialCompute) } - if instanceTemplate.Properties.ConfidentialInstanceConfig.ConfidentialInstanceType != ConfidentialInstanceType { return fmt.Errorf("Wrong ConfidentialInstanceConfig ConfidentialInstanceType: expected %s, got, %s", ConfidentialInstanceType, instanceTemplate.Properties.ConfidentialInstanceConfig.ConfidentialInstanceType) } @@ -2753,9 +2749,7 @@ resource "google_compute_region_instance_template" "foobar" { confidential_instance_config { enable_confidential_compute = true - confidential_instance_type = %q - } scheduling { @@ -2788,10 +2782,7 @@ resource "google_compute_region_instance_template" "foobar2" { } } - - `, suffix, confidentialInstanceType, suffix) - } func testAccComputeRegionInstanceTemplateConfidentialInstanceConfigNoEnable(suffix string, minCpuPlatform, confidentialInstanceType string) string { diff --git a/google-beta/services/compute/resource_compute_region_network_endpoint.go b/google-beta/services/compute/resource_compute_region_network_endpoint.go index dc1da414ac..a13cdd1f35 100644 --- 
a/google-beta/services/compute/resource_compute_region_network_endpoint.go +++ b/google-beta/services/compute/resource_compute_region_network_endpoint.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -160,6 +161,7 @@ func resourceComputeRegionNetworkEndpointCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -168,6 +170,7 @@ func resourceComputeRegionNetworkEndpointCreate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionNetworkEndpoint: %s", err) @@ -220,12 +223,14 @@ func resourceComputeRegionNetworkEndpointRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionNetworkEndpoint %q", d.Id())) @@ -314,6 +319,7 @@ func resourceComputeRegionNetworkEndpointDelete(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) toDelete := make(map[string]interface{}) // Port @@ -356,6 +362,7 @@ func resourceComputeRegionNetworkEndpointDelete(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionNetworkEndpoint") diff --git a/google-beta/services/compute/resource_compute_region_network_endpoint_group.go b/google-beta/services/compute/resource_compute_region_network_endpoint_group.go index e4ac839d04..11b3b2bb8f 100644 --- 
a/google-beta/services/compute/resource_compute_region_network_endpoint_group.go +++ b/google-beta/services/compute/resource_compute_region_network_endpoint_group.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -393,6 +394,7 @@ func resourceComputeRegionNetworkEndpointGroupCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -401,6 +403,7 @@ func resourceComputeRegionNetworkEndpointGroupCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionNetworkEndpointGroup: %s", err) @@ -453,12 +456,14 @@ func resourceComputeRegionNetworkEndpointGroupRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionNetworkEndpointGroup %q", d.Id())) @@ -535,6 +540,8 @@ func resourceComputeRegionNetworkEndpointGroupDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionNetworkEndpointGroup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -544,6 +551,7 @@ func resourceComputeRegionNetworkEndpointGroupDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionNetworkEndpointGroup") diff --git a/google-beta/services/compute/resource_compute_region_network_firewall_policy.go 
b/google-beta/services/compute/resource_compute_region_network_firewall_policy.go index 5318ceeeb7..c1a7f33d03 100644 --- a/google-beta/services/compute/resource_compute_region_network_firewall_policy.go +++ b/google-beta/services/compute/resource_compute_region_network_firewall_policy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -158,6 +159,7 @@ func resourceComputeRegionNetworkFirewallPolicyCreate(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -166,6 +168,7 @@ func resourceComputeRegionNetworkFirewallPolicyCreate(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionNetworkFirewallPolicy: %s", err) @@ -218,12 +221,14 @@ func resourceComputeRegionNetworkFirewallPolicyRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionNetworkFirewallPolicy %q", d.Id())) @@ -296,6 +301,7 @@ func resourceComputeRegionNetworkFirewallPolicyUpdate(d *schema.ResourceData, me } log.Printf("[DEBUG] Updating RegionNetworkFirewallPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -310,6 +316,7 @@ func resourceComputeRegionNetworkFirewallPolicyUpdate(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -356,6 +363,8 @@ func 
resourceComputeRegionNetworkFirewallPolicyDelete(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionNetworkFirewallPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -365,6 +374,7 @@ func resourceComputeRegionNetworkFirewallPolicyDelete(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionNetworkFirewallPolicy") diff --git a/google-beta/services/compute/resource_compute_region_per_instance_config.go b/google-beta/services/compute/resource_compute_region_per_instance_config.go index 102c11f479..61ec5a7ffd 100644 --- a/google-beta/services/compute/resource_compute_region_per_instance_config.go +++ b/google-beta/services/compute/resource_compute_region_per_instance_config.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -306,6 +307,7 @@ func resourceComputeRegionPerInstanceConfigCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -314,6 +316,7 @@ func resourceComputeRegionPerInstanceConfigCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionPerInstanceConfig: %s", err) @@ -366,12 +369,14 @@ func resourceComputeRegionPerInstanceConfigRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("ComputeRegionPerInstanceConfig %q", d.Id())) @@ -479,6 +484,7 @@ func resourceComputeRegionPerInstanceConfigUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating RegionPerInstanceConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -493,6 +499,7 @@ func resourceComputeRegionPerInstanceConfigUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/compute/resource_compute_region_security_policy.go b/google-beta/services/compute/resource_compute_region_security_policy.go index cf72665897..efb39bab78 100644 --- a/google-beta/services/compute/resource_compute_region_security_policy.go +++ b/google-beta/services/compute/resource_compute_region_security_policy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -250,6 +251,7 @@ func resourceComputeRegionSecurityPolicyCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -258,6 +260,7 @@ func resourceComputeRegionSecurityPolicyCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionSecurityPolicy: %s", err) @@ -310,12 +313,14 @@ func resourceComputeRegionSecurityPolicyRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionSecurityPolicy %q", d.Id())) @@ -406,6 +411,7 @@ func resourceComputeRegionSecurityPolicyUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating RegionSecurityPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -445,6 +451,7 @@ func resourceComputeRegionSecurityPolicyUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -492,6 +499,8 @@ func resourceComputeRegionSecurityPolicyDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionSecurityPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -501,6 +510,7 @@ func resourceComputeRegionSecurityPolicyDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionSecurityPolicy") diff --git a/google-beta/services/compute/resource_compute_region_security_policy_rule.go b/google-beta/services/compute/resource_compute_region_security_policy_rule.go index 83409b8d43..67bd0290b4 100644 --- a/google-beta/services/compute/resource_compute_region_security_policy_rule.go +++ b/google-beta/services/compute/resource_compute_region_security_policy_rule.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -306,6 +307,7 @@ func resourceComputeRegionSecurityPolicyRuleCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -314,6 +316,7 @@ func resourceComputeRegionSecurityPolicyRuleCreate(d *schema.ResourceData, meta 
UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionSecurityPolicyRule: %s", err) @@ -366,12 +369,14 @@ func resourceComputeRegionSecurityPolicyRuleRead(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionSecurityPolicyRule %q", d.Id())) @@ -462,6 +467,7 @@ func resourceComputeRegionSecurityPolicyRuleUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating RegionSecurityPolicyRule %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -516,6 +522,7 @@ func resourceComputeRegionSecurityPolicyRuleUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -563,6 +570,8 @@ func resourceComputeRegionSecurityPolicyRuleDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionSecurityPolicyRule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -572,6 +581,7 @@ func resourceComputeRegionSecurityPolicyRuleDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionSecurityPolicyRule") diff --git a/google-beta/services/compute/resource_compute_region_ssl_certificate.go b/google-beta/services/compute/resource_compute_region_ssl_certificate.go index bcc6d17170..0d58f57653 100644 --- 
a/google-beta/services/compute/resource_compute_region_ssl_certificate.go +++ b/google-beta/services/compute/resource_compute_region_ssl_certificate.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -207,6 +208,7 @@ func resourceComputeRegionSslCertificateCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -215,6 +217,7 @@ func resourceComputeRegionSslCertificateCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionSslCertificate: %s", err) @@ -267,12 +270,14 @@ func resourceComputeRegionSslCertificateRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionSslCertificate %q", d.Id())) @@ -337,6 +342,8 @@ func resourceComputeRegionSslCertificateDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionSslCertificate %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -346,6 +353,7 @@ func resourceComputeRegionSslCertificateDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionSslCertificate") diff --git a/google-beta/services/compute/resource_compute_region_ssl_policy.go b/google-beta/services/compute/resource_compute_region_ssl_policy.go index 
e75d73ca92..e8550716cf 100644 --- a/google-beta/services/compute/resource_compute_region_ssl_policy.go +++ b/google-beta/services/compute/resource_compute_region_ssl_policy.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "time" @@ -240,6 +241,7 @@ func resourceComputeRegionSslPolicyCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -248,6 +250,7 @@ func resourceComputeRegionSslPolicyCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionSslPolicy: %s", err) @@ -300,12 +303,14 @@ func resourceComputeRegionSslPolicyRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionSslPolicy %q", d.Id())) @@ -396,6 +401,7 @@ func resourceComputeRegionSslPolicyUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating RegionSslPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -410,6 +416,7 @@ func resourceComputeRegionSslPolicyUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -456,6 +463,8 @@ func resourceComputeRegionSslPolicyDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionSslPolicy 
%q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -465,6 +474,7 @@ func resourceComputeRegionSslPolicyDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionSslPolicy") diff --git a/google-beta/services/compute/resource_compute_region_target_http_proxy.go b/google-beta/services/compute/resource_compute_region_target_http_proxy.go index b2ec39601a..c2e1309a87 100644 --- a/google-beta/services/compute/resource_compute_region_target_http_proxy.go +++ b/google-beta/services/compute/resource_compute_region_target_http_proxy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -163,6 +164,7 @@ func resourceComputeRegionTargetHttpProxyCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -171,6 +173,7 @@ func resourceComputeRegionTargetHttpProxyCreate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionTargetHttpProxy: %s", err) @@ -223,12 +226,14 @@ func resourceComputeRegionTargetHttpProxyRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionTargetHttpProxy %q", d.Id())) @@ -295,6 +300,8 @@ func resourceComputeRegionTargetHttpProxyUpdate(d *schema.ResourceData, meta int return err } + headers := make(http.Header) + // err == nil 
indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -308,6 +315,7 @@ func resourceComputeRegionTargetHttpProxyUpdate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating RegionTargetHttpProxy %q: %s", d.Id(), err) @@ -355,6 +363,8 @@ func resourceComputeRegionTargetHttpProxyDelete(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionTargetHttpProxy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -364,6 +374,7 @@ func resourceComputeRegionTargetHttpProxyDelete(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionTargetHttpProxy") diff --git a/google-beta/services/compute/resource_compute_region_target_https_proxy.go b/google-beta/services/compute/resource_compute_region_target_https_proxy.go index a5d0488f8b..934393da40 100644 --- a/google-beta/services/compute/resource_compute_region_target_https_proxy.go +++ b/google-beta/services/compute/resource_compute_region_target_https_proxy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -100,6 +101,21 @@ Accepted format is '//certificatemanager.googleapis.com/projects/{project}/locat DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, Description: `The Region in which the created target https proxy should reside. 
If it is not provided, the provider region is used.`, + }, + "server_tls_policy": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `A URL referring to a networksecurity.ServerTlsPolicy +resource that describes how the proxy should authenticate inbound +traffic. serverTlsPolicy only applies to a global TargetHttpsProxy +attached to globalForwardingRules with the loadBalancingScheme +set to INTERNAL_SELF_MANAGED or EXTERNAL or EXTERNAL_MANAGED. +For details on which ServerTlsPolicy resources are accepted with +INTERNAL_SELF_MANAGED and which with EXTERNAL or EXTERNAL_MANAGED +loadBalancingScheme, consult the ServerTlsPolicy documentation. +If left blank, communications are not encrypted.`, }, "ssl_certificates": { Type: schema.TypeList, @@ -191,6 +207,12 @@ func resourceComputeRegionTargetHttpsProxyCreate(d *schema.ResourceData, meta in } else if v, ok := d.GetOkExists("url_map"); !tpgresource.IsEmptyValue(reflect.ValueOf(urlMapProp)) && (ok || !reflect.DeepEqual(v, urlMapProp)) { obj["urlMap"] = urlMapProp } + serverTlsPolicyProp, err := expandComputeRegionTargetHttpsProxyServerTlsPolicy(d.Get("server_tls_policy"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("server_tls_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(serverTlsPolicyProp)) && (ok || !reflect.DeepEqual(v, serverTlsPolicyProp)) { + obj["serverTlsPolicy"] = serverTlsPolicyProp + } regionProp, err := expandComputeRegionTargetHttpsProxyRegion(d.Get("region"), d, config) if err != nil { return err @@ -222,6 +244,7 @@ func resourceComputeRegionTargetHttpsProxyCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -230,6 +253,7 @@ func resourceComputeRegionTargetHttpsProxyCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj,
Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionTargetHttpsProxy: %s", err) @@ -282,12 +306,14 @@ func resourceComputeRegionTargetHttpsProxyRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionTargetHttpsProxy %q", d.Id())) @@ -333,6 +359,9 @@ func resourceComputeRegionTargetHttpsProxyRead(d *schema.ResourceData, meta inte if err := d.Set("url_map", flattenComputeRegionTargetHttpsProxyUrlMap(res["urlMap"], d, config)); err != nil { return fmt.Errorf("Error reading RegionTargetHttpsProxy: %s", err) } + if err := d.Set("server_tls_policy", flattenComputeRegionTargetHttpsProxyServerTlsPolicy(res["serverTlsPolicy"], d, config)); err != nil { + return fmt.Errorf("Error reading RegionTargetHttpsProxy: %s", err) + } if err := d.Set("region", flattenComputeRegionTargetHttpsProxyRegion(res["region"], d, config)); err != nil { return fmt.Errorf("Error reading RegionTargetHttpsProxy: %s", err) } @@ -376,11 +405,18 @@ func resourceComputeRegionTargetHttpsProxyUpdate(d *schema.ResourceData, meta in obj["sslCertificates"] = sslCertificatesProp } + obj, err = resourceComputeRegionTargetHttpsProxyUpdateEncoder(d, meta, obj) + if err != nil { + return err + } + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/regions/{{region}}/targetHttpsProxies/{{name}}/setSslCertificates") if err != nil { return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -394,6 +430,7 @@ func resourceComputeRegionTargetHttpsProxyUpdate(d 
*schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating RegionTargetHttpsProxy %q: %s", d.Id(), err) @@ -418,11 +455,18 @@ func resourceComputeRegionTargetHttpsProxyUpdate(d *schema.ResourceData, meta in obj["urlMap"] = urlMapProp } + obj, err = resourceComputeRegionTargetHttpsProxyUpdateEncoder(d, meta, obj) + if err != nil { + return err + } + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/regions/{{region}}/targetHttpsProxies/{{name}}/setUrlMap") if err != nil { return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -436,6 +480,7 @@ func resourceComputeRegionTargetHttpsProxyUpdate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating RegionTargetHttpsProxy %q: %s", d.Id(), err) @@ -483,6 +528,8 @@ func resourceComputeRegionTargetHttpsProxyDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionTargetHttpsProxy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -492,6 +539,7 @@ func resourceComputeRegionTargetHttpsProxyDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionTargetHttpsProxy") @@ -584,6 +632,13 @@ func flattenComputeRegionTargetHttpsProxyUrlMap(v interface{}, d *schema.Resourc return tpgresource.ConvertSelfLinkToV1(v.(string)) } +func flattenComputeRegionTargetHttpsProxyServerTlsPolicy(v interface{}, d *schema.ResourceData, config 
*transport_tpg.Config) interface{} { + if v == nil { + return v + } + return tpgresource.ConvertSelfLinkToV1(v.(string)) + } + func flattenComputeRegionTargetHttpsProxyRegion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v @@ -656,6 +711,10 @@ func expandComputeRegionTargetHttpsProxyUrlMap(v interface{}, d tpgresource.Terr return f.RelativeLink(), nil } +func expandComputeRegionTargetHttpsProxyServerTlsPolicy(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandComputeRegionTargetHttpsProxyRegion(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { f, err := tpgresource.ParseGlobalFieldValue("regions", v.(string), "project", d, config, true) if err != nil { @@ -677,6 +736,19 @@ func resourceComputeRegionTargetHttpsProxyEncoder(d *schema.ResourceData, meta i return obj, nil } +func resourceComputeRegionTargetHttpsProxyUpdateEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { + + if _, ok := obj["certificateManagerCertificates"]; ok { + // The field certificateManagerCertificates should not be included in the API request; it should be renamed to `sslCertificates`. + // The API does not allow using both certificate manager certificates and sslCertificates. If that changes + // in the future, the encoder logic should change accordingly, because it will mean that both fields are no longer mutually exclusive.
+ log.Printf("[DEBUG] Converting the field CertificateManagerCertificates to sslCertificates before sending the request") + obj["sslCertificates"] = obj["certificateManagerCertificates"] + delete(obj, "certificateManagerCertificates") + } + return obj, nil +} + func resourceComputeRegionTargetHttpsProxyDecoder(d *schema.ResourceData, meta interface{}, res map[string]interface{}) (map[string]interface{}, error) { // Since both sslCertificates and certificateManagerCertificates map to the same API field (sslCertificates), we need to check the types // of certificates that exist in the array and decide whether to change the field to certificateManagerCertificate or not. diff --git a/google-beta/services/compute/resource_compute_region_target_https_proxy_generated_test.go b/google-beta/services/compute/resource_compute_region_target_https_proxy_generated_test.go index 13644a6c51..682aca832a 100644 --- a/google-beta/services/compute/resource_compute_region_target_https_proxy_generated_test.go +++ b/google-beta/services/compute/resource_compute_region_target_https_proxy_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeRegionTargetHttpsProxy_regionTargetHttpsProxyBasicExample(t * ResourceName: "google_compute_region_target_https_proxy.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ssl_policy", "url_map", "region"}, + ImportStateVerifyIgnore: []string{"ssl_policy", "url_map", "server_tls_policy", "region"}, }, }, }) @@ -114,6 +114,137 @@ resource "google_compute_region_health_check" "default" { `, context) } +func TestAccComputeRegionTargetHttpsProxy_regionTargetHttpsProxyMtlsExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + CheckDestroy:
testAccCheckComputeRegionTargetHttpsProxyDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeRegionTargetHttpsProxy_regionTargetHttpsProxyMtlsExample(context), + }, + { + ResourceName: "google_compute_region_target_https_proxy.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"ssl_policy", "url_map", "server_tls_policy", "region"}, + }, + }, + }) +} + +func testAccComputeRegionTargetHttpsProxy_regionTargetHttpsProxyMtlsExample(context map[string]interface{}) string { + return acctest.Nprintf(` +data "google_project" "project" { + provider = google-beta +} + +resource "google_compute_region_target_https_proxy" "default" { + provider = google-beta + region = "us-central1" + name = "tf-test-test-mtls-proxy%{random_suffix}" + url_map = google_compute_region_url_map.default.id + ssl_certificates = [google_compute_region_ssl_certificate.default.id] + server_tls_policy = google_network_security_server_tls_policy.default.id +} + +resource "google_certificate_manager_trust_config" "default" { + provider = google-beta + location = "us-central1" + name = "tf-test-my-trust-config%{random_suffix}" + description = "sample description for trust config" + + trust_stores { + trust_anchors { + pem_certificate = file("test-fixtures/ca_cert.pem") + } + intermediate_cas { + pem_certificate = file("test-fixtures/ca_cert.pem") + } + } + + labels = { + foo = "bar" + } +} + +resource "google_network_security_server_tls_policy" "default" { + provider = google-beta + location = "us-central1" + name = "tf-test-my-tls-policy%{random_suffix}" + description = "my description" + allow_open = "false" + mtls_policy { + client_validation_mode = "REJECT_INVALID" + client_validation_trust_config = "projects/${data.google_project.project.number}/locations/us-central1/trustConfigs/${google_certificate_manager_trust_config.default.name}" + } +} + +resource "google_compute_region_ssl_certificate" "default" { + provider = google-beta + 
region = "us-central1" + name = "tf-test-my-certificate%{random_suffix}" + private_key = file("test-fixtures/test.key") + certificate = file("test-fixtures/test.crt") +} + +resource "google_compute_region_url_map" "default" { + provider = google-beta + region = "us-central1" + name = "tf-test-url-map%{random_suffix}" + description = "a description" + + default_service = google_compute_region_backend_service.default.id + + host_rule { + hosts = ["mysite.com"] + path_matcher = "allpaths" + } + + path_matcher { + name = "allpaths" + default_service = google_compute_region_backend_service.default.id + + path_rule { + paths = ["/*"] + service = google_compute_region_backend_service.default.id + } + } +} + +resource "google_compute_region_backend_service" "default" { + provider = google-beta + region = "us-central1" + name = "tf-test-backend-service%{random_suffix}" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + load_balancing_scheme = "INTERNAL_MANAGED" + + health_checks = [google_compute_region_health_check.default.id] +} + +resource "google_compute_region_health_check" "default" { + provider = google-beta + region = "us-central1" + name = "tf-test-http-health-check%{random_suffix}" + check_interval_sec = 1 + timeout_sec = 1 + + http_health_check { + port = 80 + } +} +`, context) +} + func TestAccComputeRegionTargetHttpsProxy_regionTargetHttpsProxyCertificateManagerCertificateExample(t *testing.T) { t.Parallel() @@ -133,7 +264,7 @@ func TestAccComputeRegionTargetHttpsProxy_regionTargetHttpsProxyCertificateManag ResourceName: "google_compute_region_target_https_proxy.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ssl_policy", "url_map", "region"}, + ImportStateVerifyIgnore: []string{"ssl_policy", "url_map", "server_tls_policy", "region"}, }, }, }) diff --git a/google-beta/services/compute/resource_compute_region_target_tcp_proxy.go b/google-beta/services/compute/resource_compute_region_target_tcp_proxy.go 
index 46b4581d47..e14656bf2a 100644 --- a/google-beta/services/compute/resource_compute_region_target_tcp_proxy.go +++ b/google-beta/services/compute/resource_compute_region_target_tcp_proxy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -191,6 +192,7 @@ func resourceComputeRegionTargetTcpProxyCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -199,6 +201,7 @@ func resourceComputeRegionTargetTcpProxyCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionTargetTcpProxy: %s", err) @@ -251,12 +254,14 @@ func resourceComputeRegionTargetTcpProxyRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionTargetTcpProxy %q", d.Id())) @@ -324,6 +329,8 @@ func resourceComputeRegionTargetTcpProxyDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionTargetTcpProxy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -333,6 +340,7 @@ func resourceComputeRegionTargetTcpProxyDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionTargetTcpProxy") diff --git a/google-beta/services/compute/resource_compute_region_url_map.go 
b/google-beta/services/compute/resource_compute_region_url_map.go index cca1c5573d..dc3a996439 100644 --- a/google-beta/services/compute/resource_compute_region_url_map.go +++ b/google-beta/services/compute/resource_compute_region_url_map.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -2309,6 +2310,7 @@ func resourceComputeRegionUrlMapCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -2317,6 +2319,7 @@ func resourceComputeRegionUrlMapCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RegionUrlMap: %s", err) @@ -2369,12 +2372,14 @@ func resourceComputeRegionUrlMapRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRegionUrlMap %q", d.Id())) @@ -2510,6 +2515,7 @@ func resourceComputeRegionUrlMapUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating RegionUrlMap %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -2524,6 +2530,7 @@ func resourceComputeRegionUrlMapUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -2570,6 +2577,8 @@ func resourceComputeRegionUrlMapDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + 
headers := make(http.Header) + log.Printf("[DEBUG] Deleting RegionUrlMap %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -2579,6 +2588,7 @@ func resourceComputeRegionUrlMapDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RegionUrlMap") diff --git a/google-beta/services/compute/resource_compute_reservation.go b/google-beta/services/compute/resource_compute_reservation.go index 93070b5154..10ebbee322 100644 --- a/google-beta/services/compute/resource_compute_reservation.go +++ b/google-beta/services/compute/resource_compute_reservation.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "net/url" "reflect" "strconv" @@ -322,6 +323,7 @@ func resourceComputeReservationCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -330,6 +332,7 @@ func resourceComputeReservationCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Reservation: %s", err) @@ -382,12 +385,14 @@ func resourceComputeReservationRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeReservation %q", d.Id())) @@ -462,6 +467,7 @@ func resourceComputeReservationUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Reservation %q: %#v", 
d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("share_settings") { @@ -500,6 +506,7 @@ func resourceComputeReservationUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -538,6 +545,7 @@ func resourceComputeReservationUpdate(d *schema.ResourceData, meta interface{}) return err } + headers := make(http.Header) if d.HasChange("share_settings") { url, err = tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/zones/{{zone}}/reservations/{{name}}") if err != nil { @@ -563,6 +571,7 @@ func resourceComputeReservationUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Reservation %q: %s", d.Id(), err) @@ -610,6 +619,8 @@ func resourceComputeReservationDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Reservation %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -619,6 +630,7 @@ func resourceComputeReservationDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Reservation") diff --git a/google-beta/services/compute/resource_compute_resource_policy.go b/google-beta/services/compute/resource_compute_resource_policy.go index 0cb3c8cf8f..6f2be776a2 100644 --- a/google-beta/services/compute/resource_compute_resource_policy.go +++ b/google-beta/services/compute/resource_compute_resource_policy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -485,6 +486,7 @@ func resourceComputeResourcePolicyCreate(d *schema.ResourceData, meta 
interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -493,6 +495,7 @@ func resourceComputeResourcePolicyCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ResourcePolicy: %s", err) @@ -545,12 +548,14 @@ func resourceComputeResourcePolicyRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeResourcePolicy %q", d.Id())) @@ -620,6 +625,8 @@ func resourceComputeResourcePolicyDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ResourcePolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -629,6 +636,7 @@ func resourceComputeResourcePolicyDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ResourcePolicy") diff --git a/google-beta/services/compute/resource_compute_route.go b/google-beta/services/compute/resource_compute_route.go index b7d8999275..d9e625af52 100644 --- a/google-beta/services/compute/resource_compute_route.go +++ b/google-beta/services/compute/resource_compute_route.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -302,6 +303,7 @@ func resourceComputeRouteCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := 
make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -310,6 +312,7 @@ func resourceComputeRouteCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsPeeringOperationInProgress}, }) if err != nil { @@ -363,12 +366,14 @@ func resourceComputeRouteRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsPeeringOperationInProgress}, }) if err != nil { @@ -468,6 +473,8 @@ func resourceComputeRouteDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Route %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -477,6 +484,7 @@ func resourceComputeRouteDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsPeeringOperationInProgress}, }) if err != nil { diff --git a/google-beta/services/compute/resource_compute_router.go b/google-beta/services/compute/resource_compute_router.go index 98146c7de4..3013bdccdc 100644 --- a/google-beta/services/compute/resource_compute_router.go +++ b/google-beta/services/compute/resource_compute_router.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "time" @@ -153,6 +154,16 @@ CIDR-formatted string.`, }, }, }, + "identifier_range": { + Type: schema.TypeString, + 
Computed: true, + Optional: true, + Description: `Explicitly specifies a range of valid BGP Identifiers for this Router. +It is provided as a link-local IPv4 range (from 169.254.0.0/16), of +size at least /30, even if the BGP sessions are over IPv6. It must +not overlap with any IPv4 BGP session ranges. Other vendors commonly +call this router ID.`, + }, "keepalive_interval": { Type: schema.TypeInt, Optional: true, @@ -282,6 +293,7 @@ func resourceComputeRouterCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -290,6 +302,7 @@ func resourceComputeRouterCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Router: %s", err) @@ -342,12 +355,14 @@ func resourceComputeRouterRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRouter %q", d.Id())) @@ -427,6 +442,7 @@ func resourceComputeRouterUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Router %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -441,6 +457,7 @@ func resourceComputeRouterUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -494,6 +511,8 @@ func resourceComputeRouterDelete(d 
*schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Router %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -503,6 +522,7 @@ func resourceComputeRouterDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Router") @@ -579,6 +599,8 @@ func flattenComputeRouterBgp(v interface{}, d *schema.ResourceData, config *tran flattenComputeRouterBgpAdvertisedIpRanges(original["advertisedIpRanges"], d, config) transformed["keepalive_interval"] = flattenComputeRouterBgpKeepaliveInterval(original["keepaliveInterval"], d, config) + transformed["identifier_range"] = + flattenComputeRouterBgpIdentifierRange(original["identifierRange"], d, config) return []interface{}{transformed} } func flattenComputeRouterBgpAsn(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -650,6 +672,10 @@ func flattenComputeRouterBgpKeepaliveInterval(v interface{}, d *schema.ResourceD return v // let terraform core handle it otherwise } +func flattenComputeRouterBgpIdentifierRange(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenComputeRouterEncryptedInterconnectRouter(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -721,6 +747,13 @@ func expandComputeRouterBgp(v interface{}, d tpgresource.TerraformResourceData, transformed["keepaliveInterval"] = transformedKeepaliveInterval } + transformedIdentifierRange, err := expandComputeRouterBgpIdentifierRange(original["identifier_range"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedIdentifierRange); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["identifierRange"] = 
transformedIdentifierRange + } + return transformed, nil } @@ -777,6 +810,10 @@ func expandComputeRouterBgpKeepaliveInterval(v interface{}, d tpgresource.Terraf return v, nil } +func expandComputeRouterBgpIdentifierRange(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandComputeRouterEncryptedInterconnectRouter(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google-beta/services/compute/resource_compute_router_bgp_peer_test.go b/google-beta/services/compute/resource_compute_router_bgp_peer_test.go index 528389d496..f4525f0772 100644 --- a/google-beta/services/compute/resource_compute_router_bgp_peer_test.go +++ b/google-beta/services/compute/resource_compute_router_bgp_peer_test.go @@ -208,6 +208,48 @@ func TestAccComputeRouterPeer_Ipv6Basic(t *testing.T) { }) } +func TestAccComputeRouterPeer_Ipv4BasicCreateUpdate(t *testing.T) { + t.Parallel() + + routerName := fmt.Sprintf("tf-test-router-%s", acctest.RandString(t, 10)) + resourceName := "google_compute_router_peer.foobar" + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + CheckDestroy: testAccCheckComputeRouterPeerDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeRouterPeerIpv4(routerName), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeRouterPeerExists( + t, resourceName), + resource.TestCheckResourceAttr(resourceName, "enable_ipv4", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccComputeRouterPeerUpdateIpv4Address(routerName), + Check: resource.ComposeTestCheckFunc( + testAccCheckComputeRouterPeerExists( + t, resourceName), + resource.TestCheckResourceAttr(resourceName, "enable_ipv4", "true"), + 
resource.TestCheckResourceAttr(resourceName, "ipv4_nexthop_address", "169.254.1.2"), + resource.TestCheckResourceAttr(resourceName, "peer_ipv4_nexthop_address", "169.254.1.1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccComputeRouterPeer_UpdateIpv6Address(t *testing.T) { t.Parallel() @@ -606,6 +648,17 @@ func testAccComputeRouterPeerWithMd5AuthKey(routerName string) string { router = google_compute_router.foobar.name vpn_gateway_interface = 0 } + + resource "google_compute_vpn_tunnel" "foobar1" { + name = "%s1" + region = google_compute_subnetwork.foobar.region + vpn_gateway = google_compute_ha_vpn_gateway.foobar.id + peer_external_gateway = google_compute_external_vpn_gateway.external_gateway.id + peer_external_gateway_interface = 0 + shared_secret = "unguessable" + router = google_compute_router.foobar.name + vpn_gateway_interface = 1 + } resource "google_compute_router_interface" "foobar" { name = "%s" @@ -619,7 +672,7 @@ func testAccComputeRouterPeerWithMd5AuthKey(routerName string) string { name = "%s1" router = google_compute_router.foobar.name region = google_compute_router.foobar.region - vpn_tunnel = google_compute_vpn_tunnel.foobar.name + vpn_tunnel = google_compute_vpn_tunnel.foobar1.name ip_range = "169.254.4.1/30" depends_on = [ google_compute_router_interface.foobar @@ -657,7 +710,7 @@ func testAccComputeRouterPeerWithMd5AuthKey(routerName string) string { ] } `, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, - routerName) + routerName, routerName) } func testAccComputeRouterPeerWithMd5AuthKeyUpdate(routerName string) string { @@ -714,6 +767,17 @@ func testAccComputeRouterPeerWithMd5AuthKeyUpdate(routerName string) string { router = google_compute_router.foobar.name vpn_gateway_interface = 0 } + + resource "google_compute_vpn_tunnel" "foobar1" { + name 
= "%s1" + region = google_compute_subnetwork.foobar.region + vpn_gateway = google_compute_ha_vpn_gateway.foobar.id + peer_external_gateway = google_compute_external_vpn_gateway.external_gateway.id + peer_external_gateway_interface = 0 + shared_secret = "unguessable" + router = google_compute_router.foobar.name + vpn_gateway_interface = 1 + } resource "google_compute_router_interface" "foobar" { name = "%s" @@ -727,7 +791,7 @@ func testAccComputeRouterPeerWithMd5AuthKeyUpdate(routerName string) string { name = "%s1" router = google_compute_router.foobar.name region = google_compute_router.foobar.region - vpn_tunnel = google_compute_vpn_tunnel.foobar.name + vpn_tunnel = google_compute_vpn_tunnel.foobar1.name ip_range = "169.254.4.1/30" depends_on = [ google_compute_router_interface.foobar @@ -765,7 +829,7 @@ func testAccComputeRouterPeerWithMd5AuthKeyUpdate(routerName string) string { ] } `, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, - routerName) + routerName, routerName) } func testAccComputeRouterPeerKeepRouter(routerName string) string { @@ -1399,8 +1463,8 @@ resource "google_compute_router_peer" "foobar" { peer_asn = 65515 advertised_route_priority = 100 interface = google_compute_router_interface.foobar.name - enable_ipv6 = %v + } `, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, enableIpv6) } @@ -1475,10 +1539,181 @@ resource "google_compute_router_peer" "foobar" { peer_asn = 65515 advertised_route_priority = 100 interface = google_compute_router_interface.foobar.name - enable_ipv6 = %v ipv6_nexthop_address = "2600:2d00:0000:0002:0000:0000:0000:0001" peer_ipv6_nexthop_address = "2600:2d00:0:2::2" } `, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName, enableIpv6) } + +func testAccComputeRouterPeerIpv4(routerName string) string { + return 
fmt.Sprintf(`resource "google_compute_network" "foobar" { + provider = google-beta + name = "%s-net" + auto_create_subnetworks = false + } + + resource "google_compute_subnetwork" "foobar" { + provider = google-beta + name = "%s-subnet" + network = google_compute_network.foobar.self_link + ip_cidr_range = "10.0.0.0/16" + region = "us-central1" + stack_type = "IPV4_IPV6" + ipv6_access_type = "EXTERNAL" + } + + resource "google_compute_ha_vpn_gateway" "foobar" { + provider = google-beta + name = "%s-gateway" + network = google_compute_network.foobar.self_link + region = google_compute_subnetwork.foobar.region + stack_type = "IPV4_IPV6" + } + + resource "google_compute_external_vpn_gateway" "external_gateway" { + provider = google-beta + name = "%s-external-gateway" + redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT" + description = "An externally managed VPN gateway" + interface { + id = 0 + ip_address = "8.8.8.8" + } + } + + resource "google_compute_router" "foobar" { + provider = google-beta + name = "%s" + region = google_compute_subnetwork.foobar.region + network = google_compute_network.foobar.self_link + bgp { + asn = 64514 + } + } + + resource "google_compute_vpn_tunnel" "foobar" { + provider = google-beta + name = "%s-tunnel" + region = google_compute_subnetwork.foobar.region + vpn_gateway = google_compute_ha_vpn_gateway.foobar.id + peer_external_gateway = google_compute_external_vpn_gateway.external_gateway.id + peer_external_gateway_interface = 0 + shared_secret = "unguessable" + router = google_compute_router.foobar.name + vpn_gateway_interface = 0 + } + + resource "google_compute_router_interface" "foobar" { + provider = google-beta + name = "%s-interface" + router = google_compute_router.foobar.name + region = google_compute_router.foobar.region + vpn_tunnel = google_compute_vpn_tunnel.foobar.name + ip_range = "fdff:1::1:1/126" + } + + resource "google_compute_router_peer" "foobar" { + provider = google-beta + name = "%s-peer" + router = 
google_compute_router.foobar.name + region = google_compute_router.foobar.region + peer_asn = 65515 + advertised_route_priority = 100 + interface = google_compute_router_interface.foobar.name + ip_address = "fdff:1::1:1" + peer_ip_address = "fdff:1::1:2" + + enable_ipv4 = true + enable_ipv6 = true + ipv4_nexthop_address = "169.254.1.1" + peer_ipv4_nexthop_address = "169.254.1.2" + } + `, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName) +} + +func testAccComputeRouterPeerUpdateIpv4Address(routerName string) string { + return fmt.Sprintf(`resource "google_compute_network" "foobar" { + provider = google-beta + name = "%s-net" + auto_create_subnetworks = false + } + + resource "google_compute_subnetwork" "foobar" { + provider = google-beta + name = "%s-subnet" + network = google_compute_network.foobar.self_link + ip_cidr_range = "10.0.0.0/16" + region = "us-central1" + stack_type = "IPV4_IPV6" + ipv6_access_type = "EXTERNAL" + } + + resource "google_compute_ha_vpn_gateway" "foobar" { + provider = google-beta + name = "%s-gateway" + network = google_compute_network.foobar.self_link + region = google_compute_subnetwork.foobar.region + stack_type = "IPV4_IPV6" + } + + resource "google_compute_external_vpn_gateway" "external_gateway" { + provider = google-beta + name = "%s-external-gateway" + redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT" + description = "An externally managed VPN gateway" + interface { + id = 0 + ip_address = "8.8.8.8" + } + } + + resource "google_compute_router" "foobar" { + provider = google-beta + name = "%s" + region = google_compute_subnetwork.foobar.region + network = google_compute_network.foobar.self_link + bgp { + asn = 64514 + } + } + + resource "google_compute_vpn_tunnel" "foobar" { + provider = google-beta + name = "%s-tunnel" + region = google_compute_subnetwork.foobar.region + vpn_gateway = google_compute_ha_vpn_gateway.foobar.id + peer_external_gateway = 
google_compute_external_vpn_gateway.external_gateway.id + peer_external_gateway_interface = 0 + shared_secret = "unguessable" + router = google_compute_router.foobar.name + vpn_gateway_interface = 0 + } + + resource "google_compute_router_interface" "foobar" { + provider = google-beta + name = "%s-interface" + router = google_compute_router.foobar.name + region = google_compute_router.foobar.region + vpn_tunnel = google_compute_vpn_tunnel.foobar.name + ip_range = "fdff:1::1:1/126" + } + + resource "google_compute_router_peer" "foobar" { + provider = google-beta + name = "%s-peer" + router = google_compute_router.foobar.name + region = google_compute_router.foobar.region + peer_asn = 65515 + advertised_route_priority = 100 + interface = google_compute_router_interface.foobar.name + ip_address = "fdff:1::1:1" + peer_ip_address = "fdff:1::1:2" + + enable_ipv4 = true + enable_ipv6 = true + ipv4_nexthop_address = "169.254.1.2" + peer_ipv4_nexthop_address = "169.254.1.1" + } + `, routerName, routerName, routerName, routerName, routerName, routerName, routerName, routerName) +} diff --git a/google-beta/services/compute/resource_compute_router_interface.go b/google-beta/services/compute/resource_compute_router_interface.go index e87dc42246..0573cf269a 100644 --- a/google-beta/services/compute/resource_compute_router_interface.go +++ b/google-beta/services/compute/resource_compute_router_interface.go @@ -14,6 +14,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/verify" "google.golang.org/api/googleapi" compute "google.golang.org/api/compute/v0.beta" @@ -77,6 +78,14 @@ func ResourceComputeRouterInterface() *schema.Resource { AtLeastOneOf: []string{"ip_range", "interconnect_attachment", "subnetwork", "vpn_tunnel"}, Description: `The IP address and range of the interface. 
The IP range must be in the RFC3927 link-local IP space. Changing this forces a new interface to be created.`, }, + "ip_version": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + ValidateFunc: verify.ValidateEnum([]string{"IPV4", "IPV6"}), + Description: `IP version of this interface.`, + }, "private_ip_address": { Type: schema.TypeString, Optional: true, @@ -175,6 +184,10 @@ func resourceComputeRouterInterfaceCreate(d *schema.ResourceData, meta interface iface.IpRange = ipRangeVal.(string) } + if ipVersionVal, ok := d.GetOk("ip_version"); ok { + iface.IpVersion = ipVersionVal.(string) + } + if privateIpVal, ok := d.GetOk("private_ip_address"); ok { iface.PrivateIpAddress = privateIpVal.(string) } @@ -266,6 +279,9 @@ func resourceComputeRouterInterfaceRead(d *schema.ResourceData, meta interface{} if err := d.Set("ip_range", iface.IpRange); err != nil { return fmt.Errorf("Error setting ip_range: %s", err) } + if err := d.Set("ip_version", iface.IpVersion); err != nil { + return fmt.Errorf("Error setting ip_version: %s", err) + } if err := d.Set("private_ip_address", iface.PrivateIpAddress); err != nil { return fmt.Errorf("Error setting private_ip_address: %s", err) } diff --git a/google-beta/services/compute/resource_compute_router_interface_test.go b/google-beta/services/compute/resource_compute_router_interface_test.go index 5a7c5598ef..025701c322 100644 --- a/google-beta/services/compute/resource_compute_router_interface_test.go +++ b/google-beta/services/compute/resource_compute_router_interface_test.go @@ -121,6 +121,52 @@ func TestAccComputeRouterInterface_withPrivateIpAddress(t *testing.T) { }) } +func TestAccComputeRouterInterface_withIPVersionV4(t *testing.T) { + t.Parallel() + + routerName := fmt.Sprintf("tf-test-router-%s", acctest.RandString(t, 10)) + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + 
CheckDestroy: testAccCheckComputeRouterInterfaceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeRouterInterfaceWithIpVersionIPV4(routerName), + Check: testAccCheckComputeRouterInterfaceExists( + t, "google_compute_router_interface.foobar"), + }, + { + ResourceName: "google_compute_router_interface.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccComputeRouterInterface_withIPVersionV6(t *testing.T) { + t.Parallel() + + routerName := fmt.Sprintf("tf-test-router-%s", acctest.RandString(t, 10)) + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + CheckDestroy: testAccCheckComputeRouterInterfaceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeRouterInterfaceWithIpVersionIPV6(routerName), + Check: testAccCheckComputeRouterInterfaceExists( + t, "google_compute_router_interface.foobar"), + }, + { + ResourceName: "google_compute_router_interface.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccCheckComputeRouterInterfaceDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s *terraform.State) error { config := acctest.GoogleProviderConfig(t) @@ -513,3 +559,71 @@ resource "google_compute_router_interface" "foobar" { } `, routerName, routerName, routerName, routerName, routerName) } + +func testAccComputeRouterInterfaceWithIpVersionIPV6(routerName string) string { + return fmt.Sprintf(` +resource "google_compute_network" "foobar" { + provider = google-beta + name = "%s-net" +} + +resource "google_compute_subnetwork" "foobar" { + provider = google-beta + name = "%s-subnet" + network = google_compute_network.foobar.self_link + ip_cidr_range = "10.0.0.0/16" +} + +resource "google_compute_router" "foobar" { + provider = google-beta + name = "%s" + network = google_compute_network.foobar.self_link + bgp { + asn = 
64514 + } +} + +resource "google_compute_router_interface" "foobar" { + provider = google-beta + name = "%s-interface" + router = google_compute_router.foobar.name + region = google_compute_router.foobar.region + ip_range = "fdff:1::1:1/126" + ip_version = "IPV6" +} +`, routerName, routerName, routerName, routerName) +} + +func testAccComputeRouterInterfaceWithIpVersionIPV4(routerName string) string { + return fmt.Sprintf(` +resource "google_compute_network" "foobar" { + provider = google-beta + name = "%s-net" +} + +resource "google_compute_subnetwork" "foobar" { + provider = google-beta + name = "%s-subnet" + network = google_compute_network.foobar.self_link + ip_cidr_range = "10.0.0.0/16" +} + +resource "google_compute_router" "foobar" { + provider = google-beta + name = "%s" + network = google_compute_network.foobar.self_link + bgp { + asn = 64514 + } +} + +resource "google_compute_router_interface" "foobar" { + provider = google-beta + name = "%s-interface" + router = google_compute_router.foobar.name + region = google_compute_router.foobar.region + ip_range = "169.254.3.1/30" + ip_version = "IPV4" +} +`, routerName, routerName, routerName, routerName) +} diff --git a/google-beta/services/compute/resource_compute_router_nat.go b/google-beta/services/compute/resource_compute_router_nat.go index 04620d56a9..ebd25f1fd9 100644 --- a/google-beta/services/compute/resource_compute_router_nat.go +++ b/google-beta/services/compute/resource_compute_router_nat.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -273,6 +274,20 @@ Mutually exclusive with enableEndpointIndependentMapping.`, Description: `Enable endpoint independent mapping. For more information see the [official documentation](https://cloud.google.com/nat/docs/overview#specs-rfcs).`, }, + "endpoint_types": { + Type: schema.TypeList, + Computed: true, + Optional: true, + ForceNew: true, + Description: `Specifies the endpoint Types supported by the NAT Gateway. 
+Supported values include: + 'ENDPOINT_TYPE_VM', 'ENDPOINT_TYPE_SWG', + 'ENDPOINT_TYPE_MANAGED_PROXY_LB'.`, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, "icmp_idle_timeout_sec": { Type: schema.TypeInt, Optional: true, @@ -634,6 +649,12 @@ func resourceComputeRouterNatCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("log_config"); ok || !reflect.DeepEqual(v, logConfigProp) { obj["logConfig"] = logConfigProp } + endpointTypesProp, err := expandNestedComputeRouterNatEndpointTypes(d.Get("endpoint_types"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("endpoint_types"); !tpgresource.IsEmptyValue(reflect.ValueOf(endpointTypesProp)) && (ok || !reflect.DeepEqual(v, endpointTypesProp)) { + obj["endpointTypes"] = endpointTypesProp + } rulesProp, err := expandNestedComputeRouterNatRules(d.Get("rules"), d, config) if err != nil { return err @@ -684,6 +705,7 @@ func resourceComputeRouterNatCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) // validates if the field action.source_nat_active_ranges is filled when the type is PRIVATE. 
natType := d.Get("type").(string) if natType == "PRIVATE" { @@ -719,6 +741,7 @@ func resourceComputeRouterNatCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RouterNat: %s", err) @@ -771,12 +794,14 @@ func resourceComputeRouterNatRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeRouterNat %q", d.Id())) @@ -843,6 +868,9 @@ func resourceComputeRouterNatRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("log_config", flattenNestedComputeRouterNatLogConfig(res["logConfig"], d, config)); err != nil { return fmt.Errorf("Error reading RouterNat: %s", err) } + if err := d.Set("endpoint_types", flattenNestedComputeRouterNatEndpointTypes(res["endpointTypes"], d, config)); err != nil { + return fmt.Errorf("Error reading RouterNat: %s", err) + } if err := d.Set("rules", flattenNestedComputeRouterNatRules(res["rules"], d, config)); err != nil { return fmt.Errorf("Error reading RouterNat: %s", err) } @@ -982,6 +1010,7 @@ func resourceComputeRouterNatUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating RouterNat %q: %#v", d.Id(), obj) + headers := make(http.Header) // validates if the field action.source_nat_active_ranges is filled when the type is PRIVATE. 
natType := d.Get("type").(string) if natType == "PRIVATE" { @@ -1028,6 +1057,7 @@ func resourceComputeRouterNatUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1086,6 +1116,8 @@ func resourceComputeRouterNatDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RouterNat %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1095,6 +1127,7 @@ func resourceComputeRouterNatDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RouterNat") @@ -1331,6 +1364,10 @@ func flattenNestedComputeRouterNatLogConfigFilter(v interface{}, d *schema.Resou return v } +func flattenNestedComputeRouterNatEndpointTypes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenNestedComputeRouterNatRules(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v @@ -1599,6 +1636,10 @@ func expandNestedComputeRouterNatLogConfigFilter(v interface{}, d tpgresource.Te return v, nil } +func expandNestedComputeRouterNatEndpointTypes(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandNestedComputeRouterNatRules(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { v = v.(*schema.Set).List() l := v.([]interface{}) diff --git a/google-beta/services/compute/resource_compute_router_nat_test.go b/google-beta/services/compute/resource_compute_router_nat_test.go index 0c4247c9c8..553c218a30 100644 --- 
a/google-beta/services/compute/resource_compute_router_nat_test.go +++ b/google-beta/services/compute/resource_compute_router_nat_test.go @@ -406,6 +406,66 @@ func TestAccComputeRouterNat_withNatRules(t *testing.T) { }) } +func TestAccComputeRouterNat_withEndpointTypes(t *testing.T) { + t.Parallel() + + testId := acctest.RandString(t, 10) + routerName := fmt.Sprintf("tf-test-router-nat-%s", testId) + testResourceName := "google_compute_router_nat.foobar" + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckComputeRouterNatDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeRouterNatBasic(routerName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(testResourceName, "endpoint_types.0", "ENDPOINT_TYPE_VM"), + ), + }, + { + ResourceName: testResourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccComputeRouterNatUpdateEndpointType(routerName, "ENDPOINT_TYPE_SWG"), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(testResourceName, "endpoint_types.0", "ENDPOINT_TYPE_SWG"), + ), + }, + { + ResourceName: testResourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccComputeRouterNatUpdateEndpointType(routerName, "ENDPOINT_TYPE_VM"), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(testResourceName, "endpoint_types.0", "ENDPOINT_TYPE_VM"), + ), + }, + { + ResourceName: testResourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccComputeRouterNatUpdateEndpointType(routerName, "ENDPOINT_TYPE_MANAGED_PROXY_LB"), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(testResourceName, "endpoint_types.0", "ENDPOINT_TYPE_MANAGED_PROXY_LB"), + ), + }, + { + ResourceName: testResourceName, + ImportState: true, + ImportStateVerify: true, + }, + 
}, + }) +} + func TestAccComputeRouterNat_withPrivateNat(t *testing.T) { t.Parallel() @@ -811,6 +871,40 @@ resource "google_compute_router_nat" "foobar" { `, routerName, routerName, routerName, routerName, routerName) } +func testAccComputeRouterNatUpdateEndpointType(routerName string, endpointType string) string { + return fmt.Sprintf(` +resource "google_compute_network" "foobar" { + name = "%[1]s-net" +} + +resource "google_compute_subnetwork" "foobar" { + name = "%[1]s-subnet" + network = google_compute_network.foobar.self_link + ip_cidr_range = "10.0.0.0/16" + region = "us-central1" +} + +resource "google_compute_router" "foobar" { + name = "%[1]s" + region = google_compute_subnetwork.foobar.region + network = google_compute_network.foobar.self_link +} + +resource "google_compute_router_nat" "foobar" { + name = "%[1]s" + router = google_compute_router.foobar.name + region = google_compute_router.foobar.region + nat_ip_allocate_option = "AUTO_ONLY" + source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES" + endpoint_types = [ "%[2]s" ] + log_config { + enable = true + filter = "ERRORS_ONLY" + } +} +`, routerName, endpointType) +} + func testAccComputeRouterNatUpdateToNatIPsId(routerName string) string { return fmt.Sprintf(` resource "google_compute_router" "foobar" { diff --git a/google-beta/services/compute/resource_compute_router_peer.go b/google-beta/services/compute/resource_compute_router_peer.go index 69684e3f65..0b6e677998 100644 --- a/google-beta/services/compute/resource_compute_router_peer.go +++ b/google-beta/services/compute/resource_compute_router_peer.go @@ -208,6 +208,12 @@ The default is true.`, Description: `Enable IPv6 traffic over BGP Peer. If not specified, it is disabled by default.`, Default: false, }, + "enable_ipv4": { + Type: schema.TypeBool, + Optional: true, + Description: `Enable IPv4 traffic over BGP Peer. 
It is enabled by default if the peerIpAddress is version 4.`, + Computed: true, + }, "ip_address": { Type: schema.TypeString, Computed: true, @@ -226,6 +232,13 @@ The address must be in the range 2600:2d00:0:2::/64 or 2600:2d00:0:3::/64. If you do not specify the next hop addresses, Google Cloud automatically assigns unused addresses from the 2600:2d00:0:2::/64 or 2600:2d00:0:3::/64 range for you.`, }, + "ipv4_nexthop_address": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ValidateFunc: verify.ValidateIpAddress, + Description: `IPv4 address of the interface inside Google Cloud Platform.`, + }, "peer_ip_address": { Type: schema.TypeString, Computed: true, @@ -244,6 +257,13 @@ The address must be in the range 2600:2d00:0:2::/64 or 2600:2d00:0:3::/64. If you do not specify the next hop addresses, Google Cloud automatically assigns unused addresses from the 2600:2d00:0:2::/64 or 2600:2d00:0:3::/64 range for you.`, }, + "peer_ipv4_nexthop_address": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ValidateFunc: verify.ValidateIpAddress, + Description: `IPv4 address of the BGP interface outside Google Cloud Platform.`, + }, "region": { Type: schema.TypeString, Computed: true, @@ -396,6 +416,24 @@ func resourceComputeRouterBgpPeerCreate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("enable_ipv6"); ok || !reflect.DeepEqual(v, enableIpv6Prop) { obj["enableIpv6"] = enableIpv6Prop } + enableIpv4Prop, err := expandNestedComputeRouterBgpPeerEnableIpv4(d.Get("enable_ipv4"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("enable_ipv4"); ok || !reflect.DeepEqual(v, enableIpv4Prop) { + obj["enableIpv4"] = enableIpv4Prop + } + ipv4NexthopAddressProp, err := expandNestedComputeRouterBgpPeerIpv4NexthopAddress(d.Get("ipv4_nexthop_address"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("ipv4_nexthop_address"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(ipv4NexthopAddressProp)) && (ok || !reflect.DeepEqual(v, ipv4NexthopAddressProp)) { + obj["ipv4NexthopAddress"] = ipv4NexthopAddressProp + } + peerIpv4NexthopAddressProp, err := expandNestedComputeRouterBgpPeerPeerIpv4NexthopAddress(d.Get("peer_ipv4_nexthop_address"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("peer_ipv4_nexthop_address"); !tpgresource.IsEmptyValue(reflect.ValueOf(peerIpv4NexthopAddressProp)) && (ok || !reflect.DeepEqual(v, peerIpv4NexthopAddressProp)) { + obj["peerIpv4NexthopAddress"] = peerIpv4NexthopAddressProp + } ipv6NexthopAddressProp, err := expandNestedComputeRouterBgpPeerIpv6NexthopAddress(d.Get("ipv6_nexthop_address"), d, config) if err != nil { return err @@ -587,6 +625,15 @@ func resourceComputeRouterBgpPeerRead(d *schema.ResourceData, meta interface{}) if err := d.Set("enable_ipv6", flattenNestedComputeRouterBgpPeerEnableIpv6(res["enableIpv6"], d, config)); err != nil { return fmt.Errorf("Error reading RouterBgpPeer: %s", err) } + if err := d.Set("enable_ipv4", flattenNestedComputeRouterBgpPeerEnableIpv4(res["enableIpv4"], d, config)); err != nil { + return fmt.Errorf("Error reading RouterBgpPeer: %s", err) + } + if err := d.Set("ipv4_nexthop_address", flattenNestedComputeRouterBgpPeerIpv4NexthopAddress(res["ipv4NexthopAddress"], d, config)); err != nil { + return fmt.Errorf("Error reading RouterBgpPeer: %s", err) + } + if err := d.Set("peer_ipv4_nexthop_address", flattenNestedComputeRouterBgpPeerPeerIpv4NexthopAddress(res["peerIpv4NexthopAddress"], d, config)); err != nil { + return fmt.Errorf("Error reading RouterBgpPeer: %s", err) + } if err := d.Set("ipv6_nexthop_address", flattenNestedComputeRouterBgpPeerIpv6NexthopAddress(res["ipv6NexthopAddress"], d, config)); err != nil { return fmt.Errorf("Error reading RouterBgpPeer: %s", err) } @@ -682,6 +729,24 @@ func resourceComputeRouterBgpPeerUpdate(d *schema.ResourceData, meta interface{} } else if v, ok :=
d.GetOkExists("enable_ipv6"); ok || !reflect.DeepEqual(v, enableIpv6Prop) { obj["enableIpv6"] = enableIpv6Prop } + enableIpv4Prop, err := expandNestedComputeRouterBgpPeerEnableIpv4(d.Get("enable_ipv4"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("enable_ipv4"); ok || !reflect.DeepEqual(v, enableIpv4Prop) { + obj["enableIpv4"] = enableIpv4Prop + } + ipv4NexthopAddressProp, err := expandNestedComputeRouterBgpPeerIpv4NexthopAddress(d.Get("ipv4_nexthop_address"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("ipv4_nexthop_address"); !tpgresource.IsEmptyValue(reflect.ValueOf(ipv4NexthopAddressProp)) && (ok || !reflect.DeepEqual(v, ipv4NexthopAddressProp)) { + obj["ipv4NexthopAddress"] = ipv4NexthopAddressProp + } + peerIpv4NexthopAddressProp, err := expandNestedComputeRouterBgpPeerPeerIpv4NexthopAddress(d.Get("peer_ipv4_nexthop_address"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("peer_ipv4_nexthop_address"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, peerIpv4NexthopAddressProp)) { + obj["peerIpv4NexthopAddress"] = peerIpv4NexthopAddressProp + } ipv6NexthopAddressProp, err := expandNestedComputeRouterBgpPeerIpv6NexthopAddress(d.Get("ipv6_nexthop_address"), d, config) if err != nil { return err @@ -1055,6 +1120,18 @@ func flattenNestedComputeRouterBgpPeerEnableIpv6(v interface{}, d *schema.Resour return v } +func flattenNestedComputeRouterBgpPeerEnableIpv4(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenNestedComputeRouterBgpPeerIpv4NexthopAddress(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenNestedComputeRouterBgpPeerPeerIpv4NexthopAddress(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenNestedComputeRouterBgpPeerIpv6NexthopAddress(v interface{}, d 
*schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -1242,6 +1319,18 @@ func expandNestedComputeRouterBgpPeerEnableIpv6(v interface{}, d tpgresource.Ter return v, nil } +func expandNestedComputeRouterBgpPeerEnableIpv4(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNestedComputeRouterBgpPeerIpv4NexthopAddress(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNestedComputeRouterBgpPeerPeerIpv4NexthopAddress(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandNestedComputeRouterBgpPeerIpv6NexthopAddress(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google-beta/services/compute/resource_compute_router_test.go b/google-beta/services/compute/resource_compute_router_test.go index 21d96330c2..06ae344589 100644 --- a/google-beta/services/compute/resource_compute_router_test.go +++ b/google-beta/services/compute/resource_compute_router_test.go @@ -157,6 +157,39 @@ func TestAccComputeRouter_updateAddRemoveBGP(t *testing.T) { }) } +func TestAccComputeRouter_addAndUpdateIdentifierRangeBgp(t *testing.T) { + t.Parallel() + + testId := acctest.RandString(t, 10) + routerName := fmt.Sprintf("tf-test-router-%s", testId) + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + CheckDestroy: testAccCheckComputeRouterDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeRouter_addIdentifierRangeBgp(routerName), + }, + { + ResourceName: "google_compute_router.foobar", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: 
testAccComputeRouter_updateIdentifierRangeBgp(routerName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_compute_router.foobar", "bgp.0.identifier_range", "169.254.8.8/30"), + ), + }, + { + ResourceName: "google_compute_router.foobar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccComputeRouterBasic(routerName, resourceRegion string) string { return fmt.Sprintf(` resource "google_compute_network" "foobar" { @@ -253,3 +286,61 @@ resource "google_compute_router" "foobar" { } `, routerName, routerName, resourceRegion, routerName) } + +func testAccComputeRouter_addIdentifierRangeBgp(routerName string) string { + return fmt.Sprintf(` +resource "google_compute_network" "foobar" { + provider = google-beta + name = "%s-net" + auto_create_subnetworks = false +} + +resource "google_compute_router" "foobar" { + provider = google-beta + name = "%s" + network = google_compute_network.foobar.name + bgp { + asn = 64514 + advertise_mode = "CUSTOM" + advertised_groups = ["ALL_SUBNETS"] + advertised_ip_ranges { + range = "1.2.3.4" + } + advertised_ip_ranges { + range = "6.7.0.0/16" + } + identifier_range = "169.254.8.8/29" + keepalive_interval = 25 + } +} +`, routerName, routerName) +} + +func testAccComputeRouter_updateIdentifierRangeBgp(routerName string) string { + return fmt.Sprintf(` +resource "google_compute_network" "foobar" { + provider = google-beta + name = "%s-net" + auto_create_subnetworks = false +} + +resource "google_compute_router" "foobar" { + provider = google-beta + name = "%s" + network = google_compute_network.foobar.name + bgp { + asn = 64514 + advertise_mode = "CUSTOM" + advertised_groups = ["ALL_SUBNETS"] + advertised_ip_ranges { + range = "1.2.3.4" + } + advertised_ip_ranges { + range = "6.7.0.0/16" + } + identifier_range = "169.254.8.8/30" + keepalive_interval = 25 + } +} +`, routerName, routerName) +} diff --git a/google-beta/services/compute/resource_compute_security_policy.go 
b/google-beta/services/compute/resource_compute_security_policy.go index 41700a69ad..03b52c395c 100644 --- a/google-beta/services/compute/resource_compute_security_policy.go +++ b/google-beta/services/compute/resource_compute_security_policy.go @@ -37,9 +37,9 @@ func ResourceComputeSecurityPolicy() *schema.Resource { ), Timeouts: &schema.ResourceTimeout{ - Create: schema.DefaultTimeout(8 * time.Minute), - Update: schema.DefaultTimeout(8 * time.Minute), - Delete: schema.DefaultTimeout(8 * time.Minute), + Create: schema.DefaultTimeout(20 * time.Minute), + Update: schema.DefaultTimeout(20 * time.Minute), + Delete: schema.DefaultTimeout(20 * time.Minute), }, Schema: map[string]*schema.Schema{ diff --git a/google-beta/services/compute/resource_compute_security_policy_rule.go b/google-beta/services/compute/resource_compute_security_policy_rule.go new file mode 100644 index 0000000000..f2068f023c --- /dev/null +++ b/google-beta/services/compute/resource_compute_security_policy_rule.go @@ -0,0 +1,1239 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. 
+// +// ---------------------------------------------------------------------------- + +package compute + +import ( + "fmt" + "log" + "net/http" + "reflect" + "strings" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/verify" +) + +func ResourceComputeSecurityPolicyRule() *schema.Resource { + return &schema.Resource{ + Create: resourceComputeSecurityPolicyRuleCreate, + Read: resourceComputeSecurityPolicyRuleRead, + Update: resourceComputeSecurityPolicyRuleUpdate, + Delete: resourceComputeSecurityPolicyRuleDelete, + + Importer: &schema.ResourceImporter{ + State: resourceComputeSecurityPolicyRuleImport, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(20 * time.Minute), + Update: schema.DefaultTimeout(20 * time.Minute), + Delete: schema.DefaultTimeout(20 * time.Minute), + }, + + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + + Schema: map[string]*schema.Schema{ + "action": { + Type: schema.TypeString, + Required: true, + Description: `The Action to perform when the rule is matched. The following are the valid actions: + +* allow: allow access to target. + +* deny(STATUS): deny access to target, returns the HTTP response code specified. Valid values for STATUS are 403, 404, and 502. + +* rate_based_ban: limit client traffic to the configured threshold and ban the client if the traffic exceeds the threshold. Configure parameters for this action in RateLimitOptions. Requires rateLimitOptions to be set. + +* redirect: redirect to a different target. 
This can either be an internal reCAPTCHA redirect, or an external URL-based redirect via a 302 response. Parameters for this action can be configured via redirectOptions. This action is only supported in Global Security Policies of type CLOUD_ARMOR. + +* throttle: limit client traffic to the configured threshold. Configure parameters for this action in rateLimitOptions. Requires rateLimitOptions to be set for this.`, + }, + "priority": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + Description: `An integer indicating the priority of a rule in the list. +The priority must be a positive value between 0 and 2147483647. +Rules are evaluated from highest to lowest priority where 0 is the highest priority and 2147483647 is the lowest priority.`, + }, + "security_policy": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the security policy this rule belongs to.`, + }, + "description": { + Type: schema.TypeString, + Optional: true, + Description: `An optional description of this resource. Provide this property when you create the resource.`, + }, + "match": { + Type: schema.TypeList, + Optional: true, + Description: `A match condition that incoming traffic is evaluated against. +If it evaluates to true, the corresponding 'action' is enforced.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "config": { + Type: schema.TypeList, + Optional: true, + Description: `The configuration options available when specifying versionedExpr. +This field must be specified if versionedExpr is specified and cannot be specified if versionedExpr is not specified.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "src_ip_ranges": { + Type: schema.TypeList, + Optional: true, + Description: `CIDR IP address range. 
Maximum number of srcIpRanges allowed is 10.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + }, + }, + "expr": { + Type: schema.TypeList, + Optional: true, + Description: `User defined CEVAL expression. A CEVAL expression is used to specify match criteria such as origin.ip, source.region_code and contents in the request header.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "expression": { + Type: schema.TypeString, + Required: true, + Description: `Textual representation of an expression in Common Expression Language syntax. The application context of the containing message determines which well-known feature set of CEL is supported.`, + }, + }, + }, + }, + "versioned_expr": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"SRC_IPS_V1", ""}), + Description: `Preconfigured versioned expression. If this field is specified, config must also be specified. +Available preconfigured expressions along with their requirements are: SRC_IPS_V1 - must specify the corresponding srcIpRange field in config. Possible values: ["SRC_IPS_V1"]`, + }, + }, + }, + }, + "preconfigured_waf_config": { + Type: schema.TypeList, + Optional: true, + Description: `Preconfigured WAF configuration to be applied for the rule. 
+If the rule does not evaluate preconfigured WAF rules, i.e., if evaluatePreconfiguredWaf() is not used, this field will have no effect.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "exclusion": { + Type: schema.TypeList, + Optional: true, + Description: `An exclusion to apply during preconfigured WAF evaluation.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "target_rule_set": { + Type: schema.TypeString, + Required: true, + Description: `Target WAF rule set to apply the preconfigured WAF exclusion.`, + }, + "request_cookie": { + Type: schema.TypeList, + Optional: true, + Description: `Request cookie whose value will be excluded from inspection during preconfigured WAF evaluation.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "operator": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"EQUALS", "STARTS_WITH", "ENDS_WITH", "CONTAINS", "EQUALS_ANY"}, false), + Description: `You can specify an exact match or a partial match by using a field operator and a field value. +Available options: +EQUALS: The operator matches if the field value equals the specified value. +STARTS_WITH: The operator matches if the field value starts with the specified value. +ENDS_WITH: The operator matches if the field value ends with the specified value. +CONTAINS: The operator matches if the field value contains the specified value. +EQUALS_ANY: The operator matches if the field value is any value.`, + }, + "value": { + Type: schema.TypeString, + Optional: true, + Description: `A request field matching the specified value will be excluded from inspection during preconfigured WAF evaluation. 
+The field value must be given if the field operator is not EQUALS_ANY, and cannot be given if the field operator is EQUALS_ANY.`, + }, + }, + }, + }, + "request_header": { + Type: schema.TypeList, + Optional: true, + Description: `Request header whose value will be excluded from inspection during preconfigured WAF evaluation.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "operator": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"EQUALS", "STARTS_WITH", "ENDS_WITH", "CONTAINS", "EQUALS_ANY"}, false), + Description: `You can specify an exact match or a partial match by using a field operator and a field value. +Available options: +EQUALS: The operator matches if the field value equals the specified value. +STARTS_WITH: The operator matches if the field value starts with the specified value. +ENDS_WITH: The operator matches if the field value ends with the specified value. +CONTAINS: The operator matches if the field value contains the specified value. +EQUALS_ANY: The operator matches if the field value is any value.`, + }, + "value": { + Type: schema.TypeString, + Optional: true, + Description: `A request field matching the specified value will be excluded from inspection during preconfigured WAF evaluation. +The field value must be given if the field operator is not EQUALS_ANY, and cannot be given if the field operator is EQUALS_ANY.`, + }, + }, + }, + }, + "request_query_param": { + Type: schema.TypeList, + Optional: true, + Description: `Request query parameter whose value will be excluded from inspection during preconfigured WAF evaluation. 
+Note that the parameter can be in the query string or in the POST body.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "operator": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"EQUALS", "STARTS_WITH", "ENDS_WITH", "CONTAINS", "EQUALS_ANY"}, false), + Description: `You can specify an exact match or a partial match by using a field operator and a field value. +Available options: +EQUALS: The operator matches if the field value equals the specified value. +STARTS_WITH: The operator matches if the field value starts with the specified value. +ENDS_WITH: The operator matches if the field value ends with the specified value. +CONTAINS: The operator matches if the field value contains the specified value. +EQUALS_ANY: The operator matches if the field value is any value.`, + }, + "value": { + Type: schema.TypeString, + Optional: true, + Description: `A request field matching the specified value will be excluded from inspection during preconfigured WAF evaluation. +The field value must be given if the field operator is not EQUALS_ANY, and cannot be given if the field operator is EQUALS_ANY.`, + }, + }, + }, + }, + "request_uri": { + Type: schema.TypeList, + Optional: true, + Description: `Request URI from the request line to be excluded from inspection during preconfigured WAF evaluation. +When specifying this field, the query or fragment part should be excluded.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "operator": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"EQUALS", "STARTS_WITH", "ENDS_WITH", "CONTAINS", "EQUALS_ANY"}, false), + Description: `You can specify an exact match or a partial match by using a field operator and a field value. +Available options: +EQUALS: The operator matches if the field value equals the specified value. +STARTS_WITH: The operator matches if the field value starts with the specified value. 
+ENDS_WITH: The operator matches if the field value ends with the specified value. +CONTAINS: The operator matches if the field value contains the specified value. +EQUALS_ANY: The operator matches if the field value is any value.`, + }, + "value": { + Type: schema.TypeString, + Optional: true, + Description: `A request field matching the specified value will be excluded from inspection during preconfigured WAF evaluation. +The field value must be given if the field operator is not EQUALS_ANY, and cannot be given if the field operator is EQUALS_ANY.`, + }, + }, + }, + }, + "target_rule_ids": { + Type: schema.TypeList, + Optional: true, + Description: `A list of target rule IDs under the WAF rule set to apply the preconfigured WAF exclusion. +If omitted, it refers to all the rule IDs under the WAF rule set.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + }, + }, + }, + }, + }, + "preview": { + Type: schema.TypeBool, + Optional: true, + Description: `If set to true, the specified action is not enforced.`, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + UseJSONNumber: true, + } +} + +func resourceComputeSecurityPolicyRuleCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + obj := make(map[string]interface{}) + descriptionProp, err := expandComputeSecurityPolicyRuleDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + priorityProp, err := expandComputeSecurityPolicyRulePriority(d.Get("priority"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("priority"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(priorityProp)) && (ok || !reflect.DeepEqual(v, priorityProp)) { + obj["priority"] = priorityProp + } + matchProp, err := expandComputeSecurityPolicyRuleMatch(d.Get("match"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("match"); !tpgresource.IsEmptyValue(reflect.ValueOf(matchProp)) && (ok || !reflect.DeepEqual(v, matchProp)) { + obj["match"] = matchProp + } + preconfiguredWafConfigProp, err := expandComputeSecurityPolicyRulePreconfiguredWafConfig(d.Get("preconfigured_waf_config"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("preconfigured_waf_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(preconfiguredWafConfigProp)) && (ok || !reflect.DeepEqual(v, preconfiguredWafConfigProp)) { + obj["preconfiguredWafConfig"] = preconfiguredWafConfigProp + } + actionProp, err := expandComputeSecurityPolicyRuleAction(d.Get("action"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("action"); !tpgresource.IsEmptyValue(reflect.ValueOf(actionProp)) && (ok || !reflect.DeepEqual(v, actionProp)) { + obj["action"] = actionProp + } + previewProp, err := expandComputeSecurityPolicyRulePreview(d.Get("preview"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("preview"); !tpgresource.IsEmptyValue(reflect.ValueOf(previewProp)) && (ok || !reflect.DeepEqual(v, previewProp)) { + obj["preview"] = previewProp + } + + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/securityPolicies/{{security_policy}}/addRule?priority={{priority}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Creating new SecurityPolicyRule: %#v", obj) + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for SecurityPolicyRule: %s", err) + } + billingProject = project + + // err == nil indicates that the billing_project 
value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "POST", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, + }) + if err != nil { + return fmt.Errorf("Error creating SecurityPolicyRule: %s", err) + } + + // Store the ID now + id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/global/securityPolicies/{{security_policy}}/priority/{{priority}}") + if err != nil { + return fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + err = ComputeOperationWaitTime( + config, res, project, "Creating SecurityPolicyRule", userAgent, + d.Timeout(schema.TimeoutCreate)) + + if err != nil { + // The resource didn't actually create + d.SetId("") + return fmt.Errorf("Error waiting to create SecurityPolicyRule: %s", err) + } + + log.Printf("[DEBUG] Finished creating SecurityPolicyRule %q: %#v", d.Id(), res) + + return resourceComputeSecurityPolicyRuleRead(d, meta) +} + +func resourceComputeSecurityPolicyRuleRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/securityPolicies/{{security_policy}}/getRule?priority={{priority}}") + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for SecurityPolicyRule: %s", err) + } + billingProject = project + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp 
+ } + + headers := make(http.Header) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeSecurityPolicyRule %q", d.Id())) + } + + if err := d.Set("project", project); err != nil { + return fmt.Errorf("Error reading SecurityPolicyRule: %s", err) + } + + if err := d.Set("description", flattenComputeSecurityPolicyRuleDescription(res["description"], d, config)); err != nil { + return fmt.Errorf("Error reading SecurityPolicyRule: %s", err) + } + if err := d.Set("priority", flattenComputeSecurityPolicyRulePriority(res["priority"], d, config)); err != nil { + return fmt.Errorf("Error reading SecurityPolicyRule: %s", err) + } + if err := d.Set("match", flattenComputeSecurityPolicyRuleMatch(res["match"], d, config)); err != nil { + return fmt.Errorf("Error reading SecurityPolicyRule: %s", err) + } + if err := d.Set("preconfigured_waf_config", flattenComputeSecurityPolicyRulePreconfiguredWafConfig(res["preconfiguredWafConfig"], d, config)); err != nil { + return fmt.Errorf("Error reading SecurityPolicyRule: %s", err) + } + if err := d.Set("action", flattenComputeSecurityPolicyRuleAction(res["action"], d, config)); err != nil { + return fmt.Errorf("Error reading SecurityPolicyRule: %s", err) + } + if err := d.Set("preview", flattenComputeSecurityPolicyRulePreview(res["preview"], d, config)); err != nil { + return fmt.Errorf("Error reading SecurityPolicyRule: %s", err) + } + + return nil +} + +func resourceComputeSecurityPolicyRuleUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return 
fmt.Errorf("Error fetching project for SecurityPolicyRule: %s", err) + } + billingProject = project + + obj := make(map[string]interface{}) + descriptionProp, err := expandComputeSecurityPolicyRuleDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + priorityProp, err := expandComputeSecurityPolicyRulePriority(d.Get("priority"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("priority"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, priorityProp)) { + obj["priority"] = priorityProp + } + matchProp, err := expandComputeSecurityPolicyRuleMatch(d.Get("match"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("match"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, matchProp)) { + obj["match"] = matchProp + } + preconfiguredWafConfigProp, err := expandComputeSecurityPolicyRulePreconfiguredWafConfig(d.Get("preconfigured_waf_config"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("preconfigured_waf_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, preconfiguredWafConfigProp)) { + obj["preconfiguredWafConfig"] = preconfiguredWafConfigProp + } + actionProp, err := expandComputeSecurityPolicyRuleAction(d.Get("action"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("action"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, actionProp)) { + obj["action"] = actionProp + } + previewProp, err := expandComputeSecurityPolicyRulePreview(d.Get("preview"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("preview"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, 
previewProp)) { + obj["preview"] = previewProp + } + + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/securityPolicies/{{security_policy}}/patchRule?priority={{priority}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Updating SecurityPolicyRule %q: %#v", d.Id(), obj) + headers := make(http.Header) + updateMask := []string{} + + if d.HasChange("description") { + updateMask = append(updateMask, "description") + } + + if d.HasChange("priority") { + updateMask = append(updateMask, "priority") + } + + if d.HasChange("match") { + updateMask = append(updateMask, "match") + } + + if d.HasChange("preconfigured_waf_config") { + updateMask = append(updateMask, "preconfiguredWafConfig") + } + + if d.HasChange("action") { + updateMask = append(updateMask, "action") + } + + if d.HasChange("preview") { + updateMask = append(updateMask, "preview") + } + // updateMask is a URL parameter but not present in the schema, so ReplaceVars + // won't set it + url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) + if err != nil { + return err + } + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + // if updateMask is empty we are not updating anything so skip the post + if len(updateMask) > 0 { + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "POST", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, + }) + + if err != nil { + return fmt.Errorf("Error updating SecurityPolicyRule %q: %s", d.Id(), err) + } else { + log.Printf("[DEBUG] Finished updating SecurityPolicyRule %q: %#v", d.Id(), res) + } + + err = ComputeOperationWaitTime( + config, res, project, "Updating SecurityPolicyRule", userAgent, + 
d.Timeout(schema.TimeoutUpdate)) + + if err != nil { + return err + } + } + + return resourceComputeSecurityPolicyRuleRead(d, meta) +} + +func resourceComputeSecurityPolicyRuleDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for SecurityPolicyRule: %s", err) + } + billingProject = project + + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/securityPolicies/{{security_policy}}/removeRule?priority={{priority}}") + if err != nil { + return err + } + + var obj map[string]interface{} + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + + log.Printf("[DEBUG] Deleting SecurityPolicyRule %q", d.Id()) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "POST", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, "SecurityPolicyRule") + } + + err = ComputeOperationWaitTime( + config, res, project, "Deleting SecurityPolicyRule", userAgent, + d.Timeout(schema.TimeoutDelete)) + + if err != nil { + return err + } + + log.Printf("[DEBUG] Finished deleting SecurityPolicyRule %q: %#v", d.Id(), res) + return nil +} + +func resourceComputeSecurityPolicyRuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + config := meta.(*transport_tpg.Config) + if err := tpgresource.ParseImportId([]string{ + 
"^projects/(?P<project>[^/]+)/global/securityPolicies/(?P<security_policy>[^/]+)/priority/(?P<priority>[^/]+)$", + "^(?P<project>[^/]+)/(?P<security_policy>[^/]+)/(?P<priority>[^/]+)$", + "^(?P<security_policy>[^/]+)/(?P<priority>[^/]+)$", + }, d, config); err != nil { + return nil, err + } + + // Replace import id for the resource id + id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/global/securityPolicies/{{security_policy}}/priority/{{priority}}") + if err != nil { + return nil, fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + return []*schema.ResourceData{d}, nil +} + +func flattenComputeSecurityPolicyRuleDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePriority(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenComputeSecurityPolicyRuleMatch(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["versioned_expr"] = + flattenComputeSecurityPolicyRuleMatchVersionedExpr(original["versionedExpr"], d, config) + transformed["expr"] = + flattenComputeSecurityPolicyRuleMatchExpr(original["expr"], d, config) + transformed["config"] = + flattenComputeSecurityPolicyRuleMatchConfig(original["config"], d, config) + return []interface{}{transformed} +} +func flattenComputeSecurityPolicyRuleMatchVersionedExpr(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func
flattenComputeSecurityPolicyRuleMatchExpr(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["expression"] = + flattenComputeSecurityPolicyRuleMatchExprExpression(original["expression"], d, config) + return []interface{}{transformed} +} +func flattenComputeSecurityPolicyRuleMatchExprExpression(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRuleMatchConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["src_ip_ranges"] = + flattenComputeSecurityPolicyRuleMatchConfigSrcIpRanges(original["srcIpRanges"], d, config) + return []interface{}{transformed} +} +func flattenComputeSecurityPolicyRuleMatchConfigSrcIpRanges(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePreconfiguredWafConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["exclusion"] = + flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusion(original["exclusions"], d, config) + return []interface{}{transformed} +} +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := 
raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "request_header": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeader(original["requestHeadersToExclude"], d, config), + "request_cookie": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookie(original["requestCookiesToExclude"], d, config), + "request_uri": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUri(original["requestUrisToExclude"], d, config), + "request_query_param": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParam(original["requestQueryParamsToExclude"], d, config), + "target_rule_set": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionTargetRuleSet(original["targetRuleSet"], d, config), + "target_rule_ids": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionTargetRuleIds(original["targetRuleIds"], d, config), + }) + } + return transformed +} +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeader(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "operator": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeaderOperator(original["op"], d, config), + "value": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeaderValue(original["val"], d, config), + }) + } + return transformed +} +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeaderOperator(v interface{}, d 
*schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeaderValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookie(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "operator": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookieOperator(original["op"], d, config), + "value": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookieValue(original["val"], d, config), + }) + } + return transformed +} +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookieOperator(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookieValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUri(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "operator": 
flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUriOperator(original["op"], d, config), + "value": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUriValue(original["val"], d, config), + }) + } + return transformed +} +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUriOperator(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUriValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParam(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "operator": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParamOperator(original["op"], d, config), + "value": flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParamValue(original["val"], d, config), + }) + } + return transformed +} +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParamOperator(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParamValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionTargetRuleSet(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + 
+func flattenComputeSecurityPolicyRulePreconfiguredWafConfigExclusionTargetRuleIds(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
+	return v
+}
+
+func flattenComputeSecurityPolicyRuleAction(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
+	return v
+}
+
+func flattenComputeSecurityPolicyRulePreview(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
+	return v
+}
+
+func expandComputeSecurityPolicyRuleDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePriority(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRuleMatch(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	if len(l) == 0 || l[0] == nil {
+		return nil, nil
+	}
+	raw := l[0]
+	original := raw.(map[string]interface{})
+	transformed := make(map[string]interface{})
+
+	transformedVersionedExpr, err := expandComputeSecurityPolicyRuleMatchVersionedExpr(original["versioned_expr"], d, config)
+	if err != nil {
+		return nil, err
+	} else if val := reflect.ValueOf(transformedVersionedExpr); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+		transformed["versionedExpr"] = transformedVersionedExpr
+	}
+
+	transformedExpr, err := expandComputeSecurityPolicyRuleMatchExpr(original["expr"], d, config)
+	if err != nil {
+		return nil, err
+	} else if val := reflect.ValueOf(transformedExpr); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+		transformed["expr"] = transformedExpr
+	}
+
+	transformedConfig, err := expandComputeSecurityPolicyRuleMatchConfig(original["config"], d, config)
+	if err != nil {
+		return nil, err
+	} else if val := reflect.ValueOf(transformedConfig); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+		transformed["config"] = transformedConfig
+	}
+
+	return transformed, nil
+}
+
+func expandComputeSecurityPolicyRuleMatchVersionedExpr(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRuleMatchExpr(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	if len(l) == 0 || l[0] == nil {
+		return nil, nil
+	}
+	raw := l[0]
+	original := raw.(map[string]interface{})
+	transformed := make(map[string]interface{})
+
+	transformedExpression, err := expandComputeSecurityPolicyRuleMatchExprExpression(original["expression"], d, config)
+	if err != nil {
+		return nil, err
+	} else if val := reflect.ValueOf(transformedExpression); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+		transformed["expression"] = transformedExpression
+	}
+
+	return transformed, nil
+}
+
+func expandComputeSecurityPolicyRuleMatchExprExpression(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRuleMatchConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	if len(l) == 0 || l[0] == nil {
+		return nil, nil
+	}
+	raw := l[0]
+	original := raw.(map[string]interface{})
+	transformed := make(map[string]interface{})
+
+	transformedSrcIpRanges, err := expandComputeSecurityPolicyRuleMatchConfigSrcIpRanges(original["src_ip_ranges"], d, config)
+	if err != nil {
+		return nil, err
+	} else if val := reflect.ValueOf(transformedSrcIpRanges); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+		transformed["srcIpRanges"] = transformedSrcIpRanges
+	}
+
+	return transformed, nil
+}
+
+func expandComputeSecurityPolicyRuleMatchConfigSrcIpRanges(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	if len(l) == 0 || l[0] == nil {
+		return nil, nil
+	}
+	raw := l[0]
+	original := raw.(map[string]interface{})
+	transformed := make(map[string]interface{})
+
+	transformedExclusion, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusion(original["exclusion"], d, config)
+	if err != nil {
+		return nil, err
+	} else if val := reflect.ValueOf(transformedExclusion); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+		transformed["exclusions"] = transformedExclusion
+	}
+
+	return transformed, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusion(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	req := make([]interface{}, 0, len(l))
+	for _, raw := range l {
+		if raw == nil {
+			continue
+		}
+		original := raw.(map[string]interface{})
+		transformed := make(map[string]interface{})
+
+		transformedRequestHeader, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeader(original["request_header"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedRequestHeader); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["requestHeadersToExclude"] = transformedRequestHeader
+		}
+
+		transformedRequestCookie, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookie(original["request_cookie"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedRequestCookie); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["requestCookiesToExclude"] = transformedRequestCookie
+		}
+
+		transformedRequestUri, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUri(original["request_uri"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedRequestUri); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["requestUrisToExclude"] = transformedRequestUri
+		}
+
+		transformedRequestQueryParam, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParam(original["request_query_param"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedRequestQueryParam); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["requestQueryParamsToExclude"] = transformedRequestQueryParam
+		}
+
+		transformedTargetRuleSet, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionTargetRuleSet(original["target_rule_set"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedTargetRuleSet); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["targetRuleSet"] = transformedTargetRuleSet
+		}
+
+		transformedTargetRuleIds, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionTargetRuleIds(original["target_rule_ids"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedTargetRuleIds); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["targetRuleIds"] = transformedTargetRuleIds
+		}
+
+		req = append(req, transformed)
+	}
+	return req, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeader(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	req := make([]interface{}, 0, len(l))
+	for _, raw := range l {
+		if raw == nil {
+			continue
+		}
+		original := raw.(map[string]interface{})
+		transformed := make(map[string]interface{})
+
+		transformedOperator, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeaderOperator(original["operator"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedOperator); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["op"] = transformedOperator
+		}
+
+		transformedValue, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeaderValue(original["value"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedValue); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["val"] = transformedValue
+		}
+
+		req = append(req, transformed)
+	}
+	return req, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeaderOperator(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestHeaderValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookie(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	req := make([]interface{}, 0, len(l))
+	for _, raw := range l {
+		if raw == nil {
+			continue
+		}
+		original := raw.(map[string]interface{})
+		transformed := make(map[string]interface{})
+
+		transformedOperator, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookieOperator(original["operator"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedOperator); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["op"] = transformedOperator
+		}
+
+		transformedValue, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookieValue(original["value"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedValue); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["val"] = transformedValue
+		}
+
+		req = append(req, transformed)
+	}
+	return req, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookieOperator(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestCookieValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUri(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	req := make([]interface{}, 0, len(l))
+	for _, raw := range l {
+		if raw == nil {
+			continue
+		}
+		original := raw.(map[string]interface{})
+		transformed := make(map[string]interface{})
+
+		transformedOperator, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUriOperator(original["operator"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedOperator); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["op"] = transformedOperator
+		}
+
+		transformedValue, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUriValue(original["value"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedValue); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["val"] = transformedValue
+		}
+
+		req = append(req, transformed)
+	}
+	return req, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUriOperator(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestUriValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParam(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	req := make([]interface{}, 0, len(l))
+	for _, raw := range l {
+		if raw == nil {
+			continue
+		}
+		original := raw.(map[string]interface{})
+		transformed := make(map[string]interface{})
+
+		transformedOperator, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParamOperator(original["operator"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedOperator); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["op"] = transformedOperator
+		}
+
+		transformedValue, err := expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParamValue(original["value"], d, config)
+		if err != nil {
+			return nil, err
+		} else if val := reflect.ValueOf(transformedValue); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+			transformed["val"] = transformedValue
+		}
+
+		req = append(req, transformed)
+	}
+	return req, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParamOperator(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionRequestQueryParamValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionTargetRuleSet(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreconfiguredWafConfigExclusionTargetRuleIds(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRuleAction(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
+func expandComputeSecurityPolicyRulePreview(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
diff --git a/google-beta/services/compute/resource_compute_security_policy_rule_generated_test.go b/google-beta/services/compute/resource_compute_security_policy_rule_generated_test.go
new file mode 100644
index 0000000000..a3744e3174
--- /dev/null
+++ b/google-beta/services/compute/resource_compute_security_policy_rule_generated_test.go
@@ -0,0 +1,182 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+// ----------------------------------------------------------------------------
+//
+// *** AUTO GENERATED CODE *** Type: MMv1 ***
+//
+// ----------------------------------------------------------------------------
+//
+// This file is automatically generated by Magic Modules and manual
+// changes will be clobbered when the file is regenerated.
+//
+// Please read more about how to change this file in
+// .github/CONTRIBUTING.md.
+//
+// ----------------------------------------------------------------------------
+
+package compute_test
+
+import (
+	"fmt"
+	"strings"
+	"testing"
+
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
+
+	"github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest"
+	"github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource"
+	transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport"
+)
+
+func TestAccComputeSecurityPolicyRule_securityPolicyRuleBasicExample(t *testing.T) {
+	t.Parallel()
+
+	context := map[string]interface{}{
+		"random_suffix": acctest.RandString(t, 10),
+	}
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckComputeSecurityPolicyRuleDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccComputeSecurityPolicyRule_securityPolicyRuleBasicExample(context),
+			},
+			{
+				ResourceName:            "google_compute_security_policy_rule.policy_rule",
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"security_policy"},
+			},
+		},
+	})
+}
+
+func testAccComputeSecurityPolicyRule_securityPolicyRuleBasicExample(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "default" {
+  name = "policyruletest%{random_suffix}"
+  description = "basic global security policy"
+  type = "CLOUD_ARMOR"
+}
+
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.default.name
+  description = "new rule"
+  priority = 100
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["10.10.0.0/16"]
+    }
+  }
+  action = "allow"
+  preview = true
+}
+`, context)
+}
+
+func TestAccComputeSecurityPolicyRule_securityPolicyRuleMultipleRulesExample(t *testing.T) {
+	t.Parallel()
+
+	context := map[string]interface{}{
+		"random_suffix": acctest.RandString(t, 10),
+	}
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckComputeSecurityPolicyRuleDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccComputeSecurityPolicyRule_securityPolicyRuleMultipleRulesExample(context),
+			},
+			{
+				ResourceName:            "google_compute_security_policy_rule.policy_rule_one",
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"security_policy"},
+			},
+		},
+	})
+}
+
+func testAccComputeSecurityPolicyRule_securityPolicyRuleMultipleRulesExample(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "default" {
+  name = "policywithmultiplerules%{random_suffix}"
+  description = "basic global security policy"
+  type = "CLOUD_ARMOR"
+}
+
+resource "google_compute_security_policy_rule" "policy_rule_one" {
+  security_policy = google_compute_security_policy.default.name
+  description = "new rule one"
+  priority = 100
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["10.10.0.0/16"]
+    }
+  }
+  action = "allow"
+  preview = true
+}
+
+resource "google_compute_security_policy_rule" "policy_rule_two" {
+  security_policy = google_compute_security_policy.default.name
+  description = "new rule two"
+  priority = 101
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["192.168.0.0/16", "10.0.0.0/8"]
+    }
+  }
+  action = "allow"
+  preview = true
+}
+`, context)
+}
+
+func testAccCheckComputeSecurityPolicyRuleDestroyProducer(t *testing.T) func(s *terraform.State) error {
+	return func(s *terraform.State) error {
+		for name, rs := range s.RootModule().Resources {
+			if rs.Type != "google_compute_security_policy_rule" {
+				continue
+			}
+			if strings.HasPrefix(name, "data.") {
+				continue
+			}
+
+			config := acctest.GoogleProviderConfig(t)
+
+			url, err := tpgresource.ReplaceVarsForTest(config, rs, "{{ComputeBasePath}}projects/{{project}}/global/securityPolicies/{{security_policy}}/getRule?priority={{priority}}")
+			if err != nil {
+				return err
+			}
+
+			billingProject := ""
+
+			if config.BillingProject != "" {
+				billingProject = config.BillingProject
+			}
+
+			_, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
+				Config:    config,
+				Method:    "GET",
+				Project:   billingProject,
+				RawURL:    url,
+				UserAgent: config.UserAgent,
+			})
+			if err == nil {
+				return fmt.Errorf("ComputeSecurityPolicyRule still exists at %s", url)
+			}
+		}
+
+		return nil
+	}
+}
diff --git a/google-beta/services/compute/resource_compute_security_policy_rule_sweeper.go b/google-beta/services/compute/resource_compute_security_policy_rule_sweeper.go
new file mode 100644
index 0000000000..22c749b6d5
--- /dev/null
+++ b/google-beta/services/compute/resource_compute_security_policy_rule_sweeper.go
@@ -0,0 +1,139 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+// ----------------------------------------------------------------------------
+//
+// *** AUTO GENERATED CODE *** Type: MMv1 ***
+//
+// ----------------------------------------------------------------------------
+//
+// This file is automatically generated by Magic Modules and manual
+// changes will be clobbered when the file is regenerated.
+//
+// Please read more about how to change this file in
+// .github/CONTRIBUTING.md.
+//
+// ----------------------------------------------------------------------------
+
+package compute
+
+import (
+	"context"
+	"log"
+	"strings"
+	"testing"
+
+	"github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar"
+	"github.com/hashicorp/terraform-provider-google-beta/google-beta/sweeper"
+	"github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource"
+	transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport"
+)
+
+func init() {
+	sweeper.AddTestSweepers("ComputeSecurityPolicyRule", testSweepComputeSecurityPolicyRule)
+}
+
+// At the time of writing, the CI only passes us-central1 as the region
+func testSweepComputeSecurityPolicyRule(region string) error {
+	resourceName := "ComputeSecurityPolicyRule"
+	log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName)
+
+	config, err := sweeper.SharedConfigForRegion(region)
+	if err != nil {
+		log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err)
+		return err
+	}
+
+	err = config.LoadAndValidate(context.Background())
+	if err != nil {
+		log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err)
+		return err
+	}
+
+	t := &testing.T{}
+	billingId := envvar.GetTestBillingAccountFromEnv(t)
+
+	// Setup variables to replace in list template
+	d := &tpgresource.ResourceDataMock{
+		FieldsInSchema: map[string]interface{}{
+			"project":         config.Project,
+			"region":          region,
+			"location":        region,
+			"zone":            "-",
+			"billing_account": billingId,
+		},
+	}
+
+	listTemplate := strings.Split("https://compute.googleapis.com/compute/beta/projects/{{project}}/global/securityPolicies/{{security_policy}}", "?")[0]
+	listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate)
+	if err != nil {
+		log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err)
+		return nil
+	}
+
+	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
+		Config:    config,
+		Method:    "GET",
+		Project:   config.Project,
+		RawURL:    listUrl,
+		UserAgent: config.UserAgent,
+	})
+	if err != nil {
+		log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err)
+		return nil
+	}
+
+	resourceList, ok := res["securityPolicyRules"]
+	if !ok {
+		log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.")
+		return nil
+	}
+
+	rl := resourceList.([]interface{})
+
+	log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName)
+	// Keep count of items that aren't sweepable for logging.
+	nonPrefixCount := 0
+	for _, ri := range rl {
+		obj := ri.(map[string]interface{})
+		if obj["name"] == nil {
+			log.Printf("[INFO][SWEEPER_LOG] %s resource name was nil", resourceName)
+			return nil
+		}
+
+		name := tpgresource.GetResourceNameFromSelfLink(obj["name"].(string))
+		// Skip resources that shouldn't be swept
+		if !sweeper.IsSweepableTestResource(name) {
+			nonPrefixCount++
+			continue
+		}
+
+		deleteTemplate := "https://compute.googleapis.com/compute/beta/projects/{{project}}/global/securityPolicies/{{security_policy}}/removeRule?priority={{priority}}"
+		deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate)
+		if err != nil {
+			log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err)
+			return nil
+		}
+		deleteUrl = deleteUrl + name
+
+		// Don't wait on operations as we may have a lot to delete
+		_, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
+			Config:    config,
+			Method:    "DELETE",
+			Project:   config.Project,
+			RawURL:    deleteUrl,
+			UserAgent: config.UserAgent,
+		})
+		if err != nil {
+			log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err)
+		} else {
+			log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name)
+		}
+	}
+
+	if nonPrefixCount > 0 {
+		log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount)
+	}
+
+	return nil
+}
diff --git a/google-beta/services/compute/resource_compute_security_policy_rule_test.go b/google-beta/services/compute/resource_compute_security_policy_rule_test.go
new file mode 100644
index 0000000000..2d0fe020bf
--- /dev/null
+++ b/google-beta/services/compute/resource_compute_security_policy_rule_test.go
@@ -0,0 +1,455 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+package compute_test
+
+import (
+	"github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest"
+	"regexp"
+	"testing"
+
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccComputeSecurityPolicyRule_basicUpdate(t *testing.T) {
+	t.Parallel()
+
+	context := map[string]interface{}{
+		"random_suffix": acctest.RandString(t, 10),
+	}
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckComputeSecurityPolicyRuleDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccComputeSecurityPolicyRule_preBasicUpdate(context),
+			},
+			{
+				ResourceName:      "google_compute_security_policy_rule.policy_rule",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccComputeSecurityPolicyRule_postBasicUpdate(context),
+			},
+			{
+				ResourceName:      "google_compute_security_policy_rule.policy_rule",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccComputeSecurityPolicyRule_withRuleExpr(t *testing.T) {
+	t.Parallel()
+
+	context := map[string]interface{}{
+		"random_suffix": acctest.RandString(t, 10),
+	}
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckComputeSecurityPolicyRuleDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccComputeSecurityPolicyRule_withRuleExpr(context),
+			},
+			{
+				ResourceName:      "google_compute_security_policy_rule.policy_rule",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccComputeSecurityPolicyRule_extendedUpdate(t *testing.T) {
+	t.Parallel()
+
+	context := map[string]interface{}{
+		"random_suffix": acctest.RandString(t, 10),
+	}
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckComputeSecurityPolicyRuleDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccComputeSecurityPolicyRule_extPreUpdate(context),
+			},
+			{
+				ResourceName:      "google_compute_security_policy_rule.policy_rule",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config:      testAccComputeSecurityPolicyRule_extPosUpdateSamePriority(context),
+				ExpectError: regexp.MustCompile("Cannot have rules with the same priorities."),
+			},
+			{
+				ResourceName:      "google_compute_security_policy_rule.policy_rule",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccComputeSecurityPolicyRule_extPosUpdate(context),
+			},
+			{
+				ResourceName:      "google_compute_security_policy_rule.policy_rule",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccComputeSecurityPolicyRule_withPreconfiguredWafConfig(t *testing.T) {
+	t.Parallel()
+
+	context := map[string]interface{}{
+		"random_suffix": acctest.RandString(t, 10),
+	}
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckComputeSecurityPolicyRuleDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccComputeSecurityPolicyRule_withPreconfiguredWafConfig_create(context),
+			},
+			{
+				ResourceName:      "google_compute_security_policy_rule.policy_rule",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccComputeSecurityPolicyRule_withPreconfiguredWafConfig_update(context),
+			},
+			{
+				ResourceName:      "google_compute_security_policy_rule.policy_rule",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccComputeSecurityPolicyRule_withPreconfiguredWafConfig_clear(context),
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckNoResourceAttr("google_compute_security_policy_rule.policy_rule", "preconfigured_waf_config.0"),
+				),
+			},
+			{
+				ResourceName:      "google_compute_security_policy_rule.policy_rule",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func testAccComputeSecurityPolicyRule_preBasicUpdate(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "default" {
+  name = "tf-test%{random_suffix}"
+  description = "basic global security policy"
+  type = "CLOUD_ARMOR"
+}
+
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.default.name
+  description = "basic rule pre update"
+  action = "allow"
+  priority = 100
+  preview = false
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["192.168.0.0/16", "10.0.0.0/8"]
+    }
+  }
+}
+`, context)
+}
+
+func testAccComputeSecurityPolicyRule_postBasicUpdate(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "default" {
+  name = "tf-test%{random_suffix}"
+  description = "basic global security policy"
+  type = "CLOUD_ARMOR"
+}
+
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.default.name
+  description = "basic rule post update"
+  action = "deny(403)"
+  priority = 100
+  preview = true
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["172.16.0.0/12"]
+    }
+  }
+}
`, context)
+}
+
+func testAccComputeSecurityPolicyRule_withRuleExpr(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "default" {
+  name = "tf-test%{random_suffix}"
+  description = "basic global security policy"
+}
+
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.default.name
+  description = "basic description"
+  action = "allow"
+  priority = "2000"
+  match {
+    expr {
+      expression = "evaluatePreconfiguredExpr('xss-canary')"
+    }
+  }
+  preview = true
+}
+`, context)
+}
+
+func testAccComputeSecurityPolicyRule_extPreUpdate(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "default" {
+  name = "tf-test%{random_suffix}"
+  description = "basic global security policy"
+}
+
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.default.name
+  description = "basic description"
+  action = "allow"
+  priority = "2000"
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["10.0.0.0/24"]
+    }
+  }
+  preview = true
+}
+`, context)
+}
+
+func testAccComputeSecurityPolicyRule_extPosUpdateSamePriority(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "default" {
+  name = "tf-test%{random_suffix}"
+  description = "basic global security policy"
+}
+
+//add this
+resource "google_compute_security_policy_rule" "policy_rule2" {
+  security_policy = google_compute_security_policy.default.name
+  description = "basic description"
+  action = "deny(403)"
+  priority = "2000"
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["10.0.0.0/24"]
+    }
+  }
+  preview = true
+}
+
+//keep this
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.default.name
+  description = "basic description"
+  action = "allow"
+  priority = "2000"
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["10.0.0.0/24"]
+    }
+  }
+  preview = true
+}
+`, context)
+}
+
+func testAccComputeSecurityPolicyRule_extPosUpdate(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "default" {
+  name = "tf-test%{random_suffix}"
+  description = "basic global security policy"
+}
+
+//add this
+resource "google_compute_security_policy_rule" "policy_rule2" {
+  security_policy = google_compute_security_policy.default.name
+  description = "basic description"
+  action = "deny(403)"
+  priority = "1000"
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["10.0.0.0/24"]
+    }
+  }
+  preview = true
+}
+
+//update this
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.default.name
+  description = "updated description"
+  action = "allow"
+  priority = "2000"
+  match {
+    versioned_expr = "SRC_IPS_V1"
+    config {
+      src_ip_ranges = ["10.0.0.0/24"]
+    }
+  }
+  preview = true
+}
+`, context)
+}
+
+func testAccComputeSecurityPolicyRule_withPreconfiguredWafConfig_create(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "policy" {
+  name = "tf-test%{random_suffix}"
+  description = "Global security policy - create"
+}
+
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.policy.name
+  description = "Rule with preconfiguredWafConfig - create"
+  action = "deny"
+  priority = "1000"
+  match {
+    expr {
+      expression = "evaluatePreconfiguredWaf('sqli-stable')"
+    }
+  }
+  preconfigured_waf_config {
+    exclusion {
+      request_cookie {
+        operator = "EQUALS_ANY"
+      }
+      request_header {
+        operator = "EQUALS"
+        value = "Referer"
+      }
+      request_uri {
+        operator = "STARTS_WITH"
+        value = "/admin"
+      }
+      request_query_param {
+        operator = "EQUALS"
+        value = "password"
+      }
+      request_query_param {
+        operator = "STARTS_WITH"
+        value = "freeform"
+      }
+      target_rule_set = "sqli-stable"
+    }
+    exclusion {
+      request_query_param {
+        operator = "CONTAINS"
+        value = "password"
+      }
+      request_query_param {
+        operator = "STARTS_WITH"
+        value = "freeform"
+      }
+      target_rule_set = "xss-stable"
+    }
+  }
+  preview = false
+}
+`, context)
+}
+
+func testAccComputeSecurityPolicyRule_withPreconfiguredWafConfig_update(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "policy" {
+  name = "tf-test%{random_suffix}"
+  description = "Global security policy - update"
+}
+
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.policy.name
+  description = "Rule with preconfiguredWafConfig - update"
+  action = "deny"
+  priority = "1000"
+  match {
+    expr {
+      expression = "evaluatePreconfiguredWaf('rce-stable') || evaluatePreconfiguredWaf('xss-stable')"
+    }
+  }
+  preconfigured_waf_config {
+    exclusion {
+      request_uri {
+        operator = "STARTS_WITH"
+        value = "/admin"
+      }
+      target_rule_set = "rce-stable"
+    }
+    exclusion {
+      request_query_param {
+        operator = "CONTAINS"
+        value = "password"
+      }
+      request_query_param {
+        operator = "STARTS_WITH"
+        value = "freeform"
+      }
+      request_query_param {
+        operator = "EQUALS"
+        value = "description"
+      }
+      request_cookie {
+        operator = "CONTAINS"
+        value = "TokenExpired"
+      }
+      target_rule_set = "xss-stable"
+      target_rule_ids = [
+        "owasp-crs-v030001-id941330-xss",
+        "owasp-crs-v030001-id941340-xss",
+      ]
+    }
+  }
+  preview = false
+}
+`, context)
+}
+
+func testAccComputeSecurityPolicyRule_withPreconfiguredWafConfig_clear(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_compute_security_policy" "policy" {
+  name = "tf-test%{random_suffix}"
+  description = "Global security policy - clear"
+}
+
+resource "google_compute_security_policy_rule" "policy_rule" {
+  security_policy = google_compute_security_policy.policy.name
+  description = "Rule with preconfiguredWafConfig - clear"
+  action = "deny"
+  priority = "1000"
+  match {
+    expr {
+      expression = "evaluatePreconfiguredWaf('rce-stable') || evaluatePreconfiguredWaf('xss-stable')"
+    }
+  }
+  preview = false
+}
+`, context) +} diff --git a/google-beta/services/compute/resource_compute_service_attachment.go b/google-beta/services/compute/resource_compute_service_attachment.go index 1fc749a3b4..d1df3c3daa 100644 --- a/google-beta/services/compute/resource_compute_service_attachment.go +++ b/google-beta/services/compute/resource_compute_service_attachment.go @@ -21,6 +21,7 @@ import ( "bytes" "fmt" "log" + "net/http" "reflect" "time" @@ -350,6 +351,7 @@ func resourceComputeServiceAttachmentCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -358,6 +360,7 @@ func resourceComputeServiceAttachmentCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServiceAttachment: %s", err) @@ -410,12 +413,14 @@ func resourceComputeServiceAttachmentRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeServiceAttachment %q", d.Id())) @@ -552,6 +557,7 @@ func resourceComputeServiceAttachmentUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating ServiceAttachment %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -566,6 +572,7 @@ func resourceComputeServiceAttachmentUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -612,6 +619,8 @@ 
func resourceComputeServiceAttachmentDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ServiceAttachment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -621,6 +630,7 @@ func resourceComputeServiceAttachmentDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServiceAttachment") diff --git a/google-beta/services/compute/resource_compute_snapshot.go b/google-beta/services/compute/resource_compute_snapshot.go index bbed4fb3b3..c29123a6a5 100644 --- a/google-beta/services/compute/resource_compute_snapshot.go +++ b/google-beta/services/compute/resource_compute_snapshot.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -226,8 +227,7 @@ can be because the original image had licenses attached (such as a Windows image). 
snapshotEncryptionKey nested object Encrypts the snapshot using a customer-supplied encryption key.`, Elem: &schema.Schema{ - Type: schema.TypeString, - DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Type: schema.TypeString, }, }, "snapshot_id": { @@ -352,6 +352,8 @@ func resourceComputeSnapshotCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + url = regexp.MustCompile("PRE_CREATE_REPLACE_ME").ReplaceAllLiteralString(url, sourceDiskProp.(string)) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -361,6 +363,7 @@ func resourceComputeSnapshotCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Snapshot: %s", err) @@ -413,12 +416,14 @@ func resourceComputeSnapshotRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeSnapshot %q", d.Id())) @@ -530,6 +535,8 @@ func resourceComputeSnapshotUpdate(d *schema.ResourceData, meta interface{}) err return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -543,6 +550,7 @@ func resourceComputeSnapshotUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Snapshot %q: %s", d.Id(), err) @@ -590,6 +598,8 @@ func resourceComputeSnapshotDelete(d *schema.ResourceData, meta 
interface{}) err billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Snapshot %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -599,6 +609,7 @@ func resourceComputeSnapshotDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Snapshot") diff --git a/google-beta/services/compute/resource_compute_ssl_certificate.go b/google-beta/services/compute/resource_compute_ssl_certificate.go index fd04657a73..65f2ee14b1 100644 --- a/google-beta/services/compute/resource_compute_ssl_certificate.go +++ b/google-beta/services/compute/resource_compute_ssl_certificate.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -192,6 +193,7 @@ func resourceComputeSslCertificateCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -200,6 +202,7 @@ func resourceComputeSslCertificateCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating SslCertificate: %s", err) @@ -252,12 +255,14 @@ func resourceComputeSslCertificateRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeSslCertificate %q", d.Id())) @@ -319,6 +324,8 @@ func resourceComputeSslCertificateDelete(d *schema.ResourceData, meta interface{ billingProject 
= bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting SslCertificate %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -328,6 +335,7 @@ func resourceComputeSslCertificateDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "SslCertificate") diff --git a/google-beta/services/compute/resource_compute_ssl_policy.go b/google-beta/services/compute/resource_compute_ssl_policy.go index 96c685818a..f84b6f4c53 100644 --- a/google-beta/services/compute/resource_compute_ssl_policy.go +++ b/google-beta/services/compute/resource_compute_ssl_policy.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "time" @@ -227,6 +228,7 @@ func resourceComputeSslPolicyCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -235,6 +237,7 @@ func resourceComputeSslPolicyCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating SslPolicy: %s", err) @@ -287,12 +290,14 @@ func resourceComputeSslPolicyRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeSslPolicy %q", d.Id())) @@ -379,6 +384,7 @@ func resourceComputeSslPolicyUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating SslPolicy %q: %#v", d.Id(), obj) + 
headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -393,6 +399,7 @@ func resourceComputeSslPolicyUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -439,6 +446,8 @@ func resourceComputeSslPolicyDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting SslPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -448,6 +457,7 @@ func resourceComputeSslPolicyDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "SslPolicy") diff --git a/google-beta/services/compute/resource_compute_subnetwork.go b/google-beta/services/compute/resource_compute_subnetwork.go index d03de8d7bc..f81f049fd1 100644 --- a/google-beta/services/compute/resource_compute_subnetwork.go +++ b/google-beta/services/compute/resource_compute_subnetwork.go @@ -22,6 +22,7 @@ import ( "fmt" "log" "net" + "net/http" "reflect" "time" @@ -496,6 +497,7 @@ func resourceComputeSubnetworkCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -504,6 +506,7 @@ func resourceComputeSubnetworkCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Subnetwork: %s", err) @@ -556,12 +559,14 @@ func resourceComputeSubnetworkRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := 
make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeSubnetwork %q", d.Id())) @@ -667,6 +672,8 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -680,6 +687,7 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Subnetwork %q: %s", d.Id(), err) @@ -709,6 +717,8 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -722,6 +732,7 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Subnetwork %q: %s", d.Id(), err) @@ -792,6 +803,8 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -805,6 +818,7 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { 
return fmt.Errorf("Error updating Subnetwork %q: %s", d.Id(), err) @@ -857,6 +871,8 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -870,6 +886,7 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Subnetwork %q: %s", d.Id(), err) @@ -922,6 +939,8 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -935,6 +954,7 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Subnetwork %q: %s", d.Id(), err) @@ -987,6 +1007,8 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -1000,6 +1022,7 @@ func resourceComputeSubnetworkUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Subnetwork %q: %s", d.Id(), err) @@ -1047,6 +1070,8 @@ func resourceComputeSubnetworkDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] 
Deleting Subnetwork %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1056,6 +1081,7 @@ func resourceComputeSubnetworkDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Subnetwork") diff --git a/google-beta/services/compute/resource_compute_target_grpc_proxy.go b/google-beta/services/compute/resource_compute_target_grpc_proxy.go index 2add806c85..51d1c73ab4 100644 --- a/google-beta/services/compute/resource_compute_target_grpc_proxy.go +++ b/google-beta/services/compute/resource_compute_target_grpc_proxy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -187,6 +188,7 @@ func resourceComputeTargetGrpcProxyCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -195,6 +197,7 @@ func resourceComputeTargetGrpcProxyCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TargetGrpcProxy: %s", err) @@ -247,12 +250,14 @@ func resourceComputeTargetGrpcProxyRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeTargetGrpcProxy %q", d.Id())) @@ -325,6 +330,7 @@ func resourceComputeTargetGrpcProxyUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating TargetGrpcProxy %q: %#v", d.Id(), obj) + headers := 
make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -339,6 +345,7 @@ func resourceComputeTargetGrpcProxyUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -385,6 +392,8 @@ func resourceComputeTargetGrpcProxyDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TargetGrpcProxy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -394,6 +403,7 @@ func resourceComputeTargetGrpcProxyDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TargetGrpcProxy") diff --git a/google-beta/services/compute/resource_compute_target_http_proxy.go b/google-beta/services/compute/resource_compute_target_http_proxy.go index 424516082f..d36d2cc068 100644 --- a/google-beta/services/compute/resource_compute_target_http_proxy.go +++ b/google-beta/services/compute/resource_compute_target_http_proxy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -179,6 +180,7 @@ func resourceComputeTargetHttpProxyCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -187,6 +189,7 @@ func resourceComputeTargetHttpProxyCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TargetHttpProxy: %s", err) @@ -239,12 +242,14 @@ func resourceComputeTargetHttpProxyRead(d *schema.ResourceData, meta 
interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeTargetHttpProxy %q", d.Id())) @@ -314,6 +319,8 @@ func resourceComputeTargetHttpProxyUpdate(d *schema.ResourceData, meta interface return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -327,6 +334,7 @@ func resourceComputeTargetHttpProxyUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetHttpProxy %q: %s", d.Id(), err) @@ -374,6 +382,8 @@ func resourceComputeTargetHttpProxyDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TargetHttpProxy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -383,6 +393,7 @@ func resourceComputeTargetHttpProxyDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TargetHttpProxy") diff --git a/google-beta/services/compute/resource_compute_target_https_proxy.go b/google-beta/services/compute/resource_compute_target_https_proxy.go index 09ae56b80f..80a4baaf44 100644 --- a/google-beta/services/compute/resource_compute_target_https_proxy.go +++ b/google-beta/services/compute/resource_compute_target_https_proxy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -288,6 +289,7 @@ func 
resourceComputeTargetHttpsProxyCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -296,6 +298,7 @@ func resourceComputeTargetHttpsProxyCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TargetHttpsProxy: %s", err) @@ -348,12 +351,14 @@ func resourceComputeTargetHttpsProxyRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeTargetHttpsProxy %q", d.Id())) @@ -448,11 +453,18 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac obj["quicOverride"] = quicOverrideProp } + obj, err = resourceComputeTargetHttpsProxyUpdateEncoder(d, meta, obj) + if err != nil { + return err + } + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/targetHttpsProxies/{{name}}/setQuicOverride") if err != nil { return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -466,6 +478,7 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetHttpsProxy %q: %s", d.Id(), err) @@ -496,11 +509,18 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac obj["sslCertificates"] = 
sslCertificatesProp } + obj, err = resourceComputeTargetHttpsProxyUpdateEncoder(d, meta, obj) + if err != nil { + return err + } + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/targetHttpsProxies/{{name}}/setSslCertificates") if err != nil { return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -514,6 +534,7 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetHttpsProxy %q: %s", d.Id(), err) @@ -538,11 +559,18 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac obj["certificateMap"] = certificateMapProp } + obj, err = resourceComputeTargetHttpsProxyUpdateEncoder(d, meta, obj) + if err != nil { + return err + } + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/targetHttpsProxies/{{name}}/setCertificateMap") if err != nil { return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -556,6 +584,7 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetHttpsProxy %q: %s", d.Id(), err) @@ -580,11 +609,18 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac obj["sslPolicy"] = sslPolicyProp } + obj, err = resourceComputeTargetHttpsProxyUpdateEncoder(d, meta, obj) + if err != nil { + return err + } + url, err := tpgresource.ReplaceVars(d, config, 
"{{ComputeBasePath}}projects/{{project}}/global/targetHttpsProxies/{{name}}/setSslPolicy") if err != nil { return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -598,6 +634,7 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetHttpsProxy %q: %s", d.Id(), err) @@ -622,11 +659,18 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac obj["urlMap"] = urlMapProp } + obj, err = resourceComputeTargetHttpsProxyUpdateEncoder(d, meta, obj) + if err != nil { + return err + } + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/targetHttpsProxies/{{name}}/setUrlMap") if err != nil { return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -640,6 +684,7 @@ func resourceComputeTargetHttpsProxyUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetHttpsProxy %q: %s", d.Id(), err) @@ -687,6 +732,8 @@ func resourceComputeTargetHttpsProxyDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TargetHttpsProxy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -696,6 +743,7 @@ func resourceComputeTargetHttpsProxyDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, "TargetHttpsProxy") @@ -925,6 +973,19 @@ func resourceComputeTargetHttpsProxyEncoder(d *schema.ResourceData, meta interfa return obj, nil } +func resourceComputeTargetHttpsProxyUpdateEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { + + if _, ok := obj["certificateManagerCertificates"]; ok { + // The field certificateManagerCertificates must not be sent to the API directly; it is renamed to `sslCertificates`. + // The API does not allow using both certificate manager certificates and sslCertificates. If that changes + // in the future, the encoder logic should change accordingly, since the two fields would no longer be mutually exclusive. + log.Printf("[DEBUG] converting the field certificateManagerCertificates to sslCertificates before sending the request") + obj["sslCertificates"] = obj["certificateManagerCertificates"] + delete(obj, "certificateManagerCertificates") + } + return obj, nil +} + func resourceComputeTargetHttpsProxyDecoder(d *schema.ResourceData, meta interface{}, res map[string]interface{}) (map[string]interface{}, error) { // Since both sslCertificates and certificateManagerCertificates map to the same API field (sslCertificates), we need to check the types // of certificates that exist in the array and decide whether to change the field to certificateManagerCertificates or not.
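The update encoder in the hunk above moves the value of `certificateManagerCertificates` into `sslCertificates` and drops the original key before the request body is sent. A minimal standalone sketch of that rename, using plain maps outside the provider (the map contents and the helper name `renameCertificateField` are made up for illustration):

```go
package main

import "fmt"

// renameCertificateField is a hypothetical standalone version of the rename
// done by the update encoder: if the request body carries
// certificateManagerCertificates, move its value to sslCertificates
// (the only field the API accepts) and delete the old key.
func renameCertificateField(obj map[string]interface{}) map[string]interface{} {
	if v, ok := obj["certificateManagerCertificates"]; ok {
		obj["sslCertificates"] = v
		delete(obj, "certificateManagerCertificates")
	}
	return obj
}

func main() {
	obj := map[string]interface{}{
		"certificateManagerCertificates": []string{"cert-1", "cert-2"},
		"urlMap":                         "my-url-map",
	}
	obj = renameCertificateField(obj)
	fmt.Println(obj["sslCertificates"]) // [cert-1 cert-2]
	_, ok := obj["certificateManagerCertificates"]
	fmt.Println(ok) // false
}
```

Because the two keys are mutually exclusive in the API today, the rename is a simple move; if the API ever accepted both, the encoder would instead need to merge the two lists.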
diff --git a/google-beta/services/compute/resource_compute_target_instance.go b/google-beta/services/compute/resource_compute_target_instance.go index 8d81be8bfa..e83d2e09b4 100644 --- a/google-beta/services/compute/resource_compute_target_instance.go +++ b/google-beta/services/compute/resource_compute_target_instance.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -202,6 +203,7 @@ func resourceComputeTargetInstanceCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -210,6 +212,7 @@ func resourceComputeTargetInstanceCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TargetInstance: %s", err) @@ -295,12 +298,14 @@ func resourceComputeTargetInstanceRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeTargetInstance %q", d.Id())) @@ -373,6 +378,8 @@ func resourceComputeTargetInstanceUpdate(d *schema.ResourceData, meta interface{ return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -386,6 +393,7 @@ func resourceComputeTargetInstanceUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetInstance %q: %s", d.Id(), err) @@ 
-433,6 +441,8 @@ func resourceComputeTargetInstanceDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TargetInstance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -442,6 +452,7 @@ func resourceComputeTargetInstanceDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TargetInstance") diff --git a/google-beta/services/compute/resource_compute_target_ssl_proxy.go b/google-beta/services/compute/resource_compute_target_ssl_proxy.go index d665feddcc..510068e0ee 100644 --- a/google-beta/services/compute/resource_compute_target_ssl_proxy.go +++ b/google-beta/services/compute/resource_compute_target_ssl_proxy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -208,6 +209,7 @@ func resourceComputeTargetSslProxyCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -216,6 +218,7 @@ func resourceComputeTargetSslProxyCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TargetSslProxy: %s", err) @@ -268,12 +271,14 @@ func resourceComputeTargetSslProxyRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeTargetSslProxy %q", d.Id())) @@ -349,6 +354,8 
@@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -362,6 +369,7 @@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetSslProxy %q: %s", d.Id(), err) @@ -391,6 +399,8 @@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -404,6 +414,7 @@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetSslProxy %q: %s", d.Id(), err) @@ -433,6 +444,8 @@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -446,6 +459,7 @@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetSslProxy %q: %s", d.Id(), err) @@ -475,6 +489,8 @@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := 
tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -488,6 +504,7 @@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetSslProxy %q: %s", d.Id(), err) @@ -517,6 +534,8 @@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -530,6 +549,7 @@ func resourceComputeTargetSslProxyUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetSslProxy %q: %s", d.Id(), err) @@ -577,6 +597,8 @@ func resourceComputeTargetSslProxyDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TargetSslProxy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -586,6 +608,7 @@ func resourceComputeTargetSslProxyDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TargetSslProxy") diff --git a/google-beta/services/compute/resource_compute_target_tcp_proxy.go b/google-beta/services/compute/resource_compute_target_tcp_proxy.go index 48ff3aef93..5144c340bc 100644 --- a/google-beta/services/compute/resource_compute_target_tcp_proxy.go +++ b/google-beta/services/compute/resource_compute_target_tcp_proxy.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -176,6 +177,7 @@ func 
resourceComputeTargetTcpProxyCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -184,6 +186,7 @@ func resourceComputeTargetTcpProxyCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TargetTcpProxy: %s", err) @@ -236,12 +239,14 @@ func resourceComputeTargetTcpProxyRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeTargetTcpProxy %q", d.Id())) @@ -311,6 +316,8 @@ func resourceComputeTargetTcpProxyUpdate(d *schema.ResourceData, meta interface{ return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -324,6 +331,7 @@ func resourceComputeTargetTcpProxyUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetTcpProxy %q: %s", d.Id(), err) @@ -353,6 +361,8 @@ func resourceComputeTargetTcpProxyUpdate(d *schema.ResourceData, meta interface{ return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -366,6 +376,7 @@ func resourceComputeTargetTcpProxyUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating TargetTcpProxy %q: %s", d.Id(), err) @@ -413,6 +424,8 @@ func resourceComputeTargetTcpProxyDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TargetTcpProxy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -422,6 +435,7 @@ func resourceComputeTargetTcpProxyDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TargetTcpProxy") diff --git a/google-beta/services/compute/resource_compute_url_map.go b/google-beta/services/compute/resource_compute_url_map.go index 8d45672927..5ccd4a7035 100644 --- a/google-beta/services/compute/resource_compute_url_map.go +++ b/google-beta/services/compute/resource_compute_url_map.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -2888,6 +2889,7 @@ func resourceComputeUrlMapCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -2896,6 +2898,7 @@ func resourceComputeUrlMapCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating UrlMap: %s", err) @@ -2948,12 +2951,14 @@ func resourceComputeUrlMapRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil 
{ return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeUrlMap %q", d.Id())) @@ -3089,6 +3094,7 @@ func resourceComputeUrlMapUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating UrlMap %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -3103,6 +3109,7 @@ func resourceComputeUrlMapUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -3149,6 +3156,8 @@ func resourceComputeUrlMapDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting UrlMap %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -3158,6 +3167,7 @@ func resourceComputeUrlMapDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "UrlMap") diff --git a/google-beta/services/compute/resource_compute_vpn_gateway.go b/google-beta/services/compute/resource_compute_vpn_gateway.go index f8257463bc..52003dc45b 100644 --- a/google-beta/services/compute/resource_compute_vpn_gateway.go +++ b/google-beta/services/compute/resource_compute_vpn_gateway.go @@ -20,6 +20,7 @@ package compute import ( "fmt" "log" + "net/http" "reflect" "time" @@ -160,6 +161,7 @@ func resourceComputeVpnGatewayCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -168,6 +170,7 @@ func resourceComputeVpnGatewayCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, 
Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating VpnGateway: %s", err) @@ -220,12 +223,14 @@ func resourceComputeVpnGatewayRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeVpnGateway %q", d.Id())) @@ -287,6 +292,8 @@ func resourceComputeVpnGatewayDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting VpnGateway %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -296,6 +303,7 @@ func resourceComputeVpnGatewayDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "VpnGateway") diff --git a/google-beta/services/compute/resource_compute_vpn_tunnel.go b/google-beta/services/compute/resource_compute_vpn_tunnel.go index bf1ef74b48..4e75c7289d 100644 --- a/google-beta/services/compute/resource_compute_vpn_tunnel.go +++ b/google-beta/services/compute/resource_compute_vpn_tunnel.go @@ -22,6 +22,7 @@ import ( "fmt" "log" "net" + "net/http" "reflect" "strings" "time" @@ -489,6 +490,7 @@ func resourceComputeVpnTunnelCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -497,6 +499,7 @@ func resourceComputeVpnTunnelCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: 
headers, }) if err != nil { return fmt.Errorf("Error creating VpnTunnel: %s", err) @@ -609,12 +612,14 @@ func resourceComputeVpnTunnelRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ComputeVpnTunnel %q", d.Id())) @@ -735,6 +740,8 @@ func resourceComputeVpnTunnelUpdate(d *schema.ResourceData, meta interface{}) er return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -748,6 +755,7 @@ func resourceComputeVpnTunnelUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating VpnTunnel %q: %s", d.Id(), err) @@ -795,6 +803,8 @@ func resourceComputeVpnTunnelDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting VpnTunnel %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -804,6 +814,7 @@ func resourceComputeVpnTunnelDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "VpnTunnel") diff --git a/google-beta/services/container/node_config.go b/google-beta/services/container/node_config.go index 39e0e54e3d..47b6ab8788 100644 --- a/google-beta/services/container/node_config.go +++ b/google-beta/services/container/node_config.go @@ -1222,7 +1222,6 @@ func flattenResourceManagerTags(c 
*container.ResourceManagerTags) map[string]int for k, v := range c.Tags { rmt[k] = v } - } return rmt diff --git a/google-beta/services/container/resource_container_cluster.go b/google-beta/services/container/resource_container_cluster.go index 5587c1a9bf..0d3c16e4e7 100644 --- a/google-beta/services/container/resource_container_cluster.go +++ b/google-beta/services/container/resource_container_cluster.go @@ -43,7 +43,7 @@ var ( Type: schema.TypeBool, Optional: true, Computed: true, - Description: `Whether master is accessbile via Google Compute Engine Public IP addresses.`, + Description: `Whether Kubernetes master is accessible via Google Compute Engine Public IPs.`, }, }, } @@ -77,6 +77,7 @@ var ( "addons_config.0.gke_backup_agent_config", "addons_config.0.config_connector_config", "addons_config.0.gcs_fuse_csi_driver_config", + "addons_config.0.stateful_ha_config", "addons_config.0.istio_config", "addons_config.0.kalm_config", } @@ -494,6 +495,23 @@ func ResourceContainerCluster() *schema.Resource { }, }, }, + "stateful_ha_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the Stateful HA addon, which provides automatic configurable failover for stateful applications. Defaults to disabled; set enabled = true to enable.`, + ConflictsWith: []string{"enable_autopilot"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, }, }, }, @@ -1423,6 +1441,11 @@ func ResourceContainerCluster() *schema.Resource { }, }, }, + "resource_manager_tags": { + Type: schema.TypeMap, + Optional: true, + Description: `A map of resource manager tags. Resource manager tag keys and values have the same definition as resource manager tags. Keys must be in the format tagKeys/{tag_key_id}, and values are in the format tagValues/456. 
The field is ignored (both PUT & PATCH) when empty.`, + }, }, }, }, @@ -1773,7 +1796,7 @@ func ResourceContainerCluster() *schema.Resource { "enabled": { Type: schema.TypeBool, Required: true, - Description: `When enabled, services with exterenal ips specified will be allowed.`, + Description: `When enabled, services with external ips specified will be allowed.`, }, }, }, @@ -1891,7 +1914,12 @@ func ResourceContainerCluster() *schema.Resource { ValidateFunc: validation.StringInSlice([]string{"DATAPATH_PROVIDER_UNSPECIFIED", "LEGACY_DATAPATH", "ADVANCED_DATAPATH"}, false), DiffSuppressFunc: tpgresource.EmptyOrDefaultStringSuppress("DATAPATH_PROVIDER_UNSPECIFIED"), }, - + "enable_cilium_clusterwide_network_policy": { + Type: schema.TypeBool, + Optional: true, + Description: `Whether Cilium cluster-wide network policy is enabled on this cluster.`, + Default: false, + }, "enable_intranode_visibility": { Type: schema.TypeBool, Optional: true, @@ -2215,15 +2243,16 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er ClusterTelemetry: expandClusterTelemetry(d.Get("cluster_telemetry")), EnableTpu: d.Get("enable_tpu").(bool), NetworkConfig: &container.NetworkConfig{ - EnableIntraNodeVisibility: d.Get("enable_intranode_visibility").(bool), - DefaultSnatStatus: expandDefaultSnatStatus(d.Get("default_snat_status")), - DatapathProvider: d.Get("datapath_provider").(string), - PrivateIpv6GoogleAccess: d.Get("private_ipv6_google_access").(string), - EnableL4ilbSubsetting: d.Get("enable_l4_ilb_subsetting").(bool), - DnsConfig: expandDnsConfig(d.Get("dns_config")), - GatewayApiConfig: expandGatewayApiConfig(d.Get("gateway_api_config")), - EnableMultiNetworking: d.Get("enable_multi_networking").(bool), - EnableFqdnNetworkPolicy: d.Get("enable_fqdn_network_policy").(bool), + EnableIntraNodeVisibility: d.Get("enable_intranode_visibility").(bool), + DefaultSnatStatus: expandDefaultSnatStatus(d.Get("default_snat_status")), + DatapathProvider: 
d.Get("datapath_provider").(string), + EnableCiliumClusterwideNetworkPolicy: d.Get("enable_cilium_clusterwide_network_policy").(bool), + PrivateIpv6GoogleAccess: d.Get("private_ipv6_google_access").(string), + EnableL4ilbSubsetting: d.Get("enable_l4_ilb_subsetting").(bool), + DnsConfig: expandDnsConfig(d.Get("dns_config")), + GatewayApiConfig: expandGatewayApiConfig(d.Get("gateway_api_config")), + EnableMultiNetworking: d.Get("enable_multi_networking").(bool), + EnableFqdnNetworkPolicy: d.Get("enable_fqdn_network_policy").(bool), }, MasterAuth: expandMasterAuth(d.Get("master_auth")), NotificationConfig: expandNotificationConfig(d.Get("notification_config")), @@ -2745,6 +2774,9 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("datapath_provider", cluster.NetworkConfig.DatapathProvider); err != nil { return fmt.Errorf("Error setting datapath_provider: %s", err) } + if err := d.Set("enable_cilium_clusterwide_network_policy", cluster.NetworkConfig.EnableCiliumClusterwideNetworkPolicy); err != nil { + return fmt.Errorf("Error setting enable_cilium_clusterwide_network_policy: %s", err) + } if err := d.Set("default_snat_status", flattenDefaultSnatStatus(cluster.NetworkConfig.DefaultSnatStatus)); err != nil { return err } @@ -3238,6 +3270,22 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er log.Printf("[INFO] GKE cluster %s FQDN Network Policy has been updated to %v", d.Id(), enabled) } + if d.HasChange("enable_cilium_clusterwide_network_policy") { + enabled := d.Get("enable_cilium_clusterwide_network_policy").(bool) + req := &container.UpdateClusterRequest{ + Update: &container.ClusterUpdate{ + DesiredEnableCiliumClusterwideNetworkPolicy: enabled, + }, + } + updateF := updateFunc(req, "updating cilium clusterwide network policy") + // Call update serially. 
+ if err := transport_tpg.LockedCall(lockKey, updateF); err != nil { + return err + } + + log.Printf("[INFO] GKE cluster %s Cilium Clusterwide Network Policy has been updated to %v", d.Id(), enabled) + } + if d.HasChange("cost_management_config") { c := d.Get("cost_management_config") req := &container.UpdateClusterRequest{ @@ -4103,6 +4151,24 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er log.Printf("[INFO] GKE cluster %s node pool auto config network tags have been updated", d.Id()) } + if d.HasChange("node_pool_auto_config.0.resource_manager_tags") { + rmtags := d.Get("node_pool_auto_config.0.resource_manager_tags") + + req := &container.UpdateClusterRequest{ + Update: &container.ClusterUpdate{ + DesiredNodePoolAutoConfigResourceManagerTags: expandResourceManagerTags(rmtags), + }, + } + + updateF := updateFunc(req, "updating GKE cluster node pool auto config resource manager tags") + // Call update serially. + if err := transport_tpg.LockedCall(lockKey, updateF); err != nil { + return err + } + + log.Printf("[INFO] GKE cluster %s node pool auto config resource manager tags have been updated", d.Id()) + } + d.Partial(false) if d.HasChange("cluster_telemetry") { @@ -4370,6 +4436,14 @@ func expandClusterAddonsConfig(configured interface{}) *container.AddonsConfig { } } + if v, ok := config["stateful_ha_config"]; ok && len(v.([]interface{})) > 0 { + addon := v.([]interface{})[0].(map[string]interface{}) + ac.StatefulHaConfig = &container.StatefulHAConfig{ + Enabled: addon["enabled"].(bool), + ForceSendFields: []string{"Enabled"}, + } + } + if v, ok := config["istio_config"]; ok && len(v.([]interface{})) > 0 { addon := v.([]interface{})[0].(map[string]interface{}) ac.IstioConfig = &container.IstioConfig{ @@ -5351,6 +5425,10 @@ func expandNodePoolAutoConfig(configured interface{}) *container.NodePoolAutoCon npac.NetworkTags = expandNodePoolAutoConfigNetworkTags(v) } + if v, ok := config["resource_manager_tags"]; ok && 
len(v.(map[string]interface{})) > 0 { + npac.ResourceManagerTags = expandResourceManagerTags(v) + } + return npac } @@ -5529,6 +5607,13 @@ func flattenClusterAddonsConfig(c *container.AddonsConfig) []map[string]interfac }, } } + if c.StatefulHaConfig != nil { + result["stateful_ha_config"] = []map[string]interface{}{ + { + "enabled": c.StatefulHaConfig.Enabled, + }, + } + } if c.IstioConfig != nil { result["istio_config"] = []map[string]interface{}{ @@ -5546,6 +5631,7 @@ func flattenClusterAddonsConfig(c *container.AddonsConfig) []map[string]interfac }, } } + return []map[string]interface{}{result} } @@ -6148,6 +6234,9 @@ func flattenNodePoolAutoConfig(c *container.NodePoolAutoConfig) []map[string]int if c.NetworkTags != nil { result["network_tags"] = flattenNodePoolAutoConfigNetworkTags(c.NetworkTags) } + if c.ResourceManagerTags != nil { + result["resource_manager_tags"] = flattenResourceManagerTags(c.ResourceManagerTags) + } return []map[string]interface{}{result} } diff --git a/google-beta/services/container/resource_container_cluster_migratev1.go b/google-beta/services/container/resource_container_cluster_migratev1.go index 1e405a8d01..15aad3b31e 100644 --- a/google-beta/services/container/resource_container_cluster_migratev1.go +++ b/google-beta/services/container/resource_container_cluster_migratev1.go @@ -1564,7 +1564,7 @@ func resourceContainerClusterResourceV1() *schema.Resource { "enabled": { Type: schema.TypeBool, Required: true, - Description: `When enabled, services with exterenal ips specified will be allowed.`, + Description: `When enabled, services with external ips specified will be allowed.`, }, }, }, diff --git a/google-beta/services/container/resource_container_cluster_test.go b/google-beta/services/container/resource_container_cluster_test.go index 1b575f46db..7a62cb9b9d 100644 --- a/google-beta/services/container/resource_container_cluster_test.go +++ b/google-beta/services/container/resource_container_cluster_test.go @@ -2961,6 +2961,62 @@ 
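The new `resource_manager_tags` plumbing pairs an expand step (Terraform state to API type) with a flatten step (API type back to state). Terraform stores the map as `map[string]interface{}` while the API wants plain string-to-string, so the conversion is a straight key-by-key copy in each direction. A self-contained sketch (these helpers are simplified stand-ins for `expandResourceManagerTags` / `flattenResourceManagerTags`, which wrap the map in the API's `ResourceManagerTags` struct):

```go
package main

import "fmt"

// expandTags converts the Terraform-side map into the API-side map.
func expandTags(raw map[string]interface{}) map[string]string {
	tags := make(map[string]string, len(raw))
	for k, v := range raw {
		tags[k] = v.(string)
	}
	return tags
}

// flattenTags converts the API-side map back for storage in state.
func flattenTags(tags map[string]string) map[string]interface{} {
	raw := make(map[string]interface{}, len(tags))
	for k, v := range tags {
		raw[k] = v
	}
	return raw
}

func main() {
	raw := map[string]interface{}{"tagKeys/123": "tagValues/456"}
	fmt.Println(flattenTags(expandTags(raw))["tagKeys/123"])
}
```

Expand and flatten must be exact inverses, otherwise every plan would show a spurious diff on the tag map.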
func TestAccContainerCluster_withAutopilotNetworkTags(t *testing.T) { }) } +func TestAccContainerCluster_withAutopilotResourceManagerTags(t *testing.T) { + t.Parallel() + + pid := envvar.GetTestProjectFromEnv() + + randomSuffix := acctest.RandString(t, 10) + clusterName := fmt.Sprintf("tf-test-cluster-%s", randomSuffix) + clusterNetName := fmt.Sprintf("tf-test-container-net-%s", randomSuffix) + clusterSubnetName := fmt.Sprintf("tf-test-container-subnet-%s", randomSuffix) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + ExternalProviders: map[string]resource.ExternalProvider{ + "time": {}, + }, + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccContainerCluster_withAutopilotResourceManagerTags(pid, clusterName, clusterNetName, clusterSubnetName, randomSuffix), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet("google_container_cluster.with_autopilot", "self_link"), + resource.TestCheckResourceAttrSet("google_container_cluster.with_autopilot", "node_pool_auto_config.0.resource_manager_tags.%"), + ), + }, + { + ResourceName: "google_container_cluster.with_autopilot", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, + }, + { + Config: testAccContainerCluster_withAutopilotResourceManagerTagsUpdate1(pid, clusterName, clusterNetName, clusterSubnetName, randomSuffix), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet("google_container_cluster.with_autopilot", "node_pool_auto_config.0.resource_manager_tags.%"), + ), + }, + { + ResourceName: "google_container_cluster.with_autopilot", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, + }, + { + Config: 
testAccContainerCluster_withAutopilotResourceManagerTagsUpdate2(pid, clusterName, clusterNetName, clusterSubnetName, randomSuffix), + }, + { + ResourceName: "google_container_cluster.with_autopilot", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, + }, + }, + }) +} + func TestAccContainerCluster_withWorkloadIdentityConfig(t *testing.T) { t.Parallel() @@ -3711,7 +3767,7 @@ func TestAccContainerCluster_errorCleanDanglingCluster(t *testing.T) { initConfig := testAccContainerCluster_withInitialCIDR(containerNetName, clusterName) overlapConfig := testAccContainerCluster_withCIDROverlap(initConfig, clusterNameError) - overlapConfigWithTimeout := testAccContainerCluster_withCIDROverlapWithTimeout(initConfig, clusterNameErrorWithTimeout, "40s") + overlapConfigWithTimeout := testAccContainerCluster_withCIDROverlapWithTimeout(initConfig, clusterNameErrorWithTimeout, "1s") checkTaintApplied := func(st *terraform.State) error { // Return an error if there is no tainted (i.e. marked for deletion) cluster. @@ -3757,7 +3813,7 @@ func TestAccContainerCluster_errorCleanDanglingCluster(t *testing.T) { Check: checkTaintApplied, }, { - // Next attempt to create the overlapping cluster with a 40s timeout. This will fail with a different error. + // Next attempt to create the overlapping cluster with a 1s timeout. This will fail with a different error. 
Config: overlapConfigWithTimeout, ExpectError: regexp.MustCompile("timeout while waiting for state to become 'DONE'"), }, @@ -3969,6 +4025,87 @@ func TestAccContainerCluster_withAdvancedDatapath(t *testing.T) { }) } +func TestAccContainerCluster_enableCiliumPolicies(t *testing.T) { + t.Parallel() + + clusterName := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(t, 10)) + networkName := acctest.BootstrapSharedTestNetwork(t, "gke-cluster") + subnetworkName := acctest.BootstrapSubnet(t, "gke-cluster", networkName) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccContainerCluster_withDatapathProvider(clusterName, "ADVANCED_DATAPATH", networkName, subnetworkName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_container_cluster.primary", "enable_cilium_clusterwide_network_policy", "false"), + ), + }, + { + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + { + Config: testAccContainerCluster_enableCiliumPolicies(clusterName, networkName, subnetworkName, true), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_container_cluster.primary", "enable_cilium_clusterwide_network_policy", "true"), + ), + }, + { + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + }, + }) +} + +func TestAccContainerCluster_enableCiliumPolicies_withAutopilot(t *testing.T) { + t.Parallel() + + randomSuffix := acctest.RandString(t, 10) + clusterName := fmt.Sprintf("tf-test-cluster-%s", randomSuffix) + clusterNetName := fmt.Sprintf("tf-test-container-net-%s", randomSuffix) + 
clusterSubnetName := fmt.Sprintf("tf-test-container-subnet-%s", randomSuffix) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccContainerCluster_enableCiliumPolicies_withAutopilot(clusterName, clusterNetName, clusterSubnetName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_container_cluster.with_autopilot", "enable_cilium_clusterwide_network_policy", "false"), + ), + }, + { + ResourceName: "google_container_cluster.with_autopilot", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + { + Config: testAccContainerCluster_enableCiliumPolicies_withAutopilotUpdate(clusterName, clusterNetName, clusterSubnetName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_container_cluster.with_autopilot", "enable_cilium_clusterwide_network_policy", "true"), + ), + }, + { + ResourceName: "google_container_cluster.with_autopilot", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + }, + }) +} + func TestAccContainerCluster_withResourceUsageExportConfig(t *testing.T) { t.Parallel() @@ -4871,6 +5008,9 @@ resource "google_container_cluster" "primary" { gcs_fuse_csi_driver_config { enabled = false } + stateful_ha_config { + enabled = false + } istio_config { disabled = true auth = "AUTH_MUTUAL_TLS" @@ -4936,6 +5076,9 @@ resource "google_container_cluster" "primary" { gcs_fuse_csi_driver_config { enabled = true } + stateful_ha_config { + enabled = true + } istio_config { disabled = false auth = "AUTH_NONE" @@ -8358,19 +8501,10 @@ func testAccContainerCluster_withDatabaseEncryption(clusterName string, kmsData data "google_project" "project" { } -data "google_iam_policy" 
"test_kms_binding" { - binding { - role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" - - members = [ - "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com", - ] - } -} - -resource "google_kms_key_ring_iam_policy" "test_key_ring_iam_policy" { +resource "google_kms_key_ring_iam_member" "test_key_ring_iam_policy" { key_ring_id = "%[1]s" - policy_data = data.google_iam_policy.test_kms_binding.policy_data + role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" } data "google_kms_key_ring_iam_policy" "test_key_ring_iam_policy" { @@ -8414,15 +8548,46 @@ resource "google_container_cluster" "primary" { `, clusterName, datapathProvider, networkName, subnetworkName) } -func testAccContainerCluster_withMasterAuthorizedNetworksDisabled(containerNetName string, clusterName string) string { +func testAccContainerCluster_enableCiliumPolicies(clusterName, networkName, subnetworkName string, enableCilium bool) string { + ciliumPolicies := "" + if enableCilium { + ciliumPolicies = "enable_cilium_clusterwide_network_policy = true" + } else { + ciliumPolicies = "enable_cilium_clusterwide_network_policy = false" + } + + return fmt.Sprintf(` +resource "google_container_cluster" "primary" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + ip_allocation_policy { + } + + datapath_provider = "ADVANCED_DATAPATH" + %s + + release_channel { + channel = "RAPID" + } + + network = "%s" + subnetwork = "%s" + + deletion_protection = false +} +`, clusterName, ciliumPolicies, networkName, subnetworkName) +} + +func testAccContainerCluster_enableCiliumPolicies_withAutopilot(clusterName, networkName, subnetworkName string) string { return fmt.Sprintf(` resource "google_compute_network" "container_network" { - name = "%s" + name = "%[2]s" auto_create_subnetworks = false } resource "google_compute_subnetwork" 
"container_subnetwork" { - name = google_compute_network.container_network.name + name = "%[3]s" network = google_compute_network.container_network.name ip_cidr_range = "10.0.36.0/24" region = "us-central1" @@ -8439,103 +8604,230 @@ resource "google_compute_subnetwork" "container_subnetwork" { } } -resource "google_container_cluster" "with_private_cluster" { - name = "%s" - location = "us-central1-a" - initial_node_count = 1 - - networking_mode = "VPC_NATIVE" - network = google_compute_network.container_network.name - subnetwork = google_compute_subnetwork.container_subnetwork.name +resource "google_container_cluster" "with_autopilot" { + name = "%[1]s" + location = "us-central1" + enable_autopilot = true - private_cluster_config { - enable_private_endpoint = false - enable_private_nodes = true - master_ipv4_cidr_block = "10.42.0.0/28" + release_channel { + channel = "RAPID" } + network = google_compute_network.container_network.name + subnetwork = google_compute_subnetwork.container_subnetwork.name ip_allocation_policy { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name } - deletion_protection = false -} -`, containerNetName, clusterName) -} -func testAccContainerCluster_withEnableKubernetesAlpha(cluster, np, networkName, subnetworkName string) string { - return fmt.Sprintf(` -resource "google_container_cluster" "primary" { - name = "%s" - location = "us-central1-a" - enable_kubernetes_alpha = true + addons_config { + horizontal_pod_autoscaling { + disabled = false + } + } - node_pool { - name = "%s" - initial_node_count = 1 - management { - auto_repair = false - auto_upgrade = false - } + vertical_pod_autoscaling { + enabled = true } - deletion_protection = false - network = "%s" - subnetwork = "%s" -} -`, cluster, np, networkName, subnetworkName) -} -func 
testAccContainerCluster_withoutEnableKubernetesBetaAPIs(clusterName, networkName, subnetworkName string) string { - return fmt.Sprintf(` -data "google_container_engine_versions" "central1a" { - location = "us-central1-a" -} + datapath_provider = "ADVANCED_DATAPATH" -resource "google_container_cluster" "primary" { - name = "%s" - location = "us-central1-a" - min_master_version = data.google_container_engine_versions.central1a.release_channel_latest_version["STABLE"] - initial_node_count = 1 deletion_protection = false - network = "%s" - subnetwork = "%s" + + timeouts { + create = "30m" + update = "40m" + } } `, clusterName, networkName, subnetworkName) } -func testAccContainerCluster_withEnableKubernetesBetaAPIs(cluster, networkName, subnetworkName string) string { +func testAccContainerCluster_enableCiliumPolicies_withAutopilotUpdate(clusterName, networkName, subnetworkName string) string { return fmt.Sprintf(` -data "google_container_engine_versions" "uscentral1a" { - location = "us-central1-a" +resource "google_compute_network" "container_network" { + name = "%[2]s" + auto_create_subnetworks = false } -resource "google_container_cluster" "primary" { - name = "%s" - location = "us-central1-a" - min_master_version = data.google_container_engine_versions.uscentral1a.release_channel_latest_version["STABLE"] - initial_node_count = 1 - deletion_protection = false +resource "google_compute_subnetwork" "container_subnetwork" { + name = "%[3]s" + network = google_compute_network.container_network.name + ip_cidr_range = "10.0.36.0/24" + region = "us-central1" + private_ip_google_access = true - # This feature has been available since GKE 1.27, and currently the only - # supported Beta API is authentication.k8s.io/v1beta1/selfsubjectreviews. - # However, in the future, more Beta APIs will be supported, such as the - # resource.k8s.io group. 
At the same time, some existing Beta APIs will be - # deprecated as the feature will be GAed, and the Beta API will be eventually - # removed. In the case of the SelfSubjectReview API, it is planned to be GAed - # in Kubernetes as of 1.28. And, the Beta API of SelfSubjectReview will be removed - # after at least 3 minor version bumps, so it will be removed as of Kubernetes 1.31 - # or later. - # https://pr.k8s.io/117713 - # https://kubernetes.io/docs/reference/using-api/deprecation-guide/ - # - # The new Beta APIs will be available since GKE 1.28 - # - admissionregistration.k8s.io/v1beta1/validatingadmissionpolicies - # - admissionregistration.k8s.io/v1beta1/validatingadmissionpolicybindings - # https://pr.k8s.io/118644 - # - # Removing the Beta API from Kubernetes will break the test. - # TODO: Replace the Beta API with one available on the version of GKE - # if the test is broken. + secondary_ip_range { + range_name = "pod" + ip_cidr_range = "10.0.0.0/19" + } + + secondary_ip_range { + range_name = "svc" + ip_cidr_range = "10.0.32.0/22" + } +} + +resource "google_container_cluster" "with_autopilot" { + name = "%[1]s" + location = "us-central1" + enable_autopilot = true + + release_channel { + channel = "RAPID" + } + + network = google_compute_network.container_network.name + subnetwork = google_compute_subnetwork.container_subnetwork.name + ip_allocation_policy { + cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name + services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name + } + + addons_config { + horizontal_pod_autoscaling { + disabled = false + } + } + + vertical_pod_autoscaling { + enabled = true + } + + datapath_provider = "ADVANCED_DATAPATH" + enable_cilium_clusterwide_network_policy = true + + deletion_protection = false + + timeouts { + create = "30m" + update = "40m" + } +} +`, clusterName, networkName, subnetworkName) +} + +func 
testAccContainerCluster_withMasterAuthorizedNetworksDisabled(containerNetName string, clusterName string) string { + return fmt.Sprintf(` +resource "google_compute_network" "container_network" { + name = "%s" + auto_create_subnetworks = false +} + +resource "google_compute_subnetwork" "container_subnetwork" { + name = google_compute_network.container_network.name + network = google_compute_network.container_network.name + ip_cidr_range = "10.0.36.0/24" + region = "us-central1" + private_ip_google_access = true + + secondary_ip_range { + range_name = "pod" + ip_cidr_range = "10.0.0.0/19" + } + + secondary_ip_range { + range_name = "svc" + ip_cidr_range = "10.0.32.0/22" + } +} + +resource "google_container_cluster" "with_private_cluster" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + + networking_mode = "VPC_NATIVE" + network = google_compute_network.container_network.name + subnetwork = google_compute_subnetwork.container_subnetwork.name + + private_cluster_config { + enable_private_endpoint = false + enable_private_nodes = true + master_ipv4_cidr_block = "10.42.0.0/28" + } + + ip_allocation_policy { + cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name + services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name + } + deletion_protection = false +} +`, containerNetName, clusterName) +} + +func testAccContainerCluster_withEnableKubernetesAlpha(cluster, np, networkName, subnetworkName string) string { + return fmt.Sprintf(` +resource "google_container_cluster" "primary" { + name = "%s" + location = "us-central1-a" + enable_kubernetes_alpha = true + + node_pool { + name = "%s" + initial_node_count = 1 + management { + auto_repair = false + auto_upgrade = false + } + } + deletion_protection = false + network = "%s" + subnetwork = "%s" +} +`, cluster, np, networkName, subnetworkName) +} + +func 
testAccContainerCluster_withoutEnableKubernetesBetaAPIs(clusterName, networkName, subnetworkName string) string { + return fmt.Sprintf(` +data "google_container_engine_versions" "central1a" { + location = "us-central1-a" +} + +resource "google_container_cluster" "primary" { + name = "%s" + location = "us-central1-a" + min_master_version = data.google_container_engine_versions.central1a.release_channel_latest_version["STABLE"] + initial_node_count = 1 + deletion_protection = false + network = "%s" + subnetwork = "%s" +} +`, clusterName, networkName, subnetworkName) +} + +func testAccContainerCluster_withEnableKubernetesBetaAPIs(cluster, networkName, subnetworkName string) string { + return fmt.Sprintf(` +data "google_container_engine_versions" "uscentral1a" { + location = "us-central1-a" +} + +resource "google_container_cluster" "primary" { + name = "%s" + location = "us-central1-a" + min_master_version = data.google_container_engine_versions.uscentral1a.release_channel_latest_version["STABLE"] + initial_node_count = 1 + deletion_protection = false + + # This feature has been available since GKE 1.27, and currently the only + # supported Beta API is authentication.k8s.io/v1beta1/selfsubjectreviews. + # However, in the future, more Beta APIs will be supported, such as the + # resource.k8s.io group. At the same time, some existing Beta APIs will be + # deprecated as their features reach GA, and the Beta APIs will eventually be + # removed. In the case of the SelfSubjectReview API, it is planned to reach GA + # in Kubernetes 1.28, and its Beta API will be removed + # after at least 3 minor version bumps, i.e. as of Kubernetes 1.31 + # or later. 
+ # https://pr.k8s.io/117713 + # https://kubernetes.io/docs/reference/using-api/deprecation-guide/ + # + # The new Beta APIs will be available since GKE 1.28 + # - admissionregistration.k8s.io/v1beta1/validatingadmissionpolicies + # - admissionregistration.k8s.io/v1beta1/validatingadmissionpolicybindings + # https://pr.k8s.io/118644 + # + # Removing the Beta API from Kubernetes will break the test. + # TODO: Replace the Beta API with one available on the version of GKE + # if the test is broken. enable_k8s_beta_apis { enabled_apis = ["authentication.k8s.io/v1beta1/selfsubjectreviews"] } @@ -8572,12 +8864,10 @@ resource "google_service_account" "service_account" { display_name = "Service Account" } -resource "google_project_iam_binding" "project" { +resource "google_project_iam_member" "project" { project = "%[2]s" role = "roles/container.nodeServiceAccount" - members = [ - "serviceAccount:%[1]s@%[2]s.iam.gserviceaccount.com", - ] + member = "serviceAccount:%[1]s@%[2]s.iam.gserviceaccount.com" }`, serviceAccount, projectID) clusterAutoscaling = fmt.Sprintf(` @@ -9604,31 +9894,35 @@ data "google_project" "project" { project_id = "%[1]s" } -resource "google_project_iam_binding" "tagHoldAdmin" { +resource "google_project_iam_member" "tagHoldAdmin" { project = "%[1]s" role = "roles/resourcemanager.tagHoldAdmin" - members = [ - "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com", - ] + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" } -resource "google_project_iam_binding" "tagUser" { +resource "google_project_iam_member" "tagUser1" { project = "%[1]s" role = "roles/resourcemanager.tagUser" - members = [ - "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com", - "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com", - ] + member = 
"serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource "google_project_iam_member" "tagUser2" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com" - depends_on = [google_project_iam_binding.tagHoldAdmin] + depends_on = [google_project_iam_member.tagHoldAdmin] } resource "time_sleep" "wait_120_seconds" { create_duration = "120s" depends_on = [ - google_project_iam_binding.tagHoldAdmin, - google_project_iam_binding.tagUser + google_project_iam_member.tagHoldAdmin, + google_project_iam_member.tagUser1, + google_project_iam_member.tagUser2, ] } @@ -9680,3 +9974,427 @@ resource "google_container_cluster" "primary" { } `, projectID, randomSuffix, clusterName, networkName, subnetworkName) } + +func testAccContainerCluster_withAutopilotResourceManagerTags(projectID, clusterName, networkName, subnetworkName, randomSuffix string) string { + return fmt.Sprintf(` +data "google_project" "project" { + project_id = "%[1]s" +} + +resource "google_project_iam_member" "tagHoldAdmin" { + project = "%[1]s" + role = "roles/resourcemanager.tagHoldAdmin" + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" +} + +resource "google_project_iam_member" "tagUser1" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource "google_project_iam_member" "tagUser2" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource 
"time_sleep" "wait_120_seconds" { + create_duration = "120s" + + depends_on = [ + google_project_iam_member.tagHoldAdmin, + google_project_iam_member.tagUser1, + google_project_iam_member.tagUser2, + ] +} + +resource "google_tags_tag_key" "key1" { + parent = "projects/%[1]s" + short_name = "foobarbaz1-%[2]s" + description = "For foo/bar1 resources" + purpose = "GCE_FIREWALL" + purpose_data = { + network = "%[1]s/%[4]s" + } + + depends_on = [google_compute_network.container_network] +} + +resource "google_tags_tag_value" "value1" { + parent = "tagKeys/${google_tags_tag_key.key1.name}" + short_name = "foo1-%[2]s" + description = "For foo1 resources" +} + +resource "google_tags_tag_key" "key2" { + parent = "projects/%[1]s" + short_name = "foobarbaz2-%[2]s" + description = "For foo/bar2 resources" + purpose = "GCE_FIREWALL" + purpose_data = { + network = "%[1]s/%[4]s" + } + + depends_on = [ + google_compute_network.container_network, + google_tags_tag_key.key1 + ] +} + +resource "google_tags_tag_value" "value2" { + parent = "tagKeys/${google_tags_tag_key.key2.name}" + short_name = "foo2-%[2]s" + description = "For foo2 resources" +} + +resource "google_compute_network" "container_network" { + name = "%[4]s" + auto_create_subnetworks = false +} + +resource "google_compute_subnetwork" "container_subnetwork" { + name = "%[5]s" + network = google_compute_network.container_network.name + ip_cidr_range = "10.0.36.0/24" + region = "us-central1" + private_ip_google_access = true + + secondary_ip_range { + range_name = "pod" + ip_cidr_range = "10.0.0.0/19" + } + + secondary_ip_range { + range_name = "svc" + ip_cidr_range = "10.0.32.0/22" + } +} + +data "google_container_engine_versions" "uscentral1a" { + location = "us-central1-a" +} + +resource "google_container_cluster" "with_autopilot" { + name = "%[3]s" + location = "us-central1" + min_master_version = data.google_container_engine_versions.uscentral1a.release_channel_latest_version["STABLE"] + enable_autopilot = true + + 
deletion_protection = false + network = google_compute_network.container_network.name + subnetwork = google_compute_subnetwork.container_subnetwork.name + ip_allocation_policy { + cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name + services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name + } + + node_pool_auto_config { + resource_manager_tags = { + "tagKeys/${google_tags_tag_key.key1.name}" = "tagValues/${google_tags_tag_value.value1.name}" + } + } + + addons_config { + horizontal_pod_autoscaling { + disabled = false + } + } + vertical_pod_autoscaling { + enabled = true + } + + timeouts { + create = "30m" + update = "40m" + } + + depends_on = [time_sleep.wait_120_seconds] +} +`, projectID, randomSuffix, clusterName, networkName, subnetworkName) +} + +func testAccContainerCluster_withAutopilotResourceManagerTagsUpdate1(projectID, clusterName, networkName, subnetworkName, randomSuffix string) string { + return fmt.Sprintf(` +data "google_project" "project" { + project_id = "%[1]s" +} + +resource "google_project_iam_member" "tagHoldAdmin" { + project = "%[1]s" + role = "roles/resourcemanager.tagHoldAdmin" + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" +} + +resource "google_project_iam_member" "tagUser1" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource "google_project_iam_member" "tagUser2" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource "time_sleep" "wait_120_seconds" { + create_duration = 
"120s" + + depends_on = [ + google_project_iam_member.tagHoldAdmin, + google_project_iam_member.tagUser1, + google_project_iam_member.tagUser2, + ] +} + +resource "google_tags_tag_key" "key1" { + parent = "projects/%[1]s" + short_name = "foobarbaz1-%[2]s" + description = "For foo/bar1 resources" + purpose = "GCE_FIREWALL" + purpose_data = { + network = "%[1]s/%[4]s" + } + + depends_on = [google_compute_network.container_network] +} + +resource "google_tags_tag_value" "value1" { + parent = "tagKeys/${google_tags_tag_key.key1.name}" + short_name = "foo1-%[2]s" + description = "For foo1 resources" +} + +resource "google_tags_tag_key" "key2" { + parent = "projects/%[1]s" + short_name = "foobarbaz2-%[2]s" + description = "For foo/bar2 resources" + purpose = "GCE_FIREWALL" + purpose_data = { + network = "%[1]s/%[4]s" + } + + depends_on = [ + google_compute_network.container_network, + google_tags_tag_key.key1 + ] +} + +resource "google_tags_tag_value" "value2" { + parent = "tagKeys/${google_tags_tag_key.key2.name}" + short_name = "foo2-%[2]s" + description = "For foo2 resources" +} + +resource "google_compute_network" "container_network" { + name = "%[4]s" + auto_create_subnetworks = false +} + +resource "google_compute_subnetwork" "container_subnetwork" { + name = "%[5]s" + network = google_compute_network.container_network.name + ip_cidr_range = "10.0.36.0/24" + region = "us-central1" + private_ip_google_access = true + + secondary_ip_range { + range_name = "pod" + ip_cidr_range = "10.0.0.0/19" + } + + secondary_ip_range { + range_name = "svc" + ip_cidr_range = "10.0.32.0/22" + } +} + +data "google_container_engine_versions" "uscentral1a" { + location = "us-central1-a" +} + +resource "google_container_cluster" "with_autopilot" { + name = "%[3]s" + location = "us-central1" + min_master_version = data.google_container_engine_versions.uscentral1a.release_channel_latest_version["STABLE"] + enable_autopilot = true + + deletion_protection = false + network = 
google_compute_network.container_network.name + subnetwork = google_compute_subnetwork.container_subnetwork.name + ip_allocation_policy { + cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name + services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name + } + + node_pool_auto_config { + resource_manager_tags = { + "tagKeys/${google_tags_tag_key.key1.name}" = "tagValues/${google_tags_tag_value.value1.name}" + "tagKeys/${google_tags_tag_key.key2.name}" = "tagValues/${google_tags_tag_value.value2.name}" + } + } + + addons_config { + horizontal_pod_autoscaling { + disabled = false + } + } + vertical_pod_autoscaling { + enabled = true + } + + timeouts { + create = "30m" + update = "40m" + } + + depends_on = [time_sleep.wait_120_seconds] +} +`, projectID, randomSuffix, clusterName, networkName, subnetworkName) +} + +func testAccContainerCluster_withAutopilotResourceManagerTagsUpdate2(projectID, clusterName, networkName, subnetworkName, randomSuffix string) string { + return fmt.Sprintf(` +data "google_project" "project" { + project_id = "%[1]s" +} + +resource "google_project_iam_member" "tagHoldAdmin" { + project = "%[1]s" + role = "roles/resourcemanager.tagHoldAdmin" + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" +} + +resource "google_project_iam_member" "tagUser1" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource "google_project_iam_member" "tagUser2" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource 
"time_sleep" "wait_120_seconds" { + create_duration = "120s" + + depends_on = [ + google_project_iam_member.tagHoldAdmin, + google_project_iam_member.tagUser1, + google_project_iam_member.tagUser2, + ] +} + +resource "google_tags_tag_key" "key1" { + parent = "projects/%[1]s" + short_name = "foobarbaz1-%[2]s" + description = "For foo/bar1 resources" + purpose = "GCE_FIREWALL" + purpose_data = { + network = "%[1]s/%[4]s" + } + + depends_on = [google_compute_network.container_network] +} + +resource "google_tags_tag_value" "value1" { + parent = "tagKeys/${google_tags_tag_key.key1.name}" + short_name = "foo1-%[2]s" + description = "For foo1 resources" +} + +resource "google_tags_tag_key" "key2" { + parent = "projects/%[1]s" + short_name = "foobarbaz2-%[2]s" + description = "For foo/bar2 resources" + purpose = "GCE_FIREWALL" + purpose_data = { + network = "%[1]s/%[4]s" + } + + depends_on = [ + google_compute_network.container_network, + google_tags_tag_key.key1 + ] +} + +resource "google_tags_tag_value" "value2" { + parent = "tagKeys/${google_tags_tag_key.key2.name}" + short_name = "foo2-%[2]s" + description = "For foo2 resources" +} + +resource "google_compute_network" "container_network" { + name = "%[4]s" + auto_create_subnetworks = false +} + +resource "google_compute_subnetwork" "container_subnetwork" { + name = "%[5]s" + network = google_compute_network.container_network.name + ip_cidr_range = "10.0.36.0/24" + region = "us-central1" + private_ip_google_access = true + + secondary_ip_range { + range_name = "pod" + ip_cidr_range = "10.0.0.0/19" + } + + secondary_ip_range { + range_name = "svc" + ip_cidr_range = "10.0.32.0/22" + } +} + +data "google_container_engine_versions" "uscentral1a" { + location = "us-central1-a" +} + +resource "google_container_cluster" "with_autopilot" { + name = "%[3]s" + location = "us-central1" + min_master_version = data.google_container_engine_versions.uscentral1a.release_channel_latest_version["STABLE"] + enable_autopilot = true + + 
deletion_protection = false + network = google_compute_network.container_network.name + subnetwork = google_compute_subnetwork.container_subnetwork.name + ip_allocation_policy { + cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name + services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name + } + + addons_config { + horizontal_pod_autoscaling { + disabled = false + } + } + vertical_pod_autoscaling { + enabled = true + } + + timeouts { + create = "30m" + update = "40m" + } + + depends_on = [time_sleep.wait_120_seconds] +} +`, projectID, randomSuffix, clusterName, networkName, subnetworkName) +} diff --git a/google-beta/services/container/resource_container_node_pool_test.go b/google-beta/services/container/resource_container_node_pool_test.go index bf6f343d39..4e7af7b04c 100644 --- a/google-beta/services/container/resource_container_node_pool_test.go +++ b/google-beta/services/container/resource_container_node_pool_test.go @@ -513,7 +513,6 @@ func TestAccContainerNodePool_withSandboxConfig(t *testing.T) { } func TestAccContainerNodePool_withKubeletConfig(t *testing.T) { - t.Skipf("Skipping test %s due to https://github.com/hashicorp/terraform-provider-google/issues/16064", t.Name()) t.Parallel() cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(t, 10)) @@ -527,7 +526,7 @@ func TestAccContainerNodePool_withKubeletConfig(t *testing.T) { CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccContainerNodePool_withKubeletConfig(cluster, np, "static", "100us", networkName, subnetworkName, true, 2048), + Config: testAccContainerNodePool_withKubeletConfig(cluster, np, "static", "100ms", networkName, subnetworkName, true, 2048), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("google_container_node_pool.with_kubelet_config", "node_config.0.kubelet_config.0.cpu_cfs_quota", 
"true"), @@ -852,7 +851,6 @@ func TestAccContainerNodePool_withBootDiskKmsKey(t *testing.T) { } func TestAccContainerNodePool_withUpgradeSettings(t *testing.T) { - t.Skipf("Skipping test %s due to https://github.com/hashicorp/terraform-provider-google/issues/16064", t.Name()) t.Parallel() cluster := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(t, 10)) @@ -4244,31 +4242,35 @@ data "google_project" "project" { project_id = "%[1]s" } -resource "google_project_iam_binding" "tagHoldAdmin" { +resource "google_project_iam_member" "tagHoldAdmin" { project = "%[1]s" role = "roles/resourcemanager.tagHoldAdmin" - members = [ - "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com", - ] + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" } -resource "google_project_iam_binding" "tagUser" { +resource "google_project_iam_member" "tagUser1" { project = "%[1]s" role = "roles/resourcemanager.tagUser" - members = [ - "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com", - "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com", - ] + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" - depends_on = [google_project_iam_binding.tagHoldAdmin] + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource "google_project_iam_member" "tagUser2" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] } resource "time_sleep" "wait_120_seconds" { create_duration = "120s" depends_on = [ - google_project_iam_binding.tagHoldAdmin, - google_project_iam_binding.tagUser + google_project_iam_member.tagHoldAdmin, + 
google_project_iam_member.tagUser1, + google_project_iam_member.tagUser2, ] } @@ -4360,31 +4362,35 @@ data "google_project" "project" { project_id = "%[1]s" } -resource "google_project_iam_binding" "tagHoldAdmin" { +resource "google_project_iam_member" "tagHoldAdmin" { project = "%[1]s" role = "roles/resourcemanager.tagHoldAdmin" - members = [ - "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com", - ] + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" } -resource "google_project_iam_binding" "tagUser" { +resource "google_project_iam_member" "tagUser1" { project = "%[1]s" role = "roles/resourcemanager.tagUser" - members = [ - "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com", - "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com", - ] + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" - depends_on = [google_project_iam_binding.tagHoldAdmin] + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource "google_project_iam_member" "tagUser2" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] } resource "time_sleep" "wait_120_seconds" { create_duration = "120s" depends_on = [ - google_project_iam_binding.tagHoldAdmin, - google_project_iam_binding.tagUser + google_project_iam_member.tagHoldAdmin, + google_project_iam_member.tagUser1, + google_project_iam_member.tagUser2, ] } @@ -4477,31 +4483,35 @@ data "google_project" "project" { project_id = "%[1]s" } -resource "google_project_iam_binding" "tagHoldAdmin" { +resource "google_project_iam_member" "tagHoldAdmin" { project = "%[1]s" role = 
"roles/resourcemanager.tagHoldAdmin" - members = [ - "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com", - ] + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" } -resource "google_project_iam_binding" "tagUser" { +resource "google_project_iam_member" "tagUser1" { project = "%[1]s" role = "roles/resourcemanager.tagUser" - members = [ - "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com", - "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com", - ] + member = "serviceAccount:service-${data.google_project.project.number}@container-engine-robot.iam.gserviceaccount.com" + + depends_on = [google_project_iam_member.tagHoldAdmin] +} + +resource "google_project_iam_member" "tagUser2" { + project = "%[1]s" + role = "roles/resourcemanager.tagUser" + member = "serviceAccount:${data.google_project.project.number}@cloudservices.gserviceaccount.com" - depends_on = [google_project_iam_binding.tagHoldAdmin] + depends_on = [google_project_iam_member.tagHoldAdmin] } resource "time_sleep" "wait_120_seconds" { create_duration = "120s" depends_on = [ - google_project_iam_binding.tagHoldAdmin, - google_project_iam_binding.tagUser + google_project_iam_member.tagHoldAdmin, + google_project_iam_member.tagUser1, + google_project_iam_member.tagUser2, ] } diff --git a/google-beta/services/containeranalysis/resource_container_analysis_note.go b/google-beta/services/containeranalysis/resource_container_analysis_note.go index 31345e32a2..647bfab49e 100644 --- a/google-beta/services/containeranalysis/resource_container_analysis_note.go +++ b/google-beta/services/containeranalysis/resource_container_analysis_note.go @@ -20,6 +20,7 @@ package containeranalysis import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -256,6 +257,7 @@ func resourceContainerAnalysisNoteCreate(d 
*schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -264,6 +266,7 @@ func resourceContainerAnalysisNoteCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Note: %s", err) @@ -306,12 +309,14 @@ func resourceContainerAnalysisNoteRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ContainerAnalysisNote %q", d.Id())) @@ -438,6 +443,7 @@ func resourceContainerAnalysisNoteUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating Note %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("short_description") { @@ -485,6 +491,7 @@ func resourceContainerAnalysisNoteUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -532,6 +539,8 @@ func resourceContainerAnalysisNoteDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Note %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -541,6 +550,7 @@ func resourceContainerAnalysisNoteDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Note") diff --git 
a/google-beta/services/containeranalysis/resource_container_analysis_occurrence.go b/google-beta/services/containeranalysis/resource_container_analysis_occurrence.go index a9dd34d119..9353d304f3 100644 --- a/google-beta/services/containeranalysis/resource_container_analysis_occurrence.go +++ b/google-beta/services/containeranalysis/resource_container_analysis_occurrence.go @@ -20,6 +20,7 @@ package containeranalysis import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -238,6 +239,7 @@ func resourceContainerAnalysisOccurrenceCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -246,6 +248,7 @@ func resourceContainerAnalysisOccurrenceCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Occurrence: %s", err) @@ -291,12 +294,14 @@ func resourceContainerAnalysisOccurrenceRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ContainerAnalysisOccurrence %q", d.Id())) @@ -393,6 +398,7 @@ func resourceContainerAnalysisOccurrenceUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating Occurrence %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("remediation") { @@ -424,6 +430,7 @@ func resourceContainerAnalysisOccurrenceUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -471,6 +478,8 @@ func 
resourceContainerAnalysisOccurrenceDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Occurrence %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -480,6 +489,7 @@ func resourceContainerAnalysisOccurrenceDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Occurrence") diff --git a/google-beta/services/containerattached/resource_container_attached_cluster.go b/google-beta/services/containerattached/resource_container_attached_cluster.go index 6251947935..4f80a331b5 100644 --- a/google-beta/services/containerattached/resource_container_attached_cluster.go +++ b/google-beta/services/containerattached/resource_container_attached_cluster.go @@ -20,6 +20,7 @@ package containerattached import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -508,6 +509,7 @@ func resourceContainerAttachedClusterCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -516,6 +518,7 @@ func resourceContainerAttachedClusterCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Cluster: %s", err) @@ -582,12 +585,14 @@ func resourceContainerAttachedClusterRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("ContainerAttachedCluster %q", d.Id())) @@ -756,6 +761,7 @@ func resourceContainerAttachedClusterUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating Cluster %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -852,6 +858,7 @@ func resourceContainerAttachedClusterUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -899,6 +906,7 @@ func resourceContainerAttachedClusterDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) if v, ok := d.GetOk("deletion_policy"); ok { if v == "DELETE_IGNORE_ERRORS" { url, err = transport_tpg.AddQueryParams(url, map[string]string{"ignore_errors": "true"}) @@ -917,6 +925,7 @@ func resourceContainerAttachedClusterDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Cluster") diff --git a/google-beta/services/corebilling/resource_billing_project_info.go b/google-beta/services/corebilling/resource_billing_project_info.go index af372c5057..83b76b2b13 100644 --- a/google-beta/services/corebilling/resource_billing_project_info.go +++ b/google-beta/services/corebilling/resource_billing_project_info.go @@ -20,6 +20,7 @@ package corebilling import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -110,6 +111,7 @@ func resourceCoreBillingProjectInfoCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PUT", @@ -118,6 +120,7 @@ func resourceCoreBillingProjectInfoCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if 
err != nil { return fmt.Errorf("Error creating ProjectInfo: %s", err) @@ -160,12 +163,14 @@ func resourceCoreBillingProjectInfoRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("CoreBillingProjectInfo %q", d.Id())) @@ -228,6 +233,7 @@ func resourceCoreBillingProjectInfoUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating ProjectInfo %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -242,6 +248,7 @@ func resourceCoreBillingProjectInfoUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -280,6 +287,8 @@ func resourceCoreBillingProjectInfoDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ProjectInfo %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -289,6 +298,7 @@ func resourceCoreBillingProjectInfoDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ProjectInfo") diff --git a/google-beta/services/databasemigrationservice/resource_database_migration_service_connection_profile.go b/google-beta/services/databasemigrationservice/resource_database_migration_service_connection_profile.go index cd899a4f43..73d8f714c0 100644 --- 
a/google-beta/services/databasemigrationservice/resource_database_migration_service_connection_profile.go +++ b/google-beta/services/databasemigrationservice/resource_database_migration_service_connection_profile.go @@ -20,6 +20,7 @@ package databasemigrationservice import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -849,6 +850,7 @@ func resourceDatabaseMigrationServiceConnectionProfileCreate(d *schema.ResourceD billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -857,6 +859,7 @@ func resourceDatabaseMigrationServiceConnectionProfileCreate(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ConnectionProfile: %s", err) @@ -909,12 +912,14 @@ func resourceDatabaseMigrationServiceConnectionProfileRead(d *schema.ResourceDat billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DatabaseMigrationServiceConnectionProfile %q", d.Id())) @@ -1035,6 +1040,7 @@ func resourceDatabaseMigrationServiceConnectionProfileUpdate(d *schema.ResourceD } log.Printf("[DEBUG] Updating ConnectionProfile %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -1086,6 +1092,7 @@ func resourceDatabaseMigrationServiceConnectionProfileUpdate(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1133,6 +1140,8 @@ func resourceDatabaseMigrationServiceConnectionProfileDelete(d *schema.ResourceD billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] 
Deleting ConnectionProfile %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1142,6 +1151,7 @@ func resourceDatabaseMigrationServiceConnectionProfileDelete(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ConnectionProfile") diff --git a/google-beta/services/databasemigrationservice/resource_database_migration_service_private_connection.go b/google-beta/services/databasemigrationservice/resource_database_migration_service_private_connection.go index aecce1a20a..a4b3529883 100644 --- a/google-beta/services/databasemigrationservice/resource_database_migration_service_private_connection.go +++ b/google-beta/services/databasemigrationservice/resource_database_migration_service_private_connection.go @@ -20,6 +20,7 @@ package databasemigrationservice import ( "fmt" "log" + "net/http" "reflect" "time" @@ -207,6 +208,7 @@ func resourceDatabaseMigrationServicePrivateConnectionCreate(d *schema.ResourceD billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -215,6 +217,7 @@ func resourceDatabaseMigrationServicePrivateConnectionCreate(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PrivateConnection: %s", err) @@ -267,12 +270,14 @@ func resourceDatabaseMigrationServicePrivateConnectionRead(d *schema.ResourceDat billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DatabaseMigrationServicePrivateConnection %q", 
d.Id())) @@ -342,6 +347,8 @@ func resourceDatabaseMigrationServicePrivateConnectionDelete(d *schema.ResourceD billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PrivateConnection %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -351,6 +358,7 @@ func resourceDatabaseMigrationServicePrivateConnectionDelete(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "PrivateConnection") diff --git a/google-beta/services/datacatalog/resource_data_catalog_entry.go b/google-beta/services/datacatalog/resource_data_catalog_entry.go index 5b4cbec811..41ac95b21b 100644 --- a/google-beta/services/datacatalog/resource_data_catalog_entry.go +++ b/google-beta/services/datacatalog/resource_data_catalog_entry.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -336,6 +337,7 @@ func resourceDataCatalogEntryCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -344,6 +346,7 @@ func resourceDataCatalogEntryCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Entry: %s", err) @@ -387,12 +390,14 @@ func resourceDataCatalogEntryRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataCatalogEntry %q", d.Id())) @@ -497,6 
+502,7 @@ func resourceDataCatalogEntryUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Entry %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("linked_resource") { @@ -551,6 +557,7 @@ func resourceDataCatalogEntryUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -588,6 +595,8 @@ func resourceDataCatalogEntryDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Entry %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -597,6 +606,7 @@ func resourceDataCatalogEntryDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Entry") diff --git a/google-beta/services/datacatalog/resource_data_catalog_entry_group.go b/google-beta/services/datacatalog/resource_data_catalog_entry_group.go index d0a9e962db..abfd0be709 100644 --- a/google-beta/services/datacatalog/resource_data_catalog_entry_group.go +++ b/google-beta/services/datacatalog/resource_data_catalog_entry_group.go @@ -20,6 +20,7 @@ package datacatalog import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -137,6 +138,7 @@ func resourceDataCatalogEntryGroupCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -145,6 +147,7 @@ func resourceDataCatalogEntryGroupCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EntryGroup: %s", err) @@ 
-190,12 +193,14 @@ func resourceDataCatalogEntryGroupRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataCatalogEntryGroup %q", d.Id())) @@ -261,6 +266,7 @@ func resourceDataCatalogEntryGroupUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating EntryGroup %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -292,6 +298,7 @@ func resourceDataCatalogEntryGroupUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -332,6 +339,8 @@ func resourceDataCatalogEntryGroupDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EntryGroup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -341,6 +350,7 @@ func resourceDataCatalogEntryGroupDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EntryGroup") diff --git a/google-beta/services/datacatalog/resource_data_catalog_policy_tag.go b/google-beta/services/datacatalog/resource_data_catalog_policy_tag.go index 74ac14a050..310e404bdc 100644 --- a/google-beta/services/datacatalog/resource_data_catalog_policy_tag.go +++ b/google-beta/services/datacatalog/resource_data_catalog_policy_tag.go @@ -20,6 +20,7 @@ package datacatalog import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -135,6 +136,7 @@ func resourceDataCatalogPolicyTagCreate(d 
*schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -143,6 +145,7 @@ func resourceDataCatalogPolicyTagCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PolicyTag: %s", err) @@ -182,12 +185,14 @@ func resourceDataCatalogPolicyTagRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataCatalogPolicyTag %q", d.Id())) @@ -247,6 +252,7 @@ func resourceDataCatalogPolicyTagUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating PolicyTag %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -282,6 +288,7 @@ func resourceDataCatalogPolicyTagUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -316,6 +323,8 @@ func resourceDataCatalogPolicyTagDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PolicyTag %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -325,6 +334,7 @@ func resourceDataCatalogPolicyTagDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "PolicyTag") diff --git 
a/google-beta/services/datacatalog/resource_data_catalog_tag.go b/google-beta/services/datacatalog/resource_data_catalog_tag.go index 0a8aad752c..24de1c966e 100644 --- a/google-beta/services/datacatalog/resource_data_catalog_tag.go +++ b/google-beta/services/datacatalog/resource_data_catalog_tag.go @@ -20,6 +20,7 @@ package datacatalog import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -188,6 +189,7 @@ func resourceDataCatalogTagCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -196,6 +198,7 @@ func resourceDataCatalogTagCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Tag: %s", err) @@ -235,12 +238,14 @@ func resourceDataCatalogTagRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataCatalogTag %q", d.Id())) @@ -311,6 +316,7 @@ func resourceDataCatalogTagUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating Tag %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("fields") { @@ -342,6 +348,7 @@ func resourceDataCatalogTagUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -376,6 +383,8 @@ func resourceDataCatalogTagDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + 
log.Printf("[DEBUG] Deleting Tag %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -385,6 +394,7 @@ func resourceDataCatalogTagDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Tag") diff --git a/google-beta/services/datacatalog/resource_data_catalog_tag_template.go b/google-beta/services/datacatalog/resource_data_catalog_tag_template.go index 1e833ddea9..dfa516a7eb 100644 --- a/google-beta/services/datacatalog/resource_data_catalog_tag_template.go +++ b/google-beta/services/datacatalog/resource_data_catalog_tag_template.go @@ -20,6 +20,7 @@ package datacatalog import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -293,6 +294,7 @@ func resourceDataCatalogTagTemplateCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -301,6 +303,7 @@ func resourceDataCatalogTagTemplateCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TagTemplate: %s", err) @@ -346,12 +349,14 @@ func resourceDataCatalogTagTemplateRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataCatalogTagTemplate %q", d.Id())) @@ -417,6 +422,7 @@ func resourceDataCatalogTagTemplateUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating TagTemplate %q: %#v", 
d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -445,6 +451,7 @@ func resourceDataCatalogTagTemplateUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -615,6 +622,8 @@ func resourceDataCatalogTagTemplateDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TagTemplate %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -624,6 +633,7 @@ func resourceDataCatalogTagTemplateDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TagTemplate") diff --git a/google-beta/services/datacatalog/resource_data_catalog_taxonomy.go b/google-beta/services/datacatalog/resource_data_catalog_taxonomy.go index 67329e2121..0c3c0ead93 100644 --- a/google-beta/services/datacatalog/resource_data_catalog_taxonomy.go +++ b/google-beta/services/datacatalog/resource_data_catalog_taxonomy.go @@ -20,6 +20,7 @@ package datacatalog import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -151,6 +152,7 @@ func resourceDataCatalogTaxonomyCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -159,6 +161,7 @@ func resourceDataCatalogTaxonomyCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Taxonomy: %s", err) @@ -204,12 +207,14 @@ func resourceDataCatalogTaxonomyRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := 
make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataCatalogTaxonomy %q", d.Id())) @@ -276,6 +281,7 @@ func resourceDataCatalogTaxonomyUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Taxonomy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -311,6 +317,7 @@ func resourceDataCatalogTaxonomyUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -351,6 +358,8 @@ func resourceDataCatalogTaxonomyDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Taxonomy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -360,6 +369,7 @@ func resourceDataCatalogTaxonomyDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Taxonomy") diff --git a/google-beta/services/dataflow/resource_dataflow_flex_template_job_test.go b/google-beta/services/dataflow/resource_dataflow_flex_template_job_test.go index 562e05807c..a40d8e4a8f 100644 --- a/google-beta/services/dataflow/resource_dataflow_flex_template_job_test.go +++ b/google-beta/services/dataflow/resource_dataflow_flex_template_job_test.go @@ -304,6 +304,14 @@ func TestAccDataflowFlexTemplateJob_withKmsKey(t *testing.T) { bucket := "tf-test-dataflow-bucket-" + randStr topic := "tf-test-topic" + randStr + if acctest.BootstrapPSARole(t, "service-", "compute-system", "roles/cloudkms.cryptoKeyEncrypterDecrypter") { + 
t.Fatal("Stopping the test because a role was added to the policy.") + } + + if acctest.BootstrapPSARole(t, "service-", "dataflow-service-producer-prod", "roles/cloudkms.cryptoKeyEncrypterDecrypter") { + t.Fatal("Stopping the test because a role was added to the policy.") + } + acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), @@ -1323,18 +1331,6 @@ resource "google_storage_bucket_object" "schema" { EOF } -resource "google_project_iam_member" "kms-project-dataflow-binding" { - project = data.google_project.project.project_id - role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" - member = "serviceAccount:service-${data.google_project.project.number}@dataflow-service-producer-prod.iam.gserviceaccount.com" -} - -resource "google_project_iam_member" "kms-project-compute-binding" { - project = data.google_project.project.project_id - role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" - member = "serviceAccount:service-${data.google_project.project.number}@compute-system.iam.gserviceaccount.com" -} - resource "google_kms_key_ring" "keyring" { name = "%s" location = "global" diff --git a/google-beta/services/dataflow/resource_dataflow_job_test.go b/google-beta/services/dataflow/resource_dataflow_job_test.go index a66dedea60..f9e9c997ca 100644 --- a/google-beta/services/dataflow/resource_dataflow_job_test.go +++ b/google-beta/services/dataflow/resource_dataflow_job_test.go @@ -423,6 +423,10 @@ func TestAccDataflowJob_withKmsKey(t *testing.T) { t.Fatal("Stopping the test because a role was added to the policy.") } + if acctest.BootstrapPSARole(t, "service-", "dataflow-service-producer-prod", "roles/cloudkms.cryptoKeyEncrypterDecrypter") { + t.Fatal("Stopping the test because a role was added to the policy.") + } + acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), @@ -1195,15 
+1199,6 @@ resource "google_dataflow_job" "big_data" { func testAccDataflowJob_kms(key_ring, crypto_key, bucket, job, zone string) string { return fmt.Sprintf(` -data "google_project" "project" { -} - -resource "google_project_iam_member" "kms-project-dataflow-binding" { - project = data.google_project.project.project_id - role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" - member = "serviceAccount:service-${data.google_project.project.number}@dataflow-service-producer-prod.iam.gserviceaccount.com" -} - resource "google_kms_key_ring" "keyring" { name = "%s" location = "global" diff --git a/google-beta/services/dataform/resource_dataform_repository.go b/google-beta/services/dataform/resource_dataform_repository.go index 71233080da..0d46ba2052 100644 --- a/google-beta/services/dataform/resource_dataform_repository.go +++ b/google-beta/services/dataform/resource_dataform_repository.go @@ -20,6 +20,7 @@ package dataform import ( "fmt" "log" + "net/http" "reflect" "time" @@ -262,6 +263,7 @@ func resourceDataformRepositoryCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -270,6 +272,7 @@ func resourceDataformRepositoryCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Repository: %s", err) @@ -312,12 +315,14 @@ func resourceDataformRepositoryRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataformRepository %q", d.Id())) @@ -417,6 +422,7 @@ func 
resourceDataformRepositoryUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Repository %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -431,6 +437,7 @@ func resourceDataformRepositoryUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -469,6 +476,8 @@ func resourceDataformRepositoryDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Repository %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -478,6 +487,7 @@ func resourceDataformRepositoryDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Repository") diff --git a/google-beta/services/dataform/resource_dataform_repository_release_config.go b/google-beta/services/dataform/resource_dataform_repository_release_config.go index 35134cb377..5a79bcdd8e 100644 --- a/google-beta/services/dataform/resource_dataform_repository_release_config.go +++ b/google-beta/services/dataform/resource_dataform_repository_release_config.go @@ -20,6 +20,7 @@ package dataform import ( "fmt" "log" + "net/http" "reflect" "time" @@ -246,6 +247,7 @@ func resourceDataformRepositoryReleaseConfigCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -254,6 +256,7 @@ func resourceDataformRepositoryReleaseConfigCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: 
headers, }) if err != nil { return fmt.Errorf("Error creating RepositoryReleaseConfig: %s", err) @@ -296,12 +299,14 @@ func resourceDataformRepositoryReleaseConfigRead(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataformRepositoryReleaseConfig %q", d.Id())) @@ -380,6 +385,7 @@ func resourceDataformRepositoryReleaseConfigUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating RepositoryReleaseConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -394,6 +400,7 @@ func resourceDataformRepositoryReleaseConfigUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -432,6 +439,8 @@ func resourceDataformRepositoryReleaseConfigDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RepositoryReleaseConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -441,6 +450,7 @@ func resourceDataformRepositoryReleaseConfigDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RepositoryReleaseConfig") diff --git a/google-beta/services/dataform/resource_dataform_repository_workflow_config.go b/google-beta/services/dataform/resource_dataform_repository_workflow_config.go index 08b161e35f..99d3f84879 100644 --- 
a/google-beta/services/dataform/resource_dataform_repository_workflow_config.go +++ b/google-beta/services/dataform/resource_dataform_repository_workflow_config.go @@ -20,6 +20,7 @@ package dataform import ( "fmt" "log" + "net/http" "reflect" "time" @@ -254,6 +255,7 @@ func resourceDataformRepositoryWorkflowConfigCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -262,6 +264,7 @@ func resourceDataformRepositoryWorkflowConfigCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RepositoryWorkflowConfig: %s", err) @@ -304,12 +307,14 @@ func resourceDataformRepositoryWorkflowConfigRead(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataformRepositoryWorkflowConfig %q", d.Id())) @@ -388,6 +393,7 @@ func resourceDataformRepositoryWorkflowConfigUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating RepositoryWorkflowConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -402,6 +408,7 @@ func resourceDataformRepositoryWorkflowConfigUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -440,6 +447,8 @@ func resourceDataformRepositoryWorkflowConfigDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] 
Deleting RepositoryWorkflowConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -449,6 +458,7 @@ func resourceDataformRepositoryWorkflowConfigDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RepositoryWorkflowConfig") diff --git a/google-beta/services/datafusion/resource_data_fusion_instance.go b/google-beta/services/datafusion/resource_data_fusion_instance.go index 75d47beee1..db26bd2d8a 100644 --- a/google-beta/services/datafusion/resource_data_fusion_instance.go +++ b/google-beta/services/datafusion/resource_data_fusion_instance.go @@ -20,6 +20,7 @@ package datafusion import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -480,6 +481,7 @@ func resourceDataFusionInstanceCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -488,6 +490,7 @@ func resourceDataFusionInstanceCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Instance: %s", err) @@ -554,12 +557,14 @@ func resourceDataFusionInstanceRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataFusionInstance %q", d.Id())) @@ -733,6 +738,7 @@ func resourceDataFusionInstanceUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) + headers 
:= make(http.Header) updateMask := []string{} if d.HasChange("enable_stackdriver_logging") { @@ -768,6 +774,7 @@ func resourceDataFusionInstanceUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -814,6 +821,8 @@ func resourceDataFusionInstanceDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -823,6 +832,7 @@ func resourceDataFusionInstanceDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Instance") diff --git a/google-beta/services/datafusion/resource_data_fusion_instance_test.go b/google-beta/services/datafusion/resource_data_fusion_instance_test.go index 8799c1452f..793fbe3bcd 100644 --- a/google-beta/services/datafusion/resource_data_fusion_instance_test.go +++ b/google-beta/services/datafusion/resource_data_fusion_instance_test.go @@ -48,7 +48,7 @@ resource "google_data_fusion_instance" "foobar" { region = "us-central1" type = "BASIC" # See supported versions here https://cloud.google.com/data-fusion/docs/support/version-support-policy - version = "6.7.0" + version = "6.9.1" # Mark for testing to avoid service networking connection usage that is not cleaned up options = { prober_test_run = "true" @@ -74,7 +74,7 @@ resource "google_data_fusion_instance" "foobar" { label1 = "value1" label2 = "value2" } - version = "6.8.0" + version = "6.9.2" accelerators { accelerator_type = "CCAI_INSIGHTS" @@ -160,12 +160,12 @@ func TestAccDataFusionInstanceVersion_dataFusionInstanceUpdate(t *testing.T) { context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "version": "6.7.2", + 
"version": "6.9.1", } contextUpdate := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "version": "6.8.0", + "version": "6.9.2", } acctest.VcrTest(t, resource.TestCase{ diff --git a/google-beta/services/datalossprevention/resource_data_loss_prevention_deidentify_template.go b/google-beta/services/datalossprevention/resource_data_loss_prevention_deidentify_template.go index ea0e87c1e2..482891c6d0 100644 --- a/google-beta/services/datalossprevention/resource_data_loss_prevention_deidentify_template.go +++ b/google-beta/services/datalossprevention/resource_data_loss_prevention_deidentify_template.go @@ -20,6 +20,7 @@ package datalossprevention import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -4253,6 +4254,7 @@ func resourceDataLossPreventionDeidentifyTemplateCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -4261,6 +4263,7 @@ func resourceDataLossPreventionDeidentifyTemplateCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DeidentifyTemplate: %s", err) @@ -4300,12 +4303,14 @@ func resourceDataLossPreventionDeidentifyTemplateRead(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataLossPreventionDeidentifyTemplate %q", d.Id())) @@ -4385,6 +4390,7 @@ func resourceDataLossPreventionDeidentifyTemplateUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating DeidentifyTemplate %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if 
d.HasChange("description") { @@ -4420,6 +4426,7 @@ func resourceDataLossPreventionDeidentifyTemplateUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -4454,6 +4461,8 @@ func resourceDataLossPreventionDeidentifyTemplateDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting DeidentifyTemplate %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -4463,6 +4472,7 @@ func resourceDataLossPreventionDeidentifyTemplateDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "DeidentifyTemplate") diff --git a/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config.go b/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config.go new file mode 100644 index 0000000000..15566f2ae2 --- /dev/null +++ b/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config.go @@ -0,0 +1,2174 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. 
+// +// ---------------------------------------------------------------------------- + +package datalossprevention + +import ( + "fmt" + "log" + "net/http" + "reflect" + "strings" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/verify" +) + +func ResourceDataLossPreventionDiscoveryConfig() *schema.Resource { + return &schema.Resource{ + Create: resourceDataLossPreventionDiscoveryConfigCreate, + Read: resourceDataLossPreventionDiscoveryConfigRead, + Update: resourceDataLossPreventionDiscoveryConfigUpdate, + Delete: resourceDataLossPreventionDiscoveryConfigDelete, + + Importer: &schema.ResourceImporter{ + State: resourceDataLossPreventionDiscoveryConfigImport, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(20 * time.Minute), + Update: schema.DefaultTimeout(20 * time.Minute), + Delete: schema.DefaultTimeout(20 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Location to create the discovery config in.`, + }, + "parent": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The parent of the discovery config in any of the following formats: + +* 'projects/{{project}}/locations/{{location}}' +* 'organizations/{{organization_id}}/locations/{{location}}'`, + }, + "actions": { + Type: schema.TypeList, + Optional: true, + Description: `Actions to execute at the completion of scanning`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "export_data": { + Type: schema.TypeList, + Optional: true, + Description: `Export data profiles into a provided location`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + 
"profile_table": { + Type: schema.TypeList, + Optional: true, + Description: `Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dataset_id": { + Type: schema.TypeString, + Optional: true, + Description: `Dataset Id of the table`, + }, + "project_id": { + Type: schema.TypeString, + Optional: true, + Description: `The Google Cloud Platform project ID of the project containing the table. If omitted, the project ID is inferred from the API call.`, + }, + "table_id": { + Type: schema.TypeString, + Optional: true, + Description: `Name of the table`, + }, + }, + }, + }, + }, + }, + }, + "pub_sub_notification": { + Type: schema.TypeList, + Optional: true, + Description: `Publish a message into the Pub/Sub topic.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "detail_of_message": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"TABLE_PROFILE", "RESOURCE_NAME", ""}), + Description: `How much data to include in the pub/sub message. Possible values: ["TABLE_PROFILE", "RESOURCE_NAME"]`, + }, + "event": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"NEW_PROFILE", "CHANGED_PROFILE", "SCORE_INCREASED", "ERROR_CHANGED", ""}), + Description: `The type of event that triggers a Pub/Sub. At most one PubSubNotification per EventType is permitted. 
Possible values: ["NEW_PROFILE", "CHANGED_PROFILE", "SCORE_INCREASED", "ERROR_CHANGED"]`, + }, + "pubsub_condition": { + Type: schema.TypeList, + Optional: true, + Description: `Conditions for triggering pubsub`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "expressions": { + Type: schema.TypeList, + Optional: true, + Description: `An expression`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "conditions": { + Type: schema.TypeList, + Optional: true, + Description: `Conditions to apply to the expression`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "minimum_risk_score": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"HIGH", "MEDIUM_OR_HIGH", ""}), + Description: `The minimum data risk score that triggers the condition. Possible values: ["HIGH", "MEDIUM_OR_HIGH"]`, + }, + "minimum_sensitivity_score": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"HIGH", "MEDIUM_OR_HIGH", ""}), + Description: `The minimum sensitivity level that triggers the condition. Possible values: ["HIGH", "MEDIUM_OR_HIGH"]`, + }, + }, + }, + }, + "logical_operator": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"OR", "AND", ""}), + Description: `The operator to apply to the collection of conditions Possible values: ["OR", "AND"]`, + }, + }, + }, + }, + }, + }, + }, + "topic": { + Type: schema.TypeString, + Optional: true, + Description: `Cloud Pub/Sub topic to send notifications to. 
Format is projects/{project}/topics/{topic}.`, + }, + }, + }, + }, + }, + }, + }, + "display_name": { + Type: schema.TypeString, + Optional: true, + Description: `Display Name (max 1000 Chars)`, + }, + "inspect_templates": { + Type: schema.TypeList, + Optional: true, + Description: `Detection logic for profile generation`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "org_config": { + Type: schema.TypeList, + Optional: true, + Description: `A nested object resource`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "location": { + Type: schema.TypeList, + Optional: true, + Description: `The data to scan folder org or project`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "folder_id": { + Type: schema.TypeString, + Optional: true, + Description: `The ID for the folder within an organization to scan`, + }, + "organization_id": { + Type: schema.TypeString, + Optional: true, + Description: `The ID of an organization to scan`, + }, + }, + }, + }, + "project_id": { + Type: schema.TypeString, + Optional: true, + Description: `The project that will run the scan. The DLP service account that exists within this project must have access to all resources that are profiled, and the cloud DLP API must be enabled.`, + }, + }, + }, + }, + "status": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"RUNNING", "PAUSED", ""}), + Description: `Required. A status for this configuration Possible values: ["RUNNING", "PAUSED"]`, + }, + "targets": { + Type: schema.TypeList, + Optional: true, + Description: `Target to match against for determining what to scan and how frequently`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "big_query_target": { + Type: schema.TypeList, + Optional: true, + Description: `BigQuery target for Discovery. 
The first target to match a table will be the one applied.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cadence": { + Type: schema.TypeList, + Optional: true, + Description: `How often and when to update profiles. New tables that match both the filter and conditions are scanned as quickly as possible depending on system capacity.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "schema_modified_cadence": { + Type: schema.TypeList, + Optional: true, + Description: `Governs when to update data profiles when a schema is modified`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "frequency": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"UPDATE_FREQUENCY_NEVER", "UPDATE_FREQUENCY_DAILY", "UPDATE_FREQUENCY_MONTHLY", ""}), + Description: `How frequently profiles may be updated when schemas are modified. Defaults to monthly Possible values: ["UPDATE_FREQUENCY_NEVER", "UPDATE_FREQUENCY_DAILY", "UPDATE_FREQUENCY_MONTHLY"]`, + }, + "types": { + Type: schema.TypeList, + Optional: true, + Description: `The type of events to consider when deciding if the table's schema has been modified and should have the profile updated. Defaults to NEW_COLUMN.
Possible values: ["SCHEMA_NEW_COLUMNS", "SCHEMA_REMOVED_COLUMNS"]`, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidateEnum([]string{"SCHEMA_NEW_COLUMNS", "SCHEMA_REMOVED_COLUMNS"}), + }, + }, + }, + }, + }, + "table_modified_cadence": { + Type: schema.TypeList, + Optional: true, + Description: `Governs when to update profile when a table is modified.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "frequency": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"UPDATE_FREQUENCY_NEVER", "UPDATE_FREQUENCY_DAILY", "UPDATE_FREQUENCY_MONTHLY", ""}), + Description: `How frequently data profiles can be updated when tables are modified. Defaults to never. Possible values: ["UPDATE_FREQUENCY_NEVER", "UPDATE_FREQUENCY_DAILY", "UPDATE_FREQUENCY_MONTHLY"]`, + }, + "types": { + Type: schema.TypeList, + Optional: true, + Description: `The type of events to consider when deciding if the table has been modified and should have the profile updated. 
Defaults to MODIFIED_TIMESTAMP Possible values: ["TABLE_MODIFIED_TIMESTAMP"]`, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidateEnum([]string{"TABLE_MODIFIED_TIMESTAMP"}), + }, + }, + }, + }, + }, + }, + }, + "conditions": { + Type: schema.TypeList, + Optional: true, + Description: `In addition to matching the filter, these conditions must be true before a profile is generated`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "created_after": { + Type: schema.TypeString, + Optional: true, + Description: `A timestamp in RFC3339 UTC "Zulu" format with nanosecond resolution and up to nine fractional digits.`, + }, + "or_conditions": { + Type: schema.TypeList, + Optional: true, + Description: `At least one of the conditions must be true for a table to be scanned.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "min_age": { + Type: schema.TypeString, + Optional: true, + Description: `Duration format. The minimum age a table must have before Cloud DLP can profile it. Value greater than 1.`, + }, + "min_row_count": { + Type: schema.TypeInt, + Optional: true, + Description: `Minimum number of rows that should be present before Cloud DLP profiles it as a table.`, + }, + }, + }, + }, + "type_collection": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"BIG_QUERY_COLLECTION_ALL_TYPES", "BIG_QUERY_COLLECTION_ONLY_SUPPORTED_TYPES", ""}), + Description: `Restrict discovery to categories of table types. Currently view, materialized view, snapshot and non-biglake external tables are supported.
Possible values: ["BIG_QUERY_COLLECTION_ALL_TYPES", "BIG_QUERY_COLLECTION_ONLY_SUPPORTED_TYPES"]`, + }, + "types": { + Type: schema.TypeList, + Optional: true, + Description: `Restrict discovery to specific table type`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "types": { + Type: schema.TypeList, + Optional: true, + Description: `A set of BigQuery table types Possible values: ["BIG_QUERY_TABLE_TYPE_TABLE", "BIG_QUERY_TABLE_TYPE_EXTERNAL_BIG_LAKE"]`, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidateEnum([]string{"BIG_QUERY_TABLE_TYPE_TABLE", "BIG_QUERY_TABLE_TYPE_EXTERNAL_BIG_LAKE"}), + }, + }, + }, + }, + }, + }, + }, + }, + "disabled": { + Type: schema.TypeList, + Optional: true, + Description: `Tables that match this filter will not have profiles created.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{}, + }, + }, + "filter": { + Type: schema.TypeList, + Optional: true, + Description: `Required. The tables the discovery cadence applies to. The first target with a matching filter will be the one to apply to a table`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "other_tables": { + Type: schema.TypeList, + Optional: true, + Description: `Catch-all. This should always be the last filter in the list because anything above it will apply first.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{}, + }, + }, + "tables": { + Type: schema.TypeList, + Optional: true, + Description: `A specific set of tables for this filter to apply to.
A table collection must be specified in only one filter per config.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "include_regexes": { + Type: schema.TypeList, + Optional: true, + Description: `A collection of regular expressions to match a BQ table against.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "patterns": { + Type: schema.TypeList, + Optional: true, + Description: `A single BigQuery regular expression pattern to match against one or more tables, datasets, or projects that contain BigQuery tables.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dataset_id_regex": { + Type: schema.TypeString, + Optional: true, + Description: `if unset, this property matches all datasets`, + }, + "project_id_regex": { + Type: schema.TypeString, + Optional: true, + Description: `For organizations, if unset, will match all projects. Has no effect for data profile configurations created within a project.`, + }, + "table_id_regex": { + Type: schema.TypeString, + Optional: true, + Description: `if unset, this property matches all tables`, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + "create_time": { + Type: schema.TypeString, + Computed: true, + Description: `Output only. The creation timestamp of a DiscoveryConfig.`, + }, + "errors": { + Type: schema.TypeList, + Computed: true, + Description: `Output only. A stream of errors encountered when the config was activated. Repeated errors may result in the config automatically being paused. Output only field. Will return the last 100 errors. 
Whenever the config is modified this list will be cleared.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "details": { + Type: schema.TypeList, + Optional: true, + Description: `Detailed error codes and messages.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "code": { + Type: schema.TypeInt, + Optional: true, + Description: `The status code, which should be an enum value of google.rpc.Code.`, + }, + "details": { + Type: schema.TypeList, + Optional: true, + Description: `A list of messages that carry the error details.`, + Elem: &schema.Schema{ + Type: schema.TypeMap, + }, + }, + "message": { + Type: schema.TypeString, + Optional: true, + Description: `A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.`, + }, + }, + }, + }, + "timestamp": { + Type: schema.TypeString, + Optional: true, + Description: `The times the error occurred. List includes the oldest timestamp and the last 9 timestamps.`, + }, + }, + }, + }, + "last_run_time": { + Type: schema.TypeString, + Computed: true, + Description: `Output only. The timestamp of the last time this config was executed`, + }, + "name": { + Type: schema.TypeString, + Computed: true, + Description: `Unique resource name for the DiscoveryConfig, assigned by the service when the DiscoveryConfig is created.`, + }, + "update_time": { + Type: schema.TypeString, + Computed: true, + Description: `Output only. 
The last update timestamp of a DiscoveryConfig.`, + }, + }, + UseJSONNumber: true, + } +} + +func resourceDataLossPreventionDiscoveryConfigCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + obj := make(map[string]interface{}) + displayNameProp, err := expandDataLossPreventionDiscoveryConfigDisplayName(d.Get("display_name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { + obj["displayName"] = displayNameProp + } + orgConfigProp, err := expandDataLossPreventionDiscoveryConfigOrgConfig(d.Get("org_config"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("org_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(orgConfigProp)) && (ok || !reflect.DeepEqual(v, orgConfigProp)) { + obj["orgConfig"] = orgConfigProp + } + inspectTemplatesProp, err := expandDataLossPreventionDiscoveryConfigInspectTemplates(d.Get("inspect_templates"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("inspect_templates"); !tpgresource.IsEmptyValue(reflect.ValueOf(inspectTemplatesProp)) && (ok || !reflect.DeepEqual(v, inspectTemplatesProp)) { + obj["inspectTemplates"] = inspectTemplatesProp + } + actionsProp, err := expandDataLossPreventionDiscoveryConfigActions(d.Get("actions"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("actions"); !tpgresource.IsEmptyValue(reflect.ValueOf(actionsProp)) && (ok || !reflect.DeepEqual(v, actionsProp)) { + obj["actions"] = actionsProp + } + targetsProp, err := expandDataLossPreventionDiscoveryConfigTargets(d.Get("targets"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("targets"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(targetsProp)) && (ok || !reflect.DeepEqual(v, targetsProp)) { + obj["targets"] = targetsProp + } + statusProp, err := expandDataLossPreventionDiscoveryConfigStatus(d.Get("status"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("status"); !tpgresource.IsEmptyValue(reflect.ValueOf(statusProp)) && (ok || !reflect.DeepEqual(v, statusProp)) { + obj["status"] = statusProp + } + + obj, err = resourceDataLossPreventionDiscoveryConfigEncoder(d, meta, obj) + if err != nil { + return err + } + + url, err := tpgresource.ReplaceVars(d, config, "{{DataLossPreventionBasePath}}{{parent}}/discoveryConfigs") + if err != nil { + return err + } + + log.Printf("[DEBUG] Creating new DiscoveryConfig: %#v", obj) + billingProject := "" + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "POST", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, + }) + if err != nil { + return fmt.Errorf("Error creating DiscoveryConfig: %s", err) + } + if err := d.Set("name", flattenDataLossPreventionDiscoveryConfigName(res["name"], d, config)); err != nil { + return fmt.Errorf(`Error setting computed identity field "name": %s`, err) + } + + // Store the ID now + id, err := tpgresource.ReplaceVars(d, config, "{{parent}}/discoveryConfigs/{{name}}") + if err != nil { + return fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + log.Printf("[DEBUG] Finished creating DiscoveryConfig %q: %#v", d.Id(), res) + + return resourceDataLossPreventionDiscoveryConfigRead(d, meta) +} + +func resourceDataLossPreventionDiscoveryConfigRead(d *schema.ResourceData, meta interface{}) error { + config := 
meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + url, err := tpgresource.ReplaceVars(d, config, "{{DataLossPreventionBasePath}}{{parent}}/discoveryConfigs/{{name}}") + if err != nil { + return err + } + + billingProject := "" + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataLossPreventionDiscoveryConfig %q", d.Id())) + } + + res, err = resourceDataLossPreventionDiscoveryConfigDecoder(d, meta, res) + if err != nil { + return err + } + + if res == nil { + // Decoding the object has resulted in it being gone. 
It may be marked deleted + log.Printf("[DEBUG] Removing DataLossPreventionDiscoveryConfig because it no longer exists.") + d.SetId("") + return nil + } + + if err := d.Set("name", flattenDataLossPreventionDiscoveryConfigName(res["name"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("display_name", flattenDataLossPreventionDiscoveryConfigDisplayName(res["displayName"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("org_config", flattenDataLossPreventionDiscoveryConfigOrgConfig(res["orgConfig"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("inspect_templates", flattenDataLossPreventionDiscoveryConfigInspectTemplates(res["inspectTemplates"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("actions", flattenDataLossPreventionDiscoveryConfigActions(res["actions"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("targets", flattenDataLossPreventionDiscoveryConfigTargets(res["targets"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("errors", flattenDataLossPreventionDiscoveryConfigErrors(res["errors"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("create_time", flattenDataLossPreventionDiscoveryConfigCreateTime(res["createTime"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("update_time", flattenDataLossPreventionDiscoveryConfigUpdateTime(res["updateTime"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("last_run_time", flattenDataLossPreventionDiscoveryConfigLastRunTime(res["lastRunTime"], d, config)); err 
!= nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + if err := d.Set("status", flattenDataLossPreventionDiscoveryConfigStatus(res["status"], d, config)); err != nil { + return fmt.Errorf("Error reading DiscoveryConfig: %s", err) + } + + return nil +} + +func resourceDataLossPreventionDiscoveryConfigUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + obj := make(map[string]interface{}) + displayNameProp, err := expandDataLossPreventionDiscoveryConfigDisplayName(d.Get("display_name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { + obj["displayName"] = displayNameProp + } + orgConfigProp, err := expandDataLossPreventionDiscoveryConfigOrgConfig(d.Get("org_config"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("org_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, orgConfigProp)) { + obj["orgConfig"] = orgConfigProp + } + inspectTemplatesProp, err := expandDataLossPreventionDiscoveryConfigInspectTemplates(d.Get("inspect_templates"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("inspect_templates"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, inspectTemplatesProp)) { + obj["inspectTemplates"] = inspectTemplatesProp + } + actionsProp, err := expandDataLossPreventionDiscoveryConfigActions(d.Get("actions"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("actions"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, actionsProp)) { + obj["actions"] = actionsProp + } + targetsProp, err := 
expandDataLossPreventionDiscoveryConfigTargets(d.Get("targets"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("targets"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, targetsProp)) { + obj["targets"] = targetsProp + } + statusProp, err := expandDataLossPreventionDiscoveryConfigStatus(d.Get("status"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("status"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, statusProp)) { + obj["status"] = statusProp + } + + obj, err = resourceDataLossPreventionDiscoveryConfigUpdateEncoder(d, meta, obj) + if err != nil { + return err + } + + url, err := tpgresource.ReplaceVars(d, config, "{{DataLossPreventionBasePath}}{{parent}}/discoveryConfigs/{{name}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Updating DiscoveryConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) + updateMask := []string{} + + if d.HasChange("display_name") { + updateMask = append(updateMask, "displayName") + } + + if d.HasChange("org_config") { + updateMask = append(updateMask, "orgConfig") + } + + if d.HasChange("inspect_templates") { + updateMask = append(updateMask, "inspectTemplates") + } + + if d.HasChange("actions") { + updateMask = append(updateMask, "actions") + } + + if d.HasChange("targets") { + updateMask = append(updateMask, "targets") + } + + if d.HasChange("status") { + updateMask = append(updateMask, "status") + } + // updateMask is a URL parameter but not present in the schema, so ReplaceVars + // won't set it + url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) + if err != nil { + return err + } + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + // if updateMask is empty we are not updating anything so skip the post + if 
len(updateMask) > 0 { + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "PATCH", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, + }) + + if err != nil { + return fmt.Errorf("Error updating DiscoveryConfig %q: %s", d.Id(), err) + } else { + log.Printf("[DEBUG] Finished updating DiscoveryConfig %q: %#v", d.Id(), res) + } + + } + + return resourceDataLossPreventionDiscoveryConfigRead(d, meta) +} + +func resourceDataLossPreventionDiscoveryConfigDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + url, err := tpgresource.ReplaceVars(d, config, "{{DataLossPreventionBasePath}}{{parent}}/discoveryConfigs/{{name}}") + if err != nil { + return err + } + + var obj map[string]interface{} + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + + log.Printf("[DEBUG] Deleting DiscoveryConfig %q", d.Id()) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "DELETE", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, "DiscoveryConfig") + } + + log.Printf("[DEBUG] Finished deleting DiscoveryConfig %q: %#v", d.Id(), res) + return nil +} + +func resourceDataLossPreventionDiscoveryConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + config := meta.(*transport_tpg.Config) + + // Custom import to handle parent possibilities + if err := 
tpgresource.ParseImportId([]string{"(?P<name>.+)"}, d, config); err != nil { + return nil, err + } + parts := strings.Split(d.Get("name").(string), "/") + if len(parts) == 6 { + if err := d.Set("name", parts[5]); err != nil { + return nil, fmt.Errorf("Error setting name: %s", err) + } + } else if len(parts) == 4 { + if err := d.Set("name", parts[3]); err != nil { + return nil, fmt.Errorf("Error setting name: %s", err) + } + } else { + return nil, fmt.Errorf("Unexpected import id: %s, expected form {{parent}}/discoveryConfigs/{{name}}", d.Get("name").(string)) + } + // Remove "/discoveryConfigs/{{name}}" from the id + parts = parts[:len(parts)-2] + if err := d.Set("parent", strings.Join(parts, "/")); err != nil { + return nil, fmt.Errorf("Error setting parent: %s", err) + } + + // Replace import id for the resource id + id, err := tpgresource.ReplaceVars(d, config, "{{parent}}/discoveryConfigs/{{name}}") + if err != nil { + return nil, fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + return []*schema.ResourceData{d}, nil +} + +func flattenDataLossPreventionDiscoveryConfigName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + return tpgresource.NameFromSelfLinkStateFunc(v) +} + +func flattenDataLossPreventionDiscoveryConfigDisplayName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigOrgConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["project_id"] = + flattenDataLossPreventionDiscoveryConfigOrgConfigProjectId(original["projectId"], d, config) + transformed["location"] = + flattenDataLossPreventionDiscoveryConfigOrgConfigLocation(original["location"], d, config) + return
[]interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigOrgConfigProjectId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigOrgConfigLocation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["organization_id"] = + flattenDataLossPreventionDiscoveryConfigOrgConfigLocationOrganizationId(original["organizationId"], d, config) + transformed["folder_id"] = + flattenDataLossPreventionDiscoveryConfigOrgConfigLocationFolderId(original["folderId"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigOrgConfigLocationOrganizationId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigOrgConfigLocationFolderId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigInspectTemplates(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigActions(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "export_data": flattenDataLossPreventionDiscoveryConfigActionsExportData(original["exportData"], d, config), + "pub_sub_notification": 
flattenDataLossPreventionDiscoveryConfigActionsPubSubNotification(original["pubSubNotification"], d, config), + }) + } + return transformed +} +func flattenDataLossPreventionDiscoveryConfigActionsExportData(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["profile_table"] = + flattenDataLossPreventionDiscoveryConfigActionsExportDataProfileTable(original["profileTable"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigActionsExportDataProfileTable(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["project_id"] = + flattenDataLossPreventionDiscoveryConfigActionsExportDataProfileTableProjectId(original["projectId"], d, config) + transformed["dataset_id"] = + flattenDataLossPreventionDiscoveryConfigActionsExportDataProfileTableDatasetId(original["datasetId"], d, config) + transformed["table_id"] = + flattenDataLossPreventionDiscoveryConfigActionsExportDataProfileTableTableId(original["tableId"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigActionsExportDataProfileTableProjectId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigActionsExportDataProfileTableDatasetId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigActionsExportDataProfileTableTableId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func 
flattenDataLossPreventionDiscoveryConfigActionsPubSubNotification(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["topic"] = + flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationTopic(original["topic"], d, config) + transformed["event"] = + flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationEvent(original["event"], d, config) + transformed["pubsub_condition"] = + flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubCondition(original["pubsubCondition"], d, config) + transformed["detail_of_message"] = + flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationDetailOfMessage(original["detailOfMessage"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationTopic(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationEvent(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubCondition(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["expressions"] = + flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressions(original["expressions"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressions(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v 
== nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["logical_operator"] = + flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsLogicalOperator(original["logicalOperator"], d, config) + transformed["conditions"] = + flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditions(original["conditions"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsLogicalOperator(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditions(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "minimum_risk_score": flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditionsMinimumRiskScore(original["minimumRiskScore"], d, config), + "minimum_sensitivity_score": flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditionsMinimumSensitivityScore(original["minimumSensitivityScore"], d, config), + }) + } + return transformed +} +func flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditionsMinimumRiskScore(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func 
flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditionsMinimumSensitivityScore(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigActionsPubSubNotificationDetailOfMessage(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargets(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "big_query_target": flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTarget(original["bigQueryTarget"], d, config), + }) + } + return transformed +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTarget(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["filter"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilter(original["filter"], d, config) + transformed["conditions"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditions(original["conditions"], d, config) + transformed["cadence"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadence(original["cadence"], d, config) + transformed["disabled"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetDisabled(original["disabled"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilter(v 
interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["tables"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTables(original["tables"], d, config) + transformed["other_tables"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterOtherTables(original["otherTables"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTables(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["include_regexes"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexes(original["includeRegexes"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["patterns"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatterns(original["patterns"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatterns(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if 
len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "project_id_regex": flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsProjectIdRegex(original["projectIdRegex"], d, config), + "dataset_id_regex": flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsDatasetIdRegex(original["datasetIdRegex"], d, config), + "table_id_regex": flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsTableIdRegex(original["tableIdRegex"], d, config), + }) + } + return transformed +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsProjectIdRegex(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsDatasetIdRegex(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsTableIdRegex(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterOtherTables(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + transformed := make(map[string]interface{}) + return []interface{}{transformed} +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditions(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["created_after"] = + 
flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsCreatedAfter(original["createdAfter"], d, config) + transformed["or_conditions"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditions(original["orConditions"], d, config) + transformed["types"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypes(original["types"], d, config) + transformed["type_collection"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypeCollection(original["typeCollection"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsCreatedAfter(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditions(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["min_age"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditionsMinAge(original["minAge"], d, config) + transformed["min_row_count"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditionsMinRowCount(original["minRowCount"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditionsMinAge(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditionsMinRowCount(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := 
tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["types"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypesTypes(original["types"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypesTypes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypeCollection(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadence(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["schema_modified_cadence"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadence(original["schemaModifiedCadence"], d, config) + transformed["table_modified_cadence"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadence(original["tableModifiedCadence"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadence(v interface{}, d 
*schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["types"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadenceTypes(original["types"], d, config) + transformed["frequency"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadenceFrequency(original["frequency"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadenceTypes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadenceFrequency(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadence(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["types"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadenceTypes(original["types"], d, config) + transformed["frequency"] = + flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadenceFrequency(original["frequency"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadenceTypes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadenceFrequency(v interface{}, 
d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigTargetsBigQueryTargetDisabled(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + transformed := make(map[string]interface{}) + return []interface{}{transformed} +} + +func flattenDataLossPreventionDiscoveryConfigErrors(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "details": flattenDataLossPreventionDiscoveryConfigErrorsDetails(original["details"], d, config), + "timestamp": flattenDataLossPreventionDiscoveryConfigErrorsTimestamp(original["timestamp"], d, config), + }) + } + return transformed +} +func flattenDataLossPreventionDiscoveryConfigErrorsDetails(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["code"] = + flattenDataLossPreventionDiscoveryConfigErrorsDetailsCode(original["code"], d, config) + transformed["message"] = + flattenDataLossPreventionDiscoveryConfigErrorsDetailsMessage(original["message"], d, config) + transformed["details"] = + flattenDataLossPreventionDiscoveryConfigErrorsDetailsDetails(original["details"], d, config) + return []interface{}{transformed} +} +func flattenDataLossPreventionDiscoveryConfigErrorsDetailsCode(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { 
+ if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenDataLossPreventionDiscoveryConfigErrorsDetailsMessage(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigErrorsDetailsDetails(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigErrorsTimestamp(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigCreateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigUpdateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigLastRunTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenDataLossPreventionDiscoveryConfigStatus(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandDataLossPreventionDiscoveryConfigDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigOrgConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedProjectId, err := 
expandDataLossPreventionDiscoveryConfigOrgConfigProjectId(original["project_id"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedProjectId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["projectId"] = transformedProjectId + } + + transformedLocation, err := expandDataLossPreventionDiscoveryConfigOrgConfigLocation(original["location"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedLocation); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["location"] = transformedLocation + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigOrgConfigProjectId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigOrgConfigLocation(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedOrganizationId, err := expandDataLossPreventionDiscoveryConfigOrgConfigLocationOrganizationId(original["organization_id"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedOrganizationId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["organizationId"] = transformedOrganizationId + } + + transformedFolderId, err := expandDataLossPreventionDiscoveryConfigOrgConfigLocationFolderId(original["folder_id"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedFolderId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["folderId"] = transformedFolderId + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigOrgConfigLocationOrganizationId(v interface{}, d 
tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigOrgConfigLocationFolderId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigInspectTemplates(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigActions(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + req := make([]interface{}, 0, len(l)) + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedExportData, err := expandDataLossPreventionDiscoveryConfigActionsExportData(original["export_data"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedExportData); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["exportData"] = transformedExportData + } + + transformedPubSubNotification, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotification(original["pub_sub_notification"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedPubSubNotification); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["pubSubNotification"] = transformedPubSubNotification + } + + req = append(req, transformed) + } + return req, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsExportData(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + 
transformedProfileTable, err := expandDataLossPreventionDiscoveryConfigActionsExportDataProfileTable(original["profile_table"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedProfileTable); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["profileTable"] = transformedProfileTable + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsExportDataProfileTable(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedProjectId, err := expandDataLossPreventionDiscoveryConfigActionsExportDataProfileTableProjectId(original["project_id"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedProjectId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["projectId"] = transformedProjectId + } + + transformedDatasetId, err := expandDataLossPreventionDiscoveryConfigActionsExportDataProfileTableDatasetId(original["dataset_id"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedDatasetId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["datasetId"] = transformedDatasetId + } + + transformedTableId, err := expandDataLossPreventionDiscoveryConfigActionsExportDataProfileTableTableId(original["table_id"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTableId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["tableId"] = transformedTableId + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsExportDataProfileTableProjectId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, 
nil +} + +func expandDataLossPreventionDiscoveryConfigActionsExportDataProfileTableDatasetId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsExportDataProfileTableTableId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotification(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedTopic, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationTopic(original["topic"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTopic); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["topic"] = transformedTopic + } + + transformedEvent, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationEvent(original["event"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedEvent); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["event"] = transformedEvent + } + + transformedPubsubCondition, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubCondition(original["pubsub_condition"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedPubsubCondition); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["pubsubCondition"] = transformedPubsubCondition + } + + transformedDetailOfMessage, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationDetailOfMessage(original["detail_of_message"], d, config) + if err != nil { + return nil, err + } else 
if val := reflect.ValueOf(transformedDetailOfMessage); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["detailOfMessage"] = transformedDetailOfMessage + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationTopic(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationEvent(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubCondition(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedExpressions, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressions(original["expressions"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedExpressions); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["expressions"] = transformedExpressions + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressions(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedLogicalOperator, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsLogicalOperator(original["logical_operator"], d, config) + if err != nil { + return nil, err + } else 
if val := reflect.ValueOf(transformedLogicalOperator); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["logicalOperator"] = transformedLogicalOperator + } + + transformedConditions, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditions(original["conditions"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedConditions); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["conditions"] = transformedConditions + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsLogicalOperator(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditions(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + req := make([]interface{}, 0, len(l)) + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedMinimumRiskScore, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditionsMinimumRiskScore(original["minimum_risk_score"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedMinimumRiskScore); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["minimumRiskScore"] = transformedMinimumRiskScore + } + + transformedMinimumSensitivityScore, err := expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditionsMinimumSensitivityScore(original["minimum_sensitivity_score"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedMinimumSensitivityScore); val.IsValid() 
&& !tpgresource.IsEmptyValue(val) { + transformed["minimumSensitivityScore"] = transformedMinimumSensitivityScore + } + + req = append(req, transformed) + } + return req, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditionsMinimumRiskScore(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationPubsubConditionExpressionsConditionsMinimumSensitivityScore(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigActionsPubSubNotificationDetailOfMessage(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargets(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + req := make([]interface{}, 0, len(l)) + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedBigQueryTarget, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTarget(original["big_query_target"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedBigQueryTarget); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["bigQueryTarget"] = transformedBigQueryTarget + } + + req = append(req, transformed) + } + return req, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTarget(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + 
transformed := make(map[string]interface{}) + + transformedFilter, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilter(original["filter"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedFilter); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["filter"] = transformedFilter + } + + transformedConditions, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditions(original["conditions"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedConditions); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["conditions"] = transformedConditions + } + + transformedCadence, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadence(original["cadence"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedCadence); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["cadence"] = transformedCadence + } + + transformedDisabled, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetDisabled(original["disabled"], d, config) + if err != nil { + return nil, err + } else { + transformed["disabled"] = transformedDisabled + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilter(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedTables, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTables(original["tables"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTables); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["tables"] = transformedTables + } + + 
transformedOtherTables, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterOtherTables(original["other_tables"], d, config) + if err != nil { + return nil, err + } else { + transformed["otherTables"] = transformedOtherTables + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTables(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedIncludeRegexes, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexes(original["include_regexes"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedIncludeRegexes); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["includeRegexes"] = transformedIncludeRegexes + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexes(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedPatterns, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatterns(original["patterns"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedPatterns); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["patterns"] = transformedPatterns + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatterns(v interface{}, d tpgresource.TerraformResourceData, config 
*transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + req := make([]interface{}, 0, len(l)) + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedProjectIdRegex, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsProjectIdRegex(original["project_id_regex"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedProjectIdRegex); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["projectIdRegex"] = transformedProjectIdRegex + } + + transformedDatasetIdRegex, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsDatasetIdRegex(original["dataset_id_regex"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedDatasetIdRegex); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["datasetIdRegex"] = transformedDatasetIdRegex + } + + transformedTableIdRegex, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsTableIdRegex(original["table_id_regex"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTableIdRegex); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["tableIdRegex"] = transformedTableIdRegex + } + + req = append(req, transformed) + } + return req, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsProjectIdRegex(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsDatasetIdRegex(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func 
expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterTablesIncludeRegexesPatternsTableIdRegex(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetFilterOtherTables(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 { + return nil, nil + } + + if l[0] == nil { + transformed := make(map[string]interface{}) + return transformed, nil + } + transformed := make(map[string]interface{}) + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditions(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedCreatedAfter, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsCreatedAfter(original["created_after"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedCreatedAfter); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["createdAfter"] = transformedCreatedAfter + } + + transformedOrConditions, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditions(original["or_conditions"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedOrConditions); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["orConditions"] = transformedOrConditions + } + + transformedTypes, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypes(original["types"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTypes); val.IsValid() && 
!tpgresource.IsEmptyValue(val) { + transformed["types"] = transformedTypes + } + + transformedTypeCollection, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypeCollection(original["type_collection"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTypeCollection); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["typeCollection"] = transformedTypeCollection + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsCreatedAfter(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditions(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedMinAge, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditionsMinAge(original["min_age"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedMinAge); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["minAge"] = transformedMinAge + } + + transformedMinRowCount, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditionsMinRowCount(original["min_row_count"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedMinRowCount); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["minRowCount"] = transformedMinRowCount + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditionsMinAge(v interface{}, d tpgresource.TerraformResourceData, config 
*transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsOrConditionsMinRowCount(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypes(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedTypes, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypesTypes(original["types"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTypes); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["types"] = transformedTypes + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypesTypes(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetConditionsTypeCollection(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadence(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedSchemaModifiedCadence, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadence(original["schema_modified_cadence"], d, config) + 
if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedSchemaModifiedCadence); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["schemaModifiedCadence"] = transformedSchemaModifiedCadence + } + + transformedTableModifiedCadence, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadence(original["table_modified_cadence"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTableModifiedCadence); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["tableModifiedCadence"] = transformedTableModifiedCadence + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadence(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedTypes, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadenceTypes(original["types"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTypes); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["types"] = transformedTypes + } + + transformedFrequency, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadenceFrequency(original["frequency"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedFrequency); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["frequency"] = transformedFrequency + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadenceTypes(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { 
+ return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceSchemaModifiedCadenceFrequency(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadence(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedTypes, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadenceTypes(original["types"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTypes); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["types"] = transformedTypes + } + + transformedFrequency, err := expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadenceFrequency(original["frequency"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedFrequency); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["frequency"] = transformedFrequency + } + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadenceTypes(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetCadenceTableModifiedCadenceFrequency(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDataLossPreventionDiscoveryConfigTargetsBigQueryTargetDisabled(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) 
{ + l := v.([]interface{}) + if len(l) == 0 { + return nil, nil + } + + if l[0] == nil { + transformed := make(map[string]interface{}) + return transformed, nil + } + transformed := make(map[string]interface{}) + + return transformed, nil +} + +func expandDataLossPreventionDiscoveryConfigStatus(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func resourceDataLossPreventionDiscoveryConfigEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { + newObj := make(map[string]interface{}) + newObj["discoveryConfig"] = obj + return newObj, nil +} + +func resourceDataLossPreventionDiscoveryConfigUpdateEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { + newObj := make(map[string]interface{}) + newObj["discoveryConfig"] = obj + return newObj, nil +} + +func resourceDataLossPreventionDiscoveryConfigDecoder(d *schema.ResourceData, meta interface{}, res map[string]interface{}) (map[string]interface{}, error) { + v, ok := res["discoveryConfig"] + if !ok || v == nil { + return res, nil + } + + return v.(map[string]interface{}), nil +} diff --git a/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config_generated_test.go b/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config_generated_test.go new file mode 100644 index 0000000000..e50fd512a7 --- /dev/null +++ b/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config_generated_test.go @@ -0,0 +1,494 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. +// +// ---------------------------------------------------------------------------- + +package datalossprevention_test + +import ( + "fmt" + "strings" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +func TestAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigBasicExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "location": envvar.GetTestRegionFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigBasicExample(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"parent", "location"}, + }, + }, + }) +} + +func 
testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigBasicExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "projects/%{project}/locations/us" + location = "us" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +`, context) +} + +func TestAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigActionsExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigActionsExample(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.actions", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"parent", "location"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigActionsExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_discovery_config" "actions" { + parent = "projects/%{project}/locations/us" + location = "us" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + actions { + export_data { + profile_table { + 
project_id = "project" + dataset_id = "dataset" + table_id = "table" + } + } + } + actions { + pub_sub_notification { + topic = "projects/%{project}/topics/${google_pubsub_topic.actions.name}" + event = "NEW_PROFILE" + pubsub_condition { + expressions { + logical_operator = "OR" + conditions { + minimum_sensitivity_score = "HIGH" + } + } + } + detail_of_message = "TABLE_PROFILE" + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} + +resource "google_pubsub_topic" "actions" { + name = "fake-topic" +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +`, context) +} + +func TestAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgRunningExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "organization": envvar.GetTestOrgFromEnv(t), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgRunningExample(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.org_running", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"parent", "location"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgRunningExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_discovery_config" "org_running" { + parent = "organizations/%{organization}/locations/us" 
+ location = "us" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + org_config { + project_id = "%{project}" + location { + organization_id = "%{organization}" + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] + status = "RUNNING" +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +`, context) +} + +func TestAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgFolderPausedExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "organization": envvar.GetTestOrgFromEnv(t), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgFolderPausedExample(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.org_folder_paused", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"parent", "location"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgFolderPausedExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_discovery_config" "org_folder_paused" { + parent = "organizations/%{organization}/locations/us" + location = "us" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + org_config { + project_id = "%{project}" + location { + folder_id = 123 + } + } + inspect_templates = 
["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] + status = "PAUSED" +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +`, context) +} + +func TestAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigConditionsCadenceExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigConditionsCadenceExample(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.conditions_cadence", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"parent", "location"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigConditionsCadenceExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_discovery_config" "conditions_cadence" { + parent = "projects/%{project}/locations/us" + location = "us" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + conditions { + type_collection = "BIG_QUERY_COLLECTION_ALL_TYPES" + } + cadence { + schema_modified_cadence { + types = ["SCHEMA_NEW_COLUMNS"] + frequency = "UPDATE_FREQUENCY_DAILY" + } + table_modified_cadence { + types = ["TABLE_MODIFIED_TIMESTAMP"] + frequency = "UPDATE_FREQUENCY_DAILY" + } + } + } + } + inspect_templates = 
["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +`, context) +} + +func TestAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigFilterRegexesAndConditionsExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigFilterRegexesAndConditionsExample(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.filter_regexes_and_conditions", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"parent", "location"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigFilterRegexesAndConditionsExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_discovery_config" "filter_regexes_and_conditions" { + parent = "projects/%{project}/locations/us" + location = "us" + status = "RUNNING" + + targets { + big_query_target { + filter { + tables { + include_regexes { + patterns { + project_id_regex = ".*" + dataset_id_regex = ".*" + table_id_regex = ".*" + } + } + } + } + conditions { + created_after = "2023-10-02T15:01:23Z" + types { + types = ["BIG_QUERY_TABLE_TYPE_TABLE", "BIG_QUERY_TABLE_TYPE_EXTERNAL_BIG_LAKE"] + } + or_conditions { + min_row_count = 10 + min_age = "10800s" + } + } + } + } + 
targets { + big_query_target { + filter { + other_tables {} + } + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +`, context) +} + +func testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for name, rs := range s.RootModule().Resources { + if rs.Type != "google_data_loss_prevention_discovery_config" { + continue + } + if strings.HasPrefix(name, "data.") { + continue + } + + config := acctest.GoogleProviderConfig(t) + + url, err := tpgresource.ReplaceVarsForTest(config, rs, "{{DataLossPreventionBasePath}}{{parent}}/discoveryConfigs/{{name}}") + if err != nil { + return err + } + + billingProject := "" + + if config.BillingProject != "" { + billingProject = config.BillingProject + } + + _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: billingProject, + RawURL: url, + UserAgent: config.UserAgent, + }) + if err == nil { + return fmt.Errorf("DataLossPreventionDiscoveryConfig still exists at %s", url) + } + } + + return nil + } +} diff --git a/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config_sweeper.go b/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config_sweeper.go new file mode 100644 index 0000000000..727bc2216e --- /dev/null +++ b/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config_sweeper.go @@ -0,0 +1,139 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. +// +// ---------------------------------------------------------------------------- + +package datalossprevention + +import ( + "context" + "log" + "strings" + "testing" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/sweeper" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +func init() { + sweeper.AddTestSweepers("DataLossPreventionDiscoveryConfig", testSweepDataLossPreventionDiscoveryConfig) +} + +// At the time of writing, the CI only passes us-central1 as the region +func testSweepDataLossPreventionDiscoveryConfig(region string) error { + resourceName := "DataLossPreventionDiscoveryConfig" + log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) + + config, err := sweeper.SharedConfigForRegion(region) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) + return err + } + + err = config.LoadAndValidate(context.Background()) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) + return err + } + + t := &testing.T{} + billingId := envvar.GetTestBillingAccountFromEnv(t) + + // Setup variables to replace in list template + d := &tpgresource.ResourceDataMock{ + FieldsInSchema: map[string]interface{}{ + "project": config.Project, + "region": region, + "location": region, + "zone": "-", + 
"billing_account": billingId, + }, + } + + listTemplate := strings.Split("https://dlp.googleapis.com/v2/{{parent}}/discoveryConfigs", "?")[0] + listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) + return nil + } + + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: config.Project, + RawURL: listUrl, + UserAgent: config.UserAgent, + }) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) + return nil + } + + resourceList, ok := res["discoveryConfigs"] + if !ok { + log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") + return nil + } + + rl := resourceList.([]interface{}) + + log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) + // Keep count of items that aren't sweepable for logging. + nonPrefixCount := 0 + for _, ri := range rl { + obj := ri.(map[string]interface{}) + if obj["name"] == nil { + log.Printf("[INFO][SWEEPER_LOG] %s resource name was nil", resourceName) + return nil + } + + name := tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) + // Skip resources that shouldn't be swept + if !sweeper.IsSweepableTestResource(name) { + nonPrefixCount++ + continue + } + + deleteTemplate := "https://dlp.googleapis.com/v2/{{parent}}/discoveryConfigs/{{name}}" + deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) + return nil + } + deleteUrl = deleteUrl + name + + // Don't wait on operations as we may have a lot to delete + _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "DELETE", + Project: config.Project, + RawURL: deleteUrl, + UserAgent: config.UserAgent, + }) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error deleting
for url %s : %s", deleteUrl, err) + } else { + log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) + } + } + + if nonPrefixCount > 0 { + log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) + } + + return nil +} diff --git a/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config_test.go b/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config_test.go new file mode 100644 index 0000000000..9b26e4c07f --- /dev/null +++ b/google-beta/services/datalossprevention/resource_data_loss_prevention_discovery_config_test.go @@ -0,0 +1,588 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package datalossprevention_test + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" +) + +func TestAccDataLossPreventionDiscoveryConfig_Update(t *testing.T) { + testCases := map[string]func(t *testing.T){ + "basic": testAccDataLossPreventionDiscoveryConfig_BasicUpdate, + "org": testAccDataLossPreventionDiscoveryConfig_OrgUpdate, + "actions": testAccDataLossPreventionDiscoveryConfig_ActionsUpdate, + "conditions": testAccDataLossPreventionDiscoveryConfig_ConditionsCadenceUpdate, + "filter": testAccDataLossPreventionDiscoveryConfig_FilterUpdate, + } + for name, tc := range testCases { + // shadow the tc variable into scope so that when + // the loop continues, if t.Run hasn't executed tc(t) + // yet, we don't have a race condition + // see https://github.com/golang/go/wiki/CommonMistakes#using-goroutines-on-loop-iterator-variables + tc := tc + t.Run(name, func(t *testing.T) { + tc(t) + }) + } +} + +func testAccDataLossPreventionDiscoveryConfig_BasicUpdate(t *testing.T) { + + context := map[string]interface{}{ + "project": 
envvar.GetTestProjectFromEnv(), + "location": envvar.GetTestRegionFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigStart(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigUpdate(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_OrgUpdate(t *testing.T) { + + context := map[string]interface{}{ + "organization": envvar.GetTestOrgFromEnv(t), + "project": envvar.GetTestProjectFromEnv(), + "location": envvar.GetTestRegionFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgRunning(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + { + Config: 
testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgFolderPaused(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_ActionsUpdate(t *testing.T) { + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "location": envvar.GetTestRegionFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigStart(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigActions(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigActionsSensitivity(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_ConditionsCadenceUpdate(t *testing.T) { + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "location": 
envvar.GetTestRegionFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigStart(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigConditionsCadence(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_FilterUpdate(t *testing.T) { + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "location": envvar.GetTestRegionFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataLossPreventionDiscoveryConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigStart(context), + }, + { + ResourceName: "google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + { + Config: testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigFilterRegexesAndConditions(context), + }, + { + ResourceName: 
"google_data_loss_prevention_discovery_config.basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "parent", "last_run_time", "update_time"}, + }, + }, + }) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigStart(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "Description" + display_name = "Display" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} + +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "projects/%{project}/locations/%{location}" + location = "%{location}" + display_name = "display name" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} +`, context) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigUpdate(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_inspect_template" "custom_type" { + parent = "projects/%{project}" + description = "Description" + display_name = "Display" + + inspect_config { + custom_info_types { + info_type { + name = "MY_CUSTOM_TYPE" + } + + likelihood = "UNLIKELY" + + regex { + pattern = "test*" + } + } + info_types { + name = "EMAIL_ADDRESS" + } + } +} + +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "projects/%{project}/locations/%{location}" + location = "%{location}" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + conditions { + or_conditions { + min_row_count = 10 + min_age = "10800s" + } + } + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.custom_type.name}"] +} +`, context) +} + +func 
testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigActions(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "Description" + display_name = "Display" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} + +resource "google_pubsub_topic" "basic" { + name = "test-topic" +} + +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "projects/%{project}/locations/%{location}" + location = "%{location}" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + actions { + export_data { + profile_table { + project_id = "project" + dataset_id = "dataset" + table_id = "table" + } + } + } + actions { + pub_sub_notification { + topic = "projects/%{project}/topics/${google_pubsub_topic.basic.name}" + event = "NEW_PROFILE" + pubsub_condition { + expressions { + logical_operator = "OR" + conditions { + minimum_risk_score = "HIGH" + } + } + } + detail_of_message = "TABLE_PROFILE" + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} +`, context) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigActionsSensitivity(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "Description" + display_name = "Display" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} + +resource "google_pubsub_topic" "basic" { + name = "test-topic" +} + +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "projects/%{project}/locations/%{location}" + location = "%{location}" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + actions { + export_data { + profile_table { + 
project_id = "project" + dataset_id = "dataset" + table_id = "table" + } + } + } + actions { + pub_sub_notification { + topic = "projects/%{project}/topics/${google_pubsub_topic.basic.name}" + event = "NEW_PROFILE" + pubsub_condition { + expressions { + logical_operator = "OR" + conditions { + minimum_sensitivity_score = "HIGH" + } + } + } + detail_of_message = "TABLE_PROFILE" + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} +`, context) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgRunning(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "Description" + display_name = "Display" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} + +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "organizations/%{organization}/locations/%{location}" + location = "%{location}" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + org_config { + project_id = "%{project}" + location { + organization_id = "%{organization}" + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] + status = "RUNNING" +} +`, context) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigOrgFolderPaused(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "Description" + display_name = "Display" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} + +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "organizations/%{organization}/locations/%{location}" + location = "%{location}" + + targets { + big_query_target { + filter { + 
other_tables {} + } + } + } + org_config { + project_id = "%{project}" + location { + folder_id = 123 + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] + status = "PAUSED" +} +`, context) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigConditionsCadence(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "Description" + display_name = "Display" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} + +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "projects/%{project}/locations/%{location}" + location = "%{location}" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + conditions { + type_collection = "BIG_QUERY_COLLECTION_ALL_TYPES" + } + cadence { + schema_modified_cadence { + types = ["SCHEMA_NEW_COLUMNS"] + frequency = "UPDATE_FREQUENCY_DAILY" + } + table_modified_cadence { + types = ["TABLE_MODIFIED_TIMESTAMP"] + frequency = "UPDATE_FREQUENCY_DAILY" + } + } + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} +`, context) +} + +func testAccDataLossPreventionDiscoveryConfig_dlpDiscoveryConfigFilterRegexesAndConditions(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/%{project}" + description = "Description" + display_name = "Display" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} + +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "projects/%{project}/locations/%{location}" + location = "%{location}" + status = "RUNNING" + + targets { + big_query_target { + filter { + tables { + include_regexes { + patterns { + 
project_id_regex = ".*" + dataset_id_regex = ".*" + table_id_regex = ".*" + } + } + } + } + conditions { + created_after = "2023-10-02T15:01:23Z" + types { + types = ["BIG_QUERY_TABLE_TYPE_TABLE", "BIG_QUERY_TABLE_TYPE_EXTERNAL_BIG_LAKE"] + } + or_conditions { + min_row_count = 10 + min_age = "21600s" + } + } + } + } + targets { + big_query_target { + filter { + other_tables {} + } + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} +`, context) +} diff --git a/google-beta/services/datalossprevention/resource_data_loss_prevention_inspect_template.go b/google-beta/services/datalossprevention/resource_data_loss_prevention_inspect_template.go index 6d95e3e302..82649ca5af 100644 --- a/google-beta/services/datalossprevention/resource_data_loss_prevention_inspect_template.go +++ b/google-beta/services/datalossprevention/resource_data_loss_prevention_inspect_template.go @@ -20,6 +20,7 @@ package datalossprevention import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -789,6 +790,7 @@ func resourceDataLossPreventionInspectTemplateCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -797,6 +799,7 @@ func resourceDataLossPreventionInspectTemplateCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating InspectTemplate: %s", err) @@ -836,12 +839,14 @@ func resourceDataLossPreventionInspectTemplateRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, 
d, fmt.Sprintf("DataLossPreventionInspectTemplate %q", d.Id())) @@ -915,6 +920,7 @@ func resourceDataLossPreventionInspectTemplateUpdate(d *schema.ResourceData, met } log.Printf("[DEBUG] Updating InspectTemplate %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -950,6 +956,7 @@ func resourceDataLossPreventionInspectTemplateUpdate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -984,6 +991,8 @@ func resourceDataLossPreventionInspectTemplateDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting InspectTemplate %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -993,6 +1002,7 @@ func resourceDataLossPreventionInspectTemplateDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "InspectTemplate") diff --git a/google-beta/services/datalossprevention/resource_data_loss_prevention_job_trigger.go b/google-beta/services/datalossprevention/resource_data_loss_prevention_job_trigger.go index 2231410114..196931c49d 100644 --- a/google-beta/services/datalossprevention/resource_data_loss_prevention_job_trigger.go +++ b/google-beta/services/datalossprevention/resource_data_loss_prevention_job_trigger.go @@ -20,6 +20,7 @@ package datalossprevention import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -1469,6 +1470,7 @@ func resourceDataLossPreventionJobTriggerCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1477,6 +1479,7 @@ func resourceDataLossPreventionJobTriggerCreate(d *schema.ResourceData, meta int 
UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating JobTrigger: %s", err) @@ -1516,12 +1519,14 @@ func resourceDataLossPreventionJobTriggerRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataLossPreventionJobTrigger %q", d.Id())) @@ -1622,6 +1627,7 @@ func resourceDataLossPreventionJobTriggerUpdate(d *schema.ResourceData, meta int } log.Printf("[DEBUG] Updating JobTrigger %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -1665,6 +1671,7 @@ func resourceDataLossPreventionJobTriggerUpdate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1699,6 +1706,8 @@ func resourceDataLossPreventionJobTriggerDelete(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting JobTrigger %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1708,6 +1717,7 @@ func resourceDataLossPreventionJobTriggerDelete(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "JobTrigger") diff --git a/google-beta/services/datalossprevention/resource_data_loss_prevention_stored_info_type.go b/google-beta/services/datalossprevention/resource_data_loss_prevention_stored_info_type.go index 51fa52f239..4a95d0899a 100644 --- 
a/google-beta/services/datalossprevention/resource_data_loss_prevention_stored_info_type.go +++ b/google-beta/services/datalossprevention/resource_data_loss_prevention_stored_info_type.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -343,6 +344,7 @@ func resourceDataLossPreventionStoredInfoTypeCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -351,6 +353,7 @@ func resourceDataLossPreventionStoredInfoTypeCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating StoredInfoType: %s", err) @@ -439,12 +442,14 @@ func resourceDataLossPreventionStoredInfoTypeRead(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataLossPreventionStoredInfoType %q", d.Id())) @@ -536,6 +541,7 @@ func resourceDataLossPreventionStoredInfoTypeUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating StoredInfoType %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -579,6 +585,7 @@ func resourceDataLossPreventionStoredInfoTypeUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -613,6 +620,8 @@ func resourceDataLossPreventionStoredInfoTypeDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting StoredInfoType %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -622,6 +631,7 @@ func resourceDataLossPreventionStoredInfoTypeDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "StoredInfoType") diff --git a/google-beta/services/datapipeline/resource_data_pipeline_pipeline.go b/google-beta/services/datapipeline/resource_data_pipeline_pipeline.go index ea411a8896..f924a9e6d2 100644 --- a/google-beta/services/datapipeline/resource_data_pipeline_pipeline.go +++ b/google-beta/services/datapipeline/resource_data_pipeline_pipeline.go @@ -20,6 +20,7 @@ package datapipeline import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -586,6 +587,7 @@ func resourceDataPipelinePipelineCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -594,6 +596,7 @@ func resourceDataPipelinePipelineCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Pipeline: %s", err) @@ -636,12 +639,14 @@ func resourceDataPipelinePipelineRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataPipelinePipeline %q", d.Id())) @@ -735,6 +740,7 @@ func resourceDataPipelinePipelineUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating Pipeline %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} 
if d.HasChange("display_name") { @@ -774,6 +780,7 @@ func resourceDataPipelinePipelineUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -814,6 +821,8 @@ func resourceDataPipelinePipelineDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Pipeline %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -823,6 +832,7 @@ func resourceDataPipelinePipelineDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Pipeline") diff --git a/google-beta/services/dataplex/resource_dataplex_datascan.go b/google-beta/services/dataplex/resource_dataplex_datascan.go index 28bb089329..e99fca79f1 100644 --- a/google-beta/services/dataplex/resource_dataplex_datascan.go +++ b/google-beta/services/dataplex/resource_dataplex_datascan.go @@ -20,6 +20,7 @@ package dataplex import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -650,6 +651,7 @@ func resourceDataplexDatascanCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -658,6 +660,7 @@ func resourceDataplexDatascanCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Datascan: %s", err) @@ -710,12 +713,14 @@ func resourceDataplexDatascanRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: 
"GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataplexDatascan %q", d.Id())) @@ -836,6 +841,7 @@ func resourceDataplexDatascanUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Datascan %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -883,6 +889,7 @@ func resourceDataplexDatascanUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -930,6 +937,8 @@ func resourceDataplexDatascanDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Datascan %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -939,6 +948,7 @@ func resourceDataplexDatascanDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Datascan") diff --git a/google-beta/services/dataplex/resource_dataplex_task.go b/google-beta/services/dataplex/resource_dataplex_task.go index fd8015784c..08f51ceca7 100644 --- a/google-beta/services/dataplex/resource_dataplex_task.go +++ b/google-beta/services/dataplex/resource_dataplex_task.go @@ -20,6 +20,7 @@ package dataplex import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -640,6 +641,7 @@ func resourceDataplexTaskCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -648,6 +650,7 @@ func resourceDataplexTaskCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, 
Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Task: %s", err) @@ -700,12 +703,14 @@ func resourceDataplexTaskRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataplexTask %q", d.Id())) @@ -829,6 +834,7 @@ func resourceDataplexTaskUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Task %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -880,6 +886,7 @@ func resourceDataplexTaskUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -927,6 +934,8 @@ func resourceDataplexTaskDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Task %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -936,6 +945,7 @@ func resourceDataplexTaskDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Task") diff --git a/google-beta/services/dataproc/resource_dataproc_autoscaling_policy.go b/google-beta/services/dataproc/resource_dataproc_autoscaling_policy.go index bf0247de9d..f036a85e7e 100644 --- a/google-beta/services/dataproc/resource_dataproc_autoscaling_policy.go +++ b/google-beta/services/dataproc/resource_dataproc_autoscaling_policy.go @@ -20,6 +20,7 @@ package dataproc import ( 
"fmt" "log" + "net/http" "reflect" "time" @@ -303,6 +304,7 @@ func resourceDataprocAutoscalingPolicyCreate(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -311,6 +313,7 @@ func resourceDataprocAutoscalingPolicyCreate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AutoscalingPolicy: %s", err) @@ -356,12 +359,14 @@ func resourceDataprocAutoscalingPolicyRead(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataprocAutoscalingPolicy %q", d.Id())) @@ -437,6 +442,7 @@ func resourceDataprocAutoscalingPolicyUpdate(d *schema.ResourceData, meta interf } log.Printf("[DEBUG] Updating AutoscalingPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -451,6 +457,7 @@ func resourceDataprocAutoscalingPolicyUpdate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -489,6 +496,8 @@ func resourceDataprocAutoscalingPolicyDelete(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AutoscalingPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -498,6 +507,7 @@ func resourceDataprocAutoscalingPolicyDelete(d *schema.ResourceData, meta interf 
UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AutoscalingPolicy") diff --git a/google-beta/services/dataproc/resource_dataproc_cluster_test.go b/google-beta/services/dataproc/resource_dataproc_cluster_test.go index 3e6b34b507..0b58d9c946 100644 --- a/google-beta/services/dataproc/resource_dataproc_cluster_test.go +++ b/google-beta/services/dataproc/resource_dataproc_cluster_test.go @@ -152,7 +152,7 @@ func TestAccDataprocCluster_withAccelerators(t *testing.T) { var cluster dataproc.Cluster project := envvar.GetTestProjectFromEnv() - acceleratorType := "nvidia-tesla-k80" + acceleratorType := "nvidia-tesla-t4" zone := "us-central1-c" networkName := acctest.BootstrapSharedTestNetwork(t, "dataproc-cluster") subnetworkName := acctest.BootstrapSubnet(t, "dataproc-cluster", networkName) @@ -176,7 +176,7 @@ func TestAccDataprocCluster_withAccelerators(t *testing.T) { func testAccCheckDataprocAuxiliaryNodeGroupAccelerator(cluster *dataproc.Cluster, project string) resource.TestCheckFunc { return func(s *terraform.State) error { - expectedUri := fmt.Sprintf("projects/%s/zones/.*/acceleratorTypes/nvidia-tesla-k80", project) + expectedUri := fmt.Sprintf("projects/%s/zones/.*/acceleratorTypes/nvidia-tesla-t4", project) r := regexp.MustCompile(expectedUri) nodeGroup := cluster.Config.AuxiliaryNodeGroups[0].NodeGroup.NodeGroupConfig.Accelerators @@ -194,7 +194,7 @@ func testAccCheckDataprocAuxiliaryNodeGroupAccelerator(cluster *dataproc.Cluster func testAccCheckDataprocClusterAccelerator(cluster *dataproc.Cluster, project string, masterCount int, workerCount int) resource.TestCheckFunc { return func(s *terraform.State) error { - expectedUri := fmt.Sprintf("projects/%s/zones/.*/acceleratorTypes/nvidia-tesla-k80", project) + expectedUri := fmt.Sprintf("projects/%s/zones/.*/acceleratorTypes/nvidia-tesla-t4", project) r := regexp.MustCompile(expectedUri) master := 
cluster.Config.MasterConfig.Accelerators @@ -551,7 +551,7 @@ func TestAccDataprocCluster_spotWithAuxiliaryNodeGroups(t *testing.T) { resource.TestCheckResourceAttr("google_dataproc_cluster.with_auxiliary_node_groups", "cluster_config.0.auxiliary_node_groups.0.node_group.0.roles.0", "DRIVER"), resource.TestCheckResourceAttr("google_dataproc_cluster.with_auxiliary_node_groups", "cluster_config.0.auxiliary_node_groups.0.node_group.0.node_group_config.0.num_instances", "2"), resource.TestCheckResourceAttr("google_dataproc_cluster.with_auxiliary_node_groups", "cluster_config.0.auxiliary_node_groups.0.node_group.0.node_group_config.0.machine_type", "n1-standard-2"), - resource.TestCheckResourceAttr("google_dataproc_cluster.with_auxiliary_node_groups", "cluster_config.0.auxiliary_node_groups.0.node_group.0.node_group_config.0.min_cpu_platform", "AMD Rome"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_auxiliary_node_groups", "cluster_config.0.auxiliary_node_groups.0.node_group.0.node_group_config.0.min_cpu_platform", "Intel Haswell"), resource.TestCheckResourceAttr("google_dataproc_cluster.with_auxiliary_node_groups", "cluster_config.0.auxiliary_node_groups.0.node_group.0.node_group_config.0.disk_config.0.boot_disk_size_gb", "35"), resource.TestCheckResourceAttr("google_dataproc_cluster.with_auxiliary_node_groups", "cluster_config.0.auxiliary_node_groups.0.node_group.0.node_group_config.0.disk_config.0.boot_disk_type", "pd-standard"), resource.TestCheckResourceAttr("google_dataproc_cluster.with_auxiliary_node_groups", "cluster_config.0.auxiliary_node_groups.0.node_group.0.node_group_config.0.disk_config.0.num_local_ssds", "1"), @@ -702,7 +702,10 @@ func TestAccDataprocCluster_withServiceAcc(t *testing.T) { acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckDataprocClusterDestroy(t), + ExternalProviders: 
map[string]resource.ExternalProvider{ + "time": {}, + }, + CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ { Config: testAccDataprocCluster_withServiceAcc(sa, rnd, subnetworkName), @@ -1959,7 +1962,7 @@ resource "google_dataproc_cluster" "with_auxiliary_node_groups" { node_group_config{ num_instances=2 machine_type="n1-standard-2" - min_cpu_platform = "AMD Rome" + min_cpu_platform = "Intel Haswell" disk_config { boot_disk_size_gb = 35 boot_disk_type = "pd-standard" @@ -1967,7 +1970,7 @@ resource "google_dataproc_cluster" "with_auxiliary_node_groups" { } accelerators { accelerator_count = 1 - accelerator_type = "nvidia-tesla-k80" + accelerator_type = "nvidia-tesla-t4" } } } @@ -2222,6 +2225,13 @@ resource "google_project_iam_member" "service_account" { member = "serviceAccount:${google_service_account.service_account.email}" } +# Wait for IAM propagation +resource "time_sleep" "wait_120_seconds" { + depends_on = [google_project_iam_member.service_account] + + create_duration = "120s" +} + resource "google_dataproc_cluster" "with_service_account" { name = "dproc-cluster-test-%s" region = "us-central1" @@ -2259,7 +2269,7 @@ resource "google_dataproc_cluster" "with_service_account" { } } - depends_on = [google_project_iam_member.service_account] + depends_on = [time_sleep.wait_120_seconds] } `, sa, rnd, subnetworkName) } diff --git a/google-beta/services/dataprocmetastore/resource_dataproc_metastore_federation.go b/google-beta/services/dataprocmetastore/resource_dataproc_metastore_federation.go index c2ecb358ec..fa8af7e8ae 100644 --- a/google-beta/services/dataprocmetastore/resource_dataproc_metastore_federation.go +++ b/google-beta/services/dataprocmetastore/resource_dataproc_metastore_federation.go @@ -20,6 +20,7 @@ package dataprocmetastore import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -203,6 +204,7 @@ func resourceDataprocMetastoreFederationCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := 
make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -211,6 +213,7 @@ func resourceDataprocMetastoreFederationCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Federation: %s", err) @@ -263,12 +266,14 @@ func resourceDataprocMetastoreFederationRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataprocMetastoreFederation %q", d.Id())) @@ -347,6 +352,7 @@ func resourceDataprocMetastoreFederationUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating Federation %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("backend_metastores") { @@ -378,6 +384,7 @@ func resourceDataprocMetastoreFederationUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -425,6 +432,8 @@ func resourceDataprocMetastoreFederationDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Federation %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -434,6 +443,7 @@ func resourceDataprocMetastoreFederationDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Federation") diff --git a/google-beta/services/dataprocmetastore/resource_dataproc_metastore_service.go 
b/google-beta/services/dataprocmetastore/resource_dataproc_metastore_service.go index 858bbcd28c..2f897809da 100644 --- a/google-beta/services/dataprocmetastore/resource_dataproc_metastore_service.go +++ b/google-beta/services/dataprocmetastore/resource_dataproc_metastore_service.go @@ -20,6 +20,7 @@ package dataprocmetastore import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -562,6 +563,7 @@ func resourceDataprocMetastoreServiceCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -570,6 +572,7 @@ func resourceDataprocMetastoreServiceCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Service: %s", err) @@ -622,12 +625,14 @@ func resourceDataprocMetastoreServiceRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DataprocMetastoreService %q", d.Id())) @@ -790,6 +795,7 @@ func resourceDataprocMetastoreServiceUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating Service %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("port") { @@ -853,6 +859,7 @@ func resourceDataprocMetastoreServiceUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -900,6 +907,8 @@ func resourceDataprocMetastoreServiceDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + 
log.Printf("[DEBUG] Deleting Service %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -909,6 +918,7 @@ func resourceDataprocMetastoreServiceDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Service") diff --git a/google-beta/services/datastore/resource_datastore_index.go b/google-beta/services/datastore/resource_datastore_index.go index e2c204d35a..b1adc76c01 100644 --- a/google-beta/services/datastore/resource_datastore_index.go +++ b/google-beta/services/datastore/resource_datastore_index.go @@ -20,6 +20,7 @@ package datastore import ( "fmt" "log" + "net/http" "reflect" "time" @@ -151,6 +152,7 @@ func resourceDatastoreIndexCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -159,6 +161,7 @@ func resourceDatastoreIndexCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.DatastoreIndex409Contention}, }) if err != nil { @@ -226,12 +229,14 @@ func resourceDatastoreIndexRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.DatastoreIndex409Contention}, }) if err != nil { @@ -285,6 +290,8 @@ func resourceDatastoreIndexDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + 
log.Printf("[DEBUG] Deleting Index %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -294,6 +301,7 @@ func resourceDatastoreIndexDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.DatastoreIndex409Contention}, }) if err != nil { diff --git a/google-beta/services/datastore/resource_datastore_index_generated_test.go b/google-beta/services/datastore/resource_datastore_index_generated_test.go index 6315b660d6..e1b104f7fc 100644 --- a/google-beta/services/datastore/resource_datastore_index_generated_test.go +++ b/google-beta/services/datastore/resource_datastore_index_generated_test.go @@ -26,6 +26,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" ) @@ -34,6 +35,7 @@ func TestAccDatastoreIndex_datastoreIndexExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ + "project_id": envvar.GetTestProjectFromEnv(), "random_suffix": acctest.RandString(t, 10), } @@ -56,6 +58,19 @@ func TestAccDatastoreIndex_datastoreIndexExample(t *testing.T) { func testAccDatastoreIndex_datastoreIndexExample(context map[string]interface{}) string { return acctest.Nprintf(` +resource "google_firestore_database" "database" { + project = "%{project_id}" + # google_datastore_index resources only support the (default) database. + # However, google_firestore_index can express any Datastore Mode index + # and should be preferred in all cases. 
+ name = "(default)" + location_id = "nam5" + type = "DATASTORE_MODE" + + delete_protection_state = "DELETE_PROTECTION_DISABLED" + deletion_policy = "DELETE" +} + resource "google_datastore_index" "default" { kind = "foo" properties { @@ -66,6 +81,8 @@ resource "google_datastore_index" "default" { name = "tf_test_property_b%{random_suffix}" direction = "ASCENDING" } + + depends_on = [google_firestore_database.database] } `, context) } diff --git a/google-beta/services/datastream/resource_datastream_connection_profile.go b/google-beta/services/datastream/resource_datastream_connection_profile.go index 23b53beacc..a201ec5efd 100644 --- a/google-beta/services/datastream/resource_datastream_connection_profile.go +++ b/google-beta/services/datastream/resource_datastream_connection_profile.go @@ -20,6 +20,7 @@ package datastream import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -448,6 +449,7 @@ func resourceDatastreamConnectionProfileCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -456,6 +458,7 @@ func resourceDatastreamConnectionProfileCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ConnectionProfile: %s", err) @@ -522,12 +525,14 @@ func resourceDatastreamConnectionProfileRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DatastreamConnectionProfile %q", d.Id())) @@ -654,6 +659,7 @@ func resourceDatastreamConnectionProfileUpdate(d *schema.ResourceData, meta inte } 
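Several of the update functions in this diff build an update mask the same way: start from an empty `updateMask := []string{}`, append each field name whose `d.HasChange(...)` returns true, then join the names with commas for the PATCH request's `updateMask` query parameter. A minimal self-contained sketch of that pattern, where `hasChange` is a hypothetical stand-in for the SDK's `d.HasChange` rather than the provider's real plumbing:

```go
package main

import (
	"fmt"
	"strings"
)

// buildUpdateMask collects the names of changed fields into an update mask,
// mirroring the repeated updateMask/HasChange pattern in the diff. hasChange
// is a stand-in for Terraform's (*schema.ResourceData).HasChange.
func buildUpdateMask(hasChange func(string) bool, fields []string) string {
	mask := []string{}
	for _, f := range fields {
		if hasChange(f) {
			mask = append(mask, f)
		}
	}
	// The provider joins the mask with commas and appends it to the PATCH URL.
	return strings.Join(mask, ",")
}

func main() {
	changed := map[string]bool{
		"description":    true,
		"display_name":   false,
		"inspect_config": true,
	}
	mask := buildUpdateMask(
		func(f string) bool { return changed[f] },
		[]string{"description", "display_name", "inspect_config"},
	)
	fmt.Println(mask) // prints "description,inspect_config"
}
```

Only fields that actually changed end up in the mask, so the API leaves all other server-side fields untouched.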
log.Printf("[DEBUG] Updating ConnectionProfile %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -713,6 +719,7 @@ func resourceDatastreamConnectionProfileUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -760,6 +767,8 @@ func resourceDatastreamConnectionProfileDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ConnectionProfile %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -769,6 +778,7 @@ func resourceDatastreamConnectionProfileDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ConnectionProfile") diff --git a/google-beta/services/datastream/resource_datastream_private_connection.go b/google-beta/services/datastream/resource_datastream_private_connection.go index b47f6bdd31..de76c8435f 100644 --- a/google-beta/services/datastream/resource_datastream_private_connection.go +++ b/google-beta/services/datastream/resource_datastream_private_connection.go @@ -22,6 +22,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" "reflect" "time" @@ -250,6 +251,7 @@ func resourceDatastreamPrivateConnectionCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -258,6 +260,7 @@ func resourceDatastreamPrivateConnectionCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PrivateConnection: %s", err) @@ -328,12 +331,14 @@ func 
resourceDatastreamPrivateConnectionRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DatastreamPrivateConnection %q", d.Id())) @@ -403,6 +408,8 @@ func resourceDatastreamPrivateConnectionDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PrivateConnection %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -412,6 +419,7 @@ func resourceDatastreamPrivateConnectionDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "PrivateConnection") diff --git a/google-beta/services/datastream/resource_datastream_stream.go b/google-beta/services/datastream/resource_datastream_stream.go index 08c908601d..743eb5d153 100644 --- a/google-beta/services/datastream/resource_datastream_stream.go +++ b/google-beta/services/datastream/resource_datastream_stream.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -1391,6 +1392,7 @@ func resourceDatastreamStreamCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1399,6 +1401,7 @@ func resourceDatastreamStreamCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Stream: %s", err) @@ -1476,12 +1479,14 @@ func 
resourceDatastreamStreamRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DatastreamStream %q", d.Id())) @@ -1598,6 +1603,7 @@ func resourceDatastreamStreamUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Stream %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -1672,6 +1678,7 @@ func resourceDatastreamStreamUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1722,6 +1729,8 @@ func resourceDatastreamStreamDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Stream %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1731,6 +1740,7 @@ func resourceDatastreamStreamDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Stream") diff --git a/google-beta/services/deploymentmanager/resource_deployment_manager_deployment.go b/google-beta/services/deploymentmanager/resource_deployment_manager_deployment.go index 3d610e7431..9962eedbd1 100644 --- a/google-beta/services/deploymentmanager/resource_deployment_manager_deployment.go +++ b/google-beta/services/deploymentmanager/resource_deployment_manager_deployment.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "time" @@ -280,6 +281,7 @@ func resourceDeploymentManagerDeploymentCreate(d 
*schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -288,6 +290,7 @@ func resourceDeploymentManagerDeploymentCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Deployment: %s", err) @@ -341,12 +344,14 @@ func resourceDeploymentManagerDeploymentRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DeploymentManagerDeployment %q", d.Id())) @@ -426,6 +431,8 @@ func resourceDeploymentManagerDeploymentUpdate(d *schema.ResourceData, meta inte return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -439,6 +446,7 @@ func resourceDeploymentManagerDeploymentUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Deployment %q: %s", d.Id(), err) @@ -503,6 +511,8 @@ func resourceDeploymentManagerDeploymentUpdate(d *schema.ResourceData, meta inte return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -516,6 +526,7 @@ func resourceDeploymentManagerDeploymentUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: 
headers, }) if err != nil { return fmt.Errorf("Error updating Deployment %q: %s", d.Id(), err) @@ -563,6 +574,8 @@ func resourceDeploymentManagerDeploymentDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Deployment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -572,6 +585,7 @@ func resourceDeploymentManagerDeploymentDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Deployment") diff --git a/google-beta/services/dialogflow/resource_dialogflow_agent.go b/google-beta/services/dialogflow/resource_dialogflow_agent.go index e8be6183f7..995d1259ab 100644 --- a/google-beta/services/dialogflow/resource_dialogflow_agent.go +++ b/google-beta/services/dialogflow/resource_dialogflow_agent.go @@ -20,6 +20,7 @@ package dialogflow import ( "fmt" "log" + "net/http" "reflect" "time" @@ -252,6 +253,7 @@ func resourceDialogflowAgentCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -260,6 +262,7 @@ func resourceDialogflowAgentCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Agent: %s", err) @@ -302,12 +305,14 @@ func resourceDialogflowAgentRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("DialogflowAgent %q", d.Id())) @@ -440,6 +445,7 @@ func resourceDialogflowAgentUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Agent %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -454,6 +460,7 @@ func resourceDialogflowAgentUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -492,6 +499,8 @@ func resourceDialogflowAgentDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Agent %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -501,6 +510,7 @@ func resourceDialogflowAgentDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Agent") diff --git a/google-beta/services/dialogflow/resource_dialogflow_entity_type.go b/google-beta/services/dialogflow/resource_dialogflow_entity_type.go index dee8a158ea..ea9f5ab1b2 100644 --- a/google-beta/services/dialogflow/resource_dialogflow_entity_type.go +++ b/google-beta/services/dialogflow/resource_dialogflow_entity_type.go @@ -20,6 +20,7 @@ package dialogflow import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -173,6 +174,7 @@ func resourceDialogflowEntityTypeCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -181,6 +183,7 @@ func resourceDialogflowEntityTypeCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EntityType: %s", err) @@ -244,12 +247,14 @@ func resourceDialogflowEntityTypeRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowEntityType %q", d.Id())) @@ -325,6 +330,7 @@ func resourceDialogflowEntityTypeUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating EntityType %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -339,6 +345,7 @@ func resourceDialogflowEntityTypeUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -377,6 +384,8 @@ func resourceDialogflowEntityTypeDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EntityType %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -386,6 +395,7 @@ func resourceDialogflowEntityTypeDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EntityType") diff --git a/google-beta/services/dialogflow/resource_dialogflow_fulfillment.go b/google-beta/services/dialogflow/resource_dialogflow_fulfillment.go index ce7c549072..c8d53896d0 100644 --- a/google-beta/services/dialogflow/resource_dialogflow_fulfillment.go +++ 
b/google-beta/services/dialogflow/resource_dialogflow_fulfillment.go @@ -20,6 +20,7 @@ package dialogflow import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -180,6 +181,7 @@ func resourceDialogflowFulfillmentCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -188,6 +190,7 @@ func resourceDialogflowFulfillmentCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Fulfillment: %s", err) @@ -251,12 +254,14 @@ func resourceDialogflowFulfillmentRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowFulfillment %q", d.Id())) @@ -332,6 +337,7 @@ func resourceDialogflowFulfillmentUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating Fulfillment %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -371,6 +377,7 @@ func resourceDialogflowFulfillmentUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -411,6 +418,8 @@ func resourceDialogflowFulfillmentDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Fulfillment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -420,6 +429,7 @@ func resourceDialogflowFulfillmentDelete(d 
*schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Fulfillment") diff --git a/google-beta/services/dialogflow/resource_dialogflow_intent.go b/google-beta/services/dialogflow/resource_dialogflow_intent.go index 136eb56135..fc53dd61f2 100644 --- a/google-beta/services/dialogflow/resource_dialogflow_intent.go +++ b/google-beta/services/dialogflow/resource_dialogflow_intent.go @@ -20,6 +20,7 @@ package dialogflow import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -282,6 +283,7 @@ func resourceDialogflowIntentCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -290,6 +292,7 @@ func resourceDialogflowIntentCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Intent: %s", err) @@ -353,12 +356,14 @@ func resourceDialogflowIntentRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowIntent %q", d.Id())) @@ -497,6 +502,7 @@ func resourceDialogflowIntentUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Intent %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -511,6 +517,7 @@ func resourceDialogflowIntentUpdate(d 
*schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -549,6 +556,8 @@ func resourceDialogflowIntentDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Intent %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -558,6 +567,7 @@ func resourceDialogflowIntentDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Intent") diff --git a/google-beta/services/dialogflowcx/resource_dialogflow_cx_agent.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_agent.go index 398cbc9177..b30fd81d89 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_agent.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_agent.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -388,6 +389,7 @@ func resourceDialogflowCXAgentCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -396,6 +398,7 @@ func resourceDialogflowCXAgentCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Agent: %s", err) @@ -441,12 +444,14 @@ func resourceDialogflowCXAgentRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: 
headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXAgent %q", d.Id())) @@ -600,6 +605,7 @@ func resourceDialogflowCXAgentUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Agent %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -671,6 +677,7 @@ func resourceDialogflowCXAgentUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -711,6 +718,8 @@ func resourceDialogflowCXAgentDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Agent %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -720,6 +729,7 @@ func resourceDialogflowCXAgentDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Agent") diff --git a/google-beta/services/dialogflowcx/resource_dialogflow_cx_entity_type.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_entity_type.go index 3aeeea51f7..9b21e8482a 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_entity_type.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_entity_type.go @@ -20,6 +20,7 @@ package dialogflowcx import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -216,6 +217,8 @@ func resourceDialogflowCXEntityTypeCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -238,6 +241,7 @@ func resourceDialogflowCXEntityTypeCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: 
headers, }) if err != nil { return fmt.Errorf("Error creating EntityType: %s", err) @@ -277,6 +281,8 @@ func resourceDialogflowCXEntityTypeRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -297,6 +303,7 @@ func resourceDialogflowCXEntityTypeRead(d *schema.ResourceData, meta interface{} Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXEntityType %q", d.Id())) @@ -389,6 +396,7 @@ func resourceDialogflowCXEntityTypeUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating EntityType %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -455,6 +463,7 @@ func resourceDialogflowCXEntityTypeUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -489,6 +498,8 @@ func resourceDialogflowCXEntityTypeDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -513,6 +524,7 @@ func resourceDialogflowCXEntityTypeDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EntityType") diff --git a/google-beta/services/dialogflowcx/resource_dialogflow_cx_environment.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_environment.go index f9cc3e5670..a7b6b7c403 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_environment.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_environment.go @@ -20,6 +20,7 @@ package dialogflowcx import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" 
@@ -143,6 +144,8 @@ func resourceDialogflowCXEnvironmentCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -165,6 +168,7 @@ func resourceDialogflowCXEnvironmentCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Environment: %s", err) @@ -225,6 +229,8 @@ func resourceDialogflowCXEnvironmentRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -245,6 +251,7 @@ func resourceDialogflowCXEnvironmentRead(d *schema.ResourceData, meta interface{ Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXEnvironment %q", d.Id())) @@ -304,6 +311,7 @@ func resourceDialogflowCXEnvironmentUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating Environment %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -354,6 +362,7 @@ func resourceDialogflowCXEnvironmentUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -395,6 +404,8 @@ func resourceDialogflowCXEnvironmentDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -419,6 +430,7 @@ func resourceDialogflowCXEnvironmentDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Environment") diff --git 
a/google-beta/services/dialogflowcx/resource_dialogflow_cx_flow.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_flow.go index 9f2cd5b7ef..e91ff3642c 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_flow.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_flow.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -771,6 +772,8 @@ func resourceDialogflowCXFlowCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -821,6 +824,7 @@ func resourceDialogflowCXFlowCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Flow: %s", err) @@ -860,6 +864,8 @@ func resourceDialogflowCXFlowRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -880,6 +886,7 @@ func resourceDialogflowCXFlowRead(d *schema.ResourceData, meta interface{}) erro Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXFlow %q", d.Id())) @@ -977,6 +984,7 @@ func resourceDialogflowCXFlowUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Flow %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -1043,6 +1051,7 @@ func resourceDialogflowCXFlowUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1077,6 +1086,8 @@ func resourceDialogflowCXFlowDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := 
make(http.Header) + // extract location from the parent location := "" @@ -1112,6 +1123,7 @@ func resourceDialogflowCXFlowDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Flow") diff --git a/google-beta/services/dialogflowcx/resource_dialogflow_cx_intent.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_intent.go index bff60c9831..4ba8973af6 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_intent.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_intent.go @@ -20,6 +20,7 @@ package dialogflowcx import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -295,6 +296,8 @@ func resourceDialogflowCXIntentCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -345,6 +348,7 @@ func resourceDialogflowCXIntentCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Intent: %s", err) @@ -384,6 +388,8 @@ func resourceDialogflowCXIntentRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -404,6 +410,7 @@ func resourceDialogflowCXIntentRead(d *schema.ResourceData, meta interface{}) er Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXIntent %q", d.Id())) @@ -507,6 +514,7 @@ func resourceDialogflowCXIntentUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Intent %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") 
{ @@ -573,6 +581,7 @@ func resourceDialogflowCXIntentUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -607,6 +616,8 @@ func resourceDialogflowCXIntentDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -642,6 +653,7 @@ func resourceDialogflowCXIntentDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Intent") diff --git a/google-beta/services/dialogflowcx/resource_dialogflow_cx_page.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_page.go index 687b3e3bf2..5915368144 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_page.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_page.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -1481,6 +1482,8 @@ func resourceDialogflowCXPageCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -1503,6 +1506,7 @@ func resourceDialogflowCXPageCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Page: %s", err) @@ -1542,6 +1546,8 @@ func resourceDialogflowCXPageRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -1562,6 +1568,7 @@ func resourceDialogflowCXPageRead(d *schema.ResourceData, meta interface{}) erro Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil 
{ return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXPage %q", d.Id())) @@ -1657,6 +1664,7 @@ func resourceDialogflowCXPageUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Page %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -1723,6 +1731,7 @@ func resourceDialogflowCXPageUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1757,6 +1766,8 @@ func resourceDialogflowCXPageDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -1781,6 +1792,7 @@ func resourceDialogflowCXPageDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Page") diff --git a/google-beta/services/dialogflowcx/resource_dialogflow_cx_security_settings.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_security_settings.go index b88c9a6a0e..7ecfbb43a1 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_security_settings.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_security_settings.go @@ -20,6 +20,7 @@ package dialogflowcx import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -276,6 +277,7 @@ func resourceDialogflowCXSecuritySettingsCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -284,6 +286,7 @@ func resourceDialogflowCXSecuritySettingsCreate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { 
return fmt.Errorf("Error creating SecuritySettings: %s", err) @@ -329,12 +332,14 @@ func resourceDialogflowCXSecuritySettingsRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXSecuritySettings %q", d.Id())) @@ -464,6 +469,7 @@ func resourceDialogflowCXSecuritySettingsUpdate(d *schema.ResourceData, meta int } log.Printf("[DEBUG] Updating SecuritySettings %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -527,6 +533,7 @@ func resourceDialogflowCXSecuritySettingsUpdate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -567,6 +574,8 @@ func resourceDialogflowCXSecuritySettingsDelete(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting SecuritySettings %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -576,6 +585,7 @@ func resourceDialogflowCXSecuritySettingsDelete(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "SecuritySettings") diff --git a/google-beta/services/dialogflowcx/resource_dialogflow_cx_test_case.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_test_case.go index 14442bf0e9..e04b32c443 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_test_case.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_test_case.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" 
"reflect" "regexp" "strings" @@ -626,6 +627,8 @@ func resourceDialogflowCXTestCaseCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -648,6 +651,7 @@ func resourceDialogflowCXTestCaseCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TestCase: %s", err) @@ -687,6 +691,8 @@ func resourceDialogflowCXTestCaseRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -707,6 +713,7 @@ func resourceDialogflowCXTestCaseRead(d *schema.ResourceData, meta interface{}) Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXTestCase %q", d.Id())) @@ -787,6 +794,7 @@ func resourceDialogflowCXTestCaseUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating TestCase %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("tags") { @@ -845,6 +853,7 @@ func resourceDialogflowCXTestCaseUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -879,6 +888,8 @@ func resourceDialogflowCXTestCaseDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -903,6 +914,7 @@ func resourceDialogflowCXTestCaseDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TestCase") diff --git 
a/google-beta/services/dialogflowcx/resource_dialogflow_cx_version.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_version.go index 477a8597a4..c5577043c6 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_version.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_version.go @@ -20,6 +20,7 @@ package dialogflowcx import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -163,6 +164,8 @@ func resourceDialogflowCXVersionCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -185,6 +188,7 @@ func resourceDialogflowCXVersionCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Version: %s", err) @@ -245,6 +249,8 @@ func resourceDialogflowCXVersionRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -265,6 +271,7 @@ func resourceDialogflowCXVersionRead(d *schema.ResourceData, meta interface{}) e Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXVersion %q", d.Id())) @@ -321,6 +328,7 @@ func resourceDialogflowCXVersionUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Version %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -367,6 +375,7 @@ func resourceDialogflowCXVersionUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -401,6 +410,8 @@ func resourceDialogflowCXVersionDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + 
headers := make(http.Header) + // extract location from the parent location := "" @@ -425,6 +436,7 @@ func resourceDialogflowCXVersionDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Version") diff --git a/google-beta/services/dialogflowcx/resource_dialogflow_cx_webhook.go b/google-beta/services/dialogflowcx/resource_dialogflow_cx_webhook.go index fd27553fe3..56ce0fecf4 100644 --- a/google-beta/services/dialogflowcx/resource_dialogflow_cx_webhook.go +++ b/google-beta/services/dialogflowcx/resource_dialogflow_cx_webhook.go @@ -20,6 +20,7 @@ package dialogflowcx import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -247,6 +248,8 @@ func resourceDialogflowCXWebhookCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -269,6 +272,7 @@ func resourceDialogflowCXWebhookCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Webhook: %s", err) @@ -308,6 +312,8 @@ func resourceDialogflowCXWebhookRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -328,6 +334,7 @@ func resourceDialogflowCXWebhookRead(d *schema.ResourceData, meta interface{}) e Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DialogflowCXWebhook %q", d.Id())) @@ -432,6 +439,7 @@ func resourceDialogflowCXWebhookUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Webhook %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if 
d.HasChange("display_name") { @@ -502,6 +510,7 @@ func resourceDialogflowCXWebhookUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -536,6 +545,8 @@ func resourceDialogflowCXWebhookDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + // extract location from the parent location := "" @@ -560,6 +571,7 @@ func resourceDialogflowCXWebhookDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Webhook") diff --git a/google-beta/services/discoveryengine/resource_discovery_engine_chat_engine.go b/google-beta/services/discoveryengine/resource_discovery_engine_chat_engine.go index 76757012c6..8dd4f8d745 100644 --- a/google-beta/services/discoveryengine/resource_discovery_engine_chat_engine.go +++ b/google-beta/services/discoveryengine/resource_discovery_engine_chat_engine.go @@ -20,6 +20,7 @@ package discoveryengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -259,6 +260,7 @@ func resourceDiscoveryEngineChatEngineCreate(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -267,6 +269,7 @@ func resourceDiscoveryEngineChatEngineCreate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ChatEngine: %s", err) @@ -333,12 +336,14 @@ func resourceDiscoveryEngineChatEngineRead(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", 
Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DiscoveryEngineChatEngine %q", d.Id())) @@ -410,6 +415,7 @@ func resourceDiscoveryEngineChatEngineUpdate(d *schema.ResourceData, meta interf } log.Printf("[DEBUG] Updating ChatEngine %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -437,6 +443,7 @@ func resourceDiscoveryEngineChatEngineUpdate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -477,6 +484,8 @@ func resourceDiscoveryEngineChatEngineDelete(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ChatEngine %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -486,6 +495,7 @@ func resourceDiscoveryEngineChatEngineDelete(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ChatEngine") diff --git a/google-beta/services/discoveryengine/resource_discovery_engine_data_store.go b/google-beta/services/discoveryengine/resource_discovery_engine_data_store.go index 2f467a4fc5..e5b7d10641 100644 --- a/google-beta/services/discoveryengine/resource_discovery_engine_data_store.go +++ b/google-beta/services/discoveryengine/resource_discovery_engine_data_store.go @@ -20,6 +20,7 @@ package discoveryengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -186,6 +187,7 @@ func resourceDiscoveryEngineDataStoreCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -194,6 +196,7 @@ 
func resourceDiscoveryEngineDataStoreCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DataStore: %s", err) @@ -246,12 +249,14 @@ func resourceDiscoveryEngineDataStoreRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DiscoveryEngineDataStore %q", d.Id())) @@ -315,6 +320,7 @@ func resourceDiscoveryEngineDataStoreUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating DataStore %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -342,6 +348,7 @@ func resourceDiscoveryEngineDataStoreUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -389,6 +396,8 @@ func resourceDiscoveryEngineDataStoreDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting DataStore %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -398,6 +407,7 @@ func resourceDiscoveryEngineDataStoreDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "DataStore") diff --git a/google-beta/services/discoveryengine/resource_discovery_engine_search_engine.go b/google-beta/services/discoveryengine/resource_discovery_engine_search_engine.go index 6dd8a3ba6d..88e19b5939 100644 --- 
a/google-beta/services/discoveryengine/resource_discovery_engine_search_engine.go +++ b/google-beta/services/discoveryengine/resource_discovery_engine_search_engine.go @@ -20,6 +20,7 @@ package discoveryengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -229,6 +230,7 @@ func resourceDiscoveryEngineSearchEngineCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -237,6 +239,7 @@ func resourceDiscoveryEngineSearchEngineCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating SearchEngine: %s", err) @@ -303,12 +306,14 @@ func resourceDiscoveryEngineSearchEngineRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DiscoveryEngineSearchEngine %q", d.Id())) @@ -386,6 +391,7 @@ func resourceDiscoveryEngineSearchEngineUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating SearchEngine %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -417,6 +423,7 @@ func resourceDiscoveryEngineSearchEngineUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -457,6 +464,8 @@ func resourceDiscoveryEngineSearchEngineDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting SearchEngine %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -466,6 +475,7 @@ func resourceDiscoveryEngineSearchEngineDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "SearchEngine") diff --git a/google-beta/services/dns/data_source_dns_key_test.go b/google-beta/services/dns/data_source_dns_key_test.go index 4242c3bf0a..830504db47 100644 --- a/google-beta/services/dns/data_source_dns_key_test.go +++ b/google-beta/services/dns/data_source_dns_key_test.go @@ -12,50 +12,23 @@ import ( ) func TestAccDataSourceDNSKeys_basic(t *testing.T) { - // TODO: https://github.com/hashicorp/terraform-provider-google/issues/14158 - acctest.SkipIfVcr(t) t.Parallel() dnsZoneName := fmt.Sprintf("tf-test-dnskey-test-%s", acctest.RandString(t, 10)) - var kskDigest1, kskDigest2, zskPubKey1, zskPubKey2, kskAlg1, kskAlg2 string - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - CheckDestroy: testAccCheckDNSManagedZoneDestroyProducerFramework(t), + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ { - ExternalProviders: map[string]resource.ExternalProvider{ - "google": { - VersionConstraint: "4.58.0", - Source: "hashicorp/google-beta", - }, - }, - Config: testAccDataSourceDNSKeysConfigWithOutputs(dnsZoneName, "on"), + Config: testAccDataSourceDNSKeysConfig(dnsZoneName, "on"), Check: resource.ComposeTestCheckFunc( testAccDataSourceDNSKeysDSRecordCheck("data.google_dns_keys.foo_dns_key"), resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "key_signing_keys.#", "1"), resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "zone_signing_keys.#", "1"), 
resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key_id", "key_signing_keys.#", "1"), resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key_id", "zone_signing_keys.#", "1"), - acctest.TestExtractResourceAttr("data.google_dns_keys.foo_dns_key", "key_signing_keys.0.digests.0.digest", &kskDigest1), - acctest.TestExtractResourceAttr("data.google_dns_keys.foo_dns_key_id", "zone_signing_keys.0.public_key", &zskPubKey1), - acctest.TestExtractResourceAttr("data.google_dns_keys.foo_dns_key_id", "key_signing_keys.0.algorithm", &kskAlg1), - ), - }, - { - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - Config: testAccDataSourceDNSKeysConfigWithOutputs(dnsZoneName, "on"), - Check: resource.ComposeTestCheckFunc( - testAccDataSourceDNSKeysDSRecordCheck("data.google_dns_keys.foo_dns_key"), - resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "key_signing_keys.#", "1"), - resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "zone_signing_keys.#", "1"), - acctest.TestExtractResourceAttr("data.google_dns_keys.foo_dns_key", "key_signing_keys.0.digests.0.digest", &kskDigest2), - acctest.TestExtractResourceAttr("data.google_dns_keys.foo_dns_key_id", "zone_signing_keys.0.public_key", &zskPubKey2), - acctest.TestExtractResourceAttr("data.google_dns_keys.foo_dns_key_id", "key_signing_keys.0.algorithm", &kskAlg2), - acctest.TestCheckAttributeValuesEqual(&kskDigest1, &kskDigest2), - acctest.TestCheckAttributeValuesEqual(&zskPubKey1, &zskPubKey2), - acctest.TestCheckAttributeValuesEqual(&kskAlg1, &kskAlg2), ), }, }, @@ -63,37 +36,22 @@ func TestAccDataSourceDNSKeys_basic(t *testing.T) { } func TestAccDataSourceDNSKeys_noDnsSec(t *testing.T) { - // TODO: https://github.com/hashicorp/terraform-provider-google/issues/14158 - acctest.SkipIfVcr(t) t.Parallel() dnsZoneName := fmt.Sprintf("tf-test-dnskey-test-%s", acctest.RandString(t, 10)) acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - 
CheckDestroy: testAccCheckDNSManagedZoneDestroyProducerFramework(t), + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ { - ExternalProviders: map[string]resource.ExternalProvider{ - "google": { - VersionConstraint: "4.58.0", - Source: "hashicorp/google-beta", - }, - }, Config: testAccDataSourceDNSKeysConfig(dnsZoneName, "off"), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "key_signing_keys.#", "0"), resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "zone_signing_keys.#", "0"), ), }, - { - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - Config: testAccDataSourceDNSKeysConfig(dnsZoneName, "off"), - Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "key_signing_keys.#", "0"), - resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "zone_signing_keys.#", "0"), - ), - }, }, }) } @@ -117,7 +75,7 @@ func testAccDataSourceDNSKeysConfig(dnsZoneName, dnssecStatus string) string { return fmt.Sprintf(` resource "google_dns_managed_zone" "foo" { name = "%s" - dns_name = "%s.hashicorptest.com." + dns_name = "dnssec.gcp.tfacc.hashicorptest.com." dnssec_config { state = "%s" @@ -132,27 +90,90 @@ data "google_dns_keys" "foo_dns_key" { data "google_dns_keys" "foo_dns_key_id" { managed_zone = google_dns_managed_zone.foo.id } -`, dnsZoneName, dnsZoneName, dnssecStatus) +`, dnsZoneName, dnssecStatus) +} + +// TestAccDataSourceDNSKeys_basic_AdcAuth is the same as TestAccDataSourceDNSKeys_basic but the test enforces that a developer runs this using +// ADCs, supplied via GOOGLE_APPLICATION_CREDENTIALS. If any other credentials ENVs are set the PreCheck will fail. +// Commented out until this test can run in TeamCity/CI. 
+// func TestAccDataSourceDNSKeys_basic_AdcAuth(t *testing.T) { +// acctest.SkipIfVcr(t) // Uses external providers +// t.Parallel() + +// creds := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS") // PreCheck assertion handles checking this is set + +// dnsZoneName := fmt.Sprintf("tf-test-dnskey-test-%s", acctest.RandString(t, 10)) + +// context := map[string]interface{}{ +// "credentials_path": creds, +// "dns_zone_name": dnsZoneName, +// "dnssec_status": "on", +// } + +// acctest.VcrTest(t, resource.TestCase{ +// PreCheck: func() { acctest.AccTestPreCheck_AdcCredentialsOnly(t) }, // Note different than default +// CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), +// Steps: []resource.TestStep{ +// // Check test fails with version of provider where data source is implemented with PF +// { +// ExternalProviders: map[string]resource.ExternalProvider{ +// "google": { +// VersionConstraint: "4.60.0", // Muxed provider with dns data sources migrated to PF +// Source: "hashicorp/google", +// }, +// }, +// ExpectError: regexp.MustCompile("Post \"https://oauth2.googleapis.com/token\": context canceled"), +// Config: testAccDataSourceDNSKeysConfig_AdcCredentials(context), +// Check: resource.ComposeTestCheckFunc( +// testAccDataSourceDNSKeysDSRecordCheck("data.google_dns_keys.foo_dns_key"), +// resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "key_signing_keys.#", "1"), +// resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "zone_signing_keys.#", "1"), +// resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key_id", "key_signing_keys.#", "1"), +// resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key_id", "zone_signing_keys.#", "1"), +// ), +// }, +// // Test should pass with more recent code +// { +// ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), +// Config: testAccDataSourceDNSKeysConfig_AdcCredentials(context), +// Check: resource.ComposeTestCheckFunc( +// 
testAccDataSourceDNSKeysDSRecordCheck("data.google_dns_keys.foo_dns_key"), +// resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "key_signing_keys.#", "1"), +// resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key", "zone_signing_keys.#", "1"), +// resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key_id", "key_signing_keys.#", "1"), +// resource.TestCheckResourceAttr("data.google_dns_keys.foo_dns_key_id", "zone_signing_keys.#", "1"), +// ), +// }, +// }, +// }) +// } + +func testAccDataSourceDNSKeysConfig_AdcCredentials(context map[string]interface{}) string { + return acctest.Nprintf(` + +// The auth problem isn't triggered unless provider block is +// present in the test config. + +provider "google" { + credentials = "%{credentials_path}" } -// This function extends the config returned from the `testAccDataSourceDNSKeysConfig` function -// to include output blocks that access the `key_signing_keys` and `zone_signing_keys` attributes. -// These are null if DNSSEC is not enabled. -func testAccDataSourceDNSKeysConfigWithOutputs(dnsZoneName, dnssecStatus string) string { +resource "google_dns_managed_zone" "foo" { + name = "%{dns_zone_name}" + dns_name = "dnssec.gcp.tfacc.hashicorptest.com." 
- config := testAccDataSourceDNSKeysConfig(dnsZoneName, dnssecStatus) - config = config + ` -# These outputs will cause an error if google_dns_managed_zone.foo.dnssec_config.state == "off" + dnssec_config { + state = "%{dnssec_status}" + non_existence = "nsec3" + } +} -output "test_access_google_dns_keys_key_signing_keys" { - description = "Testing that we can access a value in key_signing_keys ok as a computed block" - value = data.google_dns_keys.foo_dns_key_id.key_signing_keys[0].ds_record +data "google_dns_keys" "foo_dns_key" { + managed_zone = google_dns_managed_zone.foo.name } -output "test_access_google_dns_keys_zone_signing_keys" { - description = "Testing that we can access a value in zone_signing_keys ok as a computed block" - value = data.google_dns_keys.foo_dns_key_id.zone_signing_keys[0].id +data "google_dns_keys" "foo_dns_key_id" { + managed_zone = google_dns_managed_zone.foo.id } -` - return config +`, context) } diff --git a/google-beta/services/dns/data_source_dns_keys.go b/google-beta/services/dns/data_source_dns_keys.go index 7bb2e86381..88eb0a6595 100644 --- a/google-beta/services/dns/data_source_dns_keys.go +++ b/google-beta/services/dns/data_source_dns_keys.go @@ -3,391 +3,231 @@ package dns import ( - "context" "fmt" + "log" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" "google.golang.org/api/dns/v1" - - "github.com/hashicorp/terraform-plugin-framework/attr" - "github.com/hashicorp/terraform-plugin-framework/datasource" - "github.com/hashicorp/terraform-plugin-framework/datasource/schema" - "github.com/hashicorp/terraform-plugin-framework/diag" - "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-plugin-log/tflog" - - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwmodels" - 
"github.com/hashicorp/terraform-provider-google-beta/google-beta/fwresource" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwtransport" ) -// Ensure the implementation satisfies the expected interfaces -var ( - _ datasource.DataSource = &GoogleDnsKeysDataSource{} - _ datasource.DataSourceWithConfigure = &GoogleDnsKeysDataSource{} -) - -func NewGoogleDnsKeysDataSource() datasource.DataSource { - return &GoogleDnsKeysDataSource{} -} - -// GoogleDnsKeysDataSource defines the data source implementation -type GoogleDnsKeysDataSource struct { - client *dns.Service - project types.String -} - -type GoogleDnsKeysModel struct { - Id types.String `tfsdk:"id"` - ManagedZone types.String `tfsdk:"managed_zone"` - Project types.String `tfsdk:"project"` - KeySigningKeys types.List `tfsdk:"key_signing_keys"` - ZoneSigningKeys types.List `tfsdk:"zone_signing_keys"` -} - -type GoogleZoneSigningKey struct { - Algorithm types.String `tfsdk:"algorithm"` - CreationTime types.String `tfsdk:"creation_time"` - Description types.String `tfsdk:"description"` - Id types.String `tfsdk:"id"` - IsActive types.Bool `tfsdk:"is_active"` - KeyLength types.Int64 `tfsdk:"key_length"` - KeyTag types.Int64 `tfsdk:"key_tag"` - PublicKey types.String `tfsdk:"public_key"` - Digests types.List `tfsdk:"digests"` +// DNSSEC Algorithm Numbers: https://www.iana.org/assignments/dns-sec-alg-numbers/dns-sec-alg-numbers.xhtml +// The following are algorithms that are supported by Cloud DNS +var dnssecAlgoNums = map[string]int{ + "rsasha1": 5, + "rsasha256": 8, + "rsasha512": 10, + "ecdsap256sha256": 13, + "ecdsap384sha384": 14, } -type GoogleKeySigningKey struct { - Algorithm types.String `tfsdk:"algorithm"` - CreationTime types.String `tfsdk:"creation_time"` - Description types.String `tfsdk:"description"` - Id types.String `tfsdk:"id"` - IsActive types.Bool `tfsdk:"is_active"` - KeyLength types.Int64 `tfsdk:"key_length"` - KeyTag types.Int64 `tfsdk:"key_tag"` - PublicKey types.String 
`tfsdk:"public_key"` - Digests types.List `tfsdk:"digests"` - - DSRecord types.String `tfsdk:"ds_record"` +// DS RR Digest Types: https://www.iana.org/assignments/ds-rr-types/ds-rr-types.xhtml +// The following are digests that are supported by Cloud DNS +var dnssecDigestType = map[string]int{ + "sha1": 1, + "sha256": 2, + "sha384": 4, } -type GoogleZoneSigningKeyDigest struct { - Digest types.String `tfsdk:"digest"` - Type types.String `tfsdk:"type"` -} +func DataSourceDNSKeys() *schema.Resource { + return &schema.Resource{ + Read: dataSourceDNSKeysRead, -var ( - digestAttrTypes = map[string]attr.Type{ - "digest": types.StringType, - "type": types.StringType, + Schema: map[string]*schema.Schema{ + "managed_zone": { + Type: schema.TypeString, + Required: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "key_signing_keys": { + Type: schema.TypeList, + Computed: true, + Elem: kskResource(), + }, + "zone_signing_keys": { + Type: schema.TypeList, + Computed: true, + Elem: dnsKeyResource(), + }, + }, } -) - -func (d *GoogleDnsKeysDataSource) Metadata(ctx context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) { - resp.TypeName = req.ProviderTypeName + "_dns_keys" } -func (d *GoogleDnsKeysDataSource) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { - resp.Schema = schema.Schema{ - // This description is used by the documentation generator and the language server. 
- MarkdownDescription: "Get the DNSKEY and DS records of DNSSEC-signed managed zones", - - Attributes: map[string]schema.Attribute{ - "managed_zone": schema.StringAttribute{ - Description: "The Name of the zone.", - MarkdownDescription: "The Name of the zone.", - Required: true, +func dnsKeyResource() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "algorithm": { + Type: schema.TypeString, + Computed: true, + }, + "creation_time": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, }, - "project": schema.StringAttribute{ - Description: "The ID of the project for the Google Cloud.", - MarkdownDescription: "The ID of the project for the Google Cloud.", - Optional: true, - Computed: true, + "digests": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "digest": { + Type: schema.TypeString, + Optional: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, }, - "id": schema.StringAttribute{ - Description: "DNS keys identifier", - MarkdownDescription: "DNS keys identifier", - Computed: true, + "id": { + Type: schema.TypeString, + Computed: true, }, - // Issue with using computed blocks in the plugin framework with protocol 5 - // See: https://developer.hashicorp.com/terraform/plugin/framework/migrating/attributes-blocks/blocks-computed#framework - "zone_signing_keys": schema.ListAttribute{ - Description: "A list of Zone-signing key (ZSK) records.", - MarkdownDescription: "A list of Zone-signing key (ZSK) records.", - ElementType: dnsKeyObject(), - Computed: true, + "is_active": { + Type: schema.TypeBool, + Computed: true, }, - // Issue with using computed blocks in the plugin framework with protocol 5 - // See: https://developer.hashicorp.com/terraform/plugin/framework/migrating/attributes-blocks/blocks-computed#framework - "key_signing_keys": schema.ListAttribute{ - Description: "A 
list of Key-signing key (KSK) records.", - MarkdownDescription: "A list of Key-signing key (KSK) records.", - ElementType: kskObject(), - Computed: true, + "key_length": { + Type: schema.TypeInt, + Computed: true, + }, + "key_tag": { + Type: schema.TypeInt, + Computed: true, + }, + "public_key": { + Type: schema.TypeString, + Computed: true, }, }, } } -func (d *GoogleDnsKeysDataSource) Configure(ctx context.Context, req datasource.ConfigureRequest, resp *datasource.ConfigureResponse) { - // Prevent panic if the provider has not been configured. - if req.ProviderData == nil { - return - } +func kskResource() *schema.Resource { + resource := dnsKeyResource() - p, ok := req.ProviderData.(*fwtransport.FrameworkProviderConfig) - if !ok { - resp.Diagnostics.AddError( - "Unexpected Data Source Configure Type", - fmt.Sprintf("Expected *fwtransport.FrameworkProviderConfig, got: %T. Please report this issue to the provider developers.", req.ProviderData), - ) - return + resource.Schema["ds_record"] = &schema.Schema{ + Type: schema.TypeString, + Computed: true, } - d.client = p.NewDnsClient(p.UserAgent, &resp.Diagnostics) - d.project = p.Project + return resource } -func (d *GoogleDnsKeysDataSource) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { - var data GoogleDnsKeysModel - var metaData *fwmodels.ProviderMetaModel - var diags diag.Diagnostics - - // Read Provider meta into the meta model - resp.Diagnostics.Append(req.ProviderMeta.Get(ctx, &metaData)...) - if resp.Diagnostics.HasError() { - return - } - - d.client.UserAgent = fwtransport.GenerateFrameworkUserAgentString(metaData, d.client.UserAgent) - - // Read Terraform configuration data into the model - resp.Diagnostics.Append(req.Config.Get(ctx, &data)...) 
- if resp.Diagnostics.HasError() { - return - } - - fv := fwresource.ParseProjectFieldValueFramework("managedZones", data.ManagedZone.ValueString(), "project", data.Project, d.project, false, &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return - } - - data.Project = types.StringValue(fv.Project) - data.ManagedZone = types.StringValue(fv.Name) - - data.Id = types.StringValue(fmt.Sprintf("projects/%s/managedZones/%s", data.Project.ValueString(), data.ManagedZone.ValueString())) - - tflog.Debug(ctx, fmt.Sprintf("fetching DNS keys from managed zone %s", data.ManagedZone.ValueString())) - - clientResp, err := d.client.DnsKeys.List(data.Project.ValueString(), data.ManagedZone.ValueString()).Do() - if err != nil { - resp.Diagnostics.AddError(fmt.Sprintf("Error when reading or editing dataSourceDnsKeys"), err.Error()) - // Save data into Terraform state - resp.Diagnostics.Append(resp.State.Set(ctx, &data)...) - return - } - - tflog.Trace(ctx, "read dns keys data source") - - zoneSigningKeys, keySigningKeys := flattenSigningKeys(ctx, clientResp.DnsKeys, &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return - } - - zskObjType := types.ObjectType{}.WithAttributeTypes(getDnsKeyAttrs("zoneSigning")) - data.ZoneSigningKeys, diags = types.ListValueFrom(ctx, zskObjType, zoneSigningKeys) - resp.Diagnostics.Append(diags...) - if resp.Diagnostics.HasError() { - return - } - - kskObjType := types.ObjectType{}.WithAttributeTypes(getDnsKeyAttrs("keySigning")) - data.KeySigningKeys, diags = types.ListValueFrom(ctx, kskObjType, keySigningKeys) - resp.Diagnostics.Append(diags...) - if resp.Diagnostics.HasError() { - return +func generateDSRecord(signingKey *dns.DnsKey) (string, error) { + algoNum, found := dnssecAlgoNums[signingKey.Algorithm] + if !found { + return "", fmt.Errorf("DNSSEC Algorithm number for %s not found", signingKey.Algorithm) } - // Save data into Terraform state - resp.Diagnostics.Append(resp.State.Set(ctx, &data)...) 
-} - -// dnsKeyObject is a helper function for the zone_signing_keys schema and -// is also used by key_signing_keys schema (called in kskObject defined below) -func dnsKeyObject() types.ObjectType { - // See comments in Schema function - // Also: https://github.com/hashicorp/terraform-plugin-framework/issues/214#issuecomment-1194666110 - return types.ObjectType{ - AttrTypes: map[string]attr.Type{ - "algorithm": types.StringType, - "creation_time": types.StringType, - "description": types.StringType, - "id": types.StringType, - "is_active": types.BoolType, - "key_length": types.Int64Type, - "key_tag": types.Int64Type, - "public_key": types.StringType, - "digests": types.ListType{ - ElemType: types.ObjectType{ - AttrTypes: map[string]attr.Type{ - "digest": types.StringType, - "type": types.StringType, - }, - }, - }, - }, + digestType, found := dnssecDigestType[signingKey.Digests[0].Type] + if !found { + return "", fmt.Errorf("DNSSEC Digest type for %s not found", signingKey.Digests[0].Type) } -} - -// kskObject is a helper function for the key_signing_keys schema -func kskObject() types.ObjectType { - nbo := dnsKeyObject() - nbo.AttrTypes["ds_record"] = types.StringType - - return nbo + return fmt.Sprintf("%d %d %d %s", + signingKey.KeyTag, + algoNum, + digestType, + signingKey.Digests[0].Digest), nil } -func flattenSigningKeys(ctx context.Context, signingKeys []*dns.DnsKey, diags *diag.Diagnostics) ([]types.Object, []types.Object) { - var zoneSigningKeys []types.Object - var keySigningKeys []types.Object - var d diag.Diagnostics +func flattenSigningKeys(signingKeys []*dns.DnsKey, keyType string) []map[string]interface{} { + var keys []map[string]interface{} for _, signingKey := range signingKeys { - if signingKey != nil { - var digests []types.Object - for _, dig := range signingKey.Digests { - digest := GoogleZoneSigningKeyDigest{ - Digest: types.StringValue(dig.Digest), - Type: types.StringValue(dig.Type), - } - obj, d := types.ObjectValueFrom(ctx, 
digestAttrTypes, digest) - diags.Append(d...) - if diags.HasError() { - return zoneSigningKeys, keySigningKeys - } - - digests = append(digests, obj) + if signingKey != nil && signingKey.Type == keyType { + data := map[string]interface{}{ + "algorithm": signingKey.Algorithm, + "creation_time": signingKey.CreationTime, + "description": signingKey.Description, + "digests": flattenDigests(signingKey.Digests), + "id": signingKey.Id, + "is_active": signingKey.IsActive, + "key_length": signingKey.KeyLength, + "key_tag": signingKey.KeyTag, + "public_key": signingKey.PublicKey, } if signingKey.Type == "keySigning" && len(signingKey.Digests) > 0 { - ksk := GoogleKeySigningKey{ - Algorithm: types.StringValue(signingKey.Algorithm), - CreationTime: types.StringValue(signingKey.CreationTime), - Description: types.StringValue(signingKey.Description), - Id: types.StringValue(signingKey.Id), - IsActive: types.BoolValue(signingKey.IsActive), - KeyLength: types.Int64Value(signingKey.KeyLength), - KeyTag: types.Int64Value(signingKey.KeyTag), - PublicKey: types.StringValue(signingKey.PublicKey), - } - - objType := types.ObjectType{}.WithAttributeTypes(digestAttrTypes) - ksk.Digests, d = types.ListValueFrom(ctx, objType, digests) - diags.Append(d...) - if diags.HasError() { - return zoneSigningKeys, keySigningKeys - } - dsRecord, err := generateDSRecord(signingKey) - if err != nil { - diags.AddError("error generating ds record", err.Error()) - return zoneSigningKeys, keySigningKeys + if err == nil { + data["ds_record"] = dsRecord } + } - ksk.DSRecord = types.StringValue(dsRecord) + keys = append(keys, data) + } + } - obj, d := types.ObjectValueFrom(ctx, getDnsKeyAttrs(signingKey.Type), ksk) - diags.Append(d...) 
- if diags.HasError() { - return zoneSigningKeys, keySigningKeys - } - keySigningKeys = append(keySigningKeys, obj) - } else { - zsk := GoogleZoneSigningKey{ - Algorithm: types.StringValue(signingKey.Algorithm), - CreationTime: types.StringValue(signingKey.CreationTime), - Description: types.StringValue(signingKey.Description), - Id: types.StringValue(signingKey.Id), - IsActive: types.BoolValue(signingKey.IsActive), - KeyLength: types.Int64Value(signingKey.KeyLength), - KeyTag: types.Int64Value(signingKey.KeyTag), - PublicKey: types.StringValue(signingKey.PublicKey), - } + return keys +} - objType := types.ObjectType{}.WithAttributeTypes(digestAttrTypes) - zsk.Digests, d = types.ListValueFrom(ctx, objType, digests) - diags.Append(d...) - if diags.HasError() { - return zoneSigningKeys, keySigningKeys - } +func flattenDigests(dnsKeyDigests []*dns.DnsKeyDigest) []map[string]interface{} { + var digests []map[string]interface{} - obj, d := types.ObjectValueFrom(ctx, getDnsKeyAttrs("zoneSigning"), zsk) - diags.Append(d...) 
- if diags.HasError() { - return zoneSigningKeys, keySigningKeys - } - zoneSigningKeys = append(zoneSigningKeys, obj) + for _, dnsKeyDigest := range dnsKeyDigests { + if dnsKeyDigest != nil { + data := map[string]interface{}{ + "digest": dnsKeyDigest.Digest, + "type": dnsKeyDigest.Type, } + digests = append(digests, data) } } - return zoneSigningKeys, keySigningKeys + return digests } -// DNSSEC Algorithm Numbers: https://www.iana.org/assignments/dns-sec-alg-numbers/dns-sec-alg-numbers.xhtml -// The following are algorithms that are supported by Cloud DNS -var dnssecAlgoNums = map[string]int{ - "rsasha1": 5, - "rsasha256": 8, - "rsasha512": 10, - "ecdsap256sha256": 13, - "ecdsap384sha384": 14, -} - -// DS RR Digest Types: https://www.iana.org/assignments/ds-rr-types/ds-rr-types.xhtml -// The following are digests that are supported by Cloud DNS -var dnssecDigestType = map[string]int{ - "sha1": 1, - "sha256": 2, - "sha384": 4, -} +func dataSourceDNSKeysRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } -// generateDSRecord will generate the ds_record on key signing keys -func generateDSRecord(signingKey *dns.DnsKey) (string, error) { - algoNum, found := dnssecAlgoNums[signingKey.Algorithm] - if !found { - return "", fmt.Errorf("DNSSEC Algorithm number for %s not found", signingKey.Algorithm) + fv, err := tpgresource.ParseProjectFieldValue("managedZones", d.Get("managed_zone").(string), "project", d, config, false) + if err != nil { + return err } + project := fv.Project + managedZone := fv.Name - digestType, found := dnssecDigestType[signingKey.Digests[0].Type] - if !found { - return "", fmt.Errorf("DNSSEC Digest type for %s not found", signingKey.Digests[0].Type) + if err := d.Set("project", project); err != nil { + return fmt.Errorf("Error setting project: %s", err) } + 
d.SetId(fmt.Sprintf("projects/%s/managedZones/%s", project, managedZone)) - return fmt.Sprintf("%d %d %d %s", - signingKey.KeyTag, - algoNum, - digestType, - signingKey.Digests[0].Digest), nil -} + log.Printf("[DEBUG] Fetching DNS keys from managed zone %s", managedZone) -func getDnsKeyAttrs(keyType string) map[string]attr.Type { - dnsKeyAttrs := map[string]attr.Type{ - "algorithm": types.StringType, - "creation_time": types.StringType, - "description": types.StringType, - "id": types.StringType, - "is_active": types.BoolType, - "key_length": types.Int64Type, - "key_tag": types.Int64Type, - "public_key": types.StringType, - "digests": types.ListType{}.WithElementType(types.ObjectType{}.WithAttributeTypes(digestAttrTypes)), + response, err := config.NewDnsClient(userAgent).DnsKeys.List(project, managedZone).Do() + if err != nil && !transport_tpg.IsGoogleApiErrorWithCode(err, 404) { + return fmt.Errorf("error retrieving DNS keys: %s", err) + } else if transport_tpg.IsGoogleApiErrorWithCode(err, 404) { + return nil } - if keyType == "keySigning" { - dnsKeyAttrs["ds_record"] = types.StringType + log.Printf("[DEBUG] Fetched DNS keys from managed zone %s", managedZone) + + if err := d.Set("key_signing_keys", flattenSigningKeys(response.DnsKeys, "keySigning")); err != nil { + return fmt.Errorf("Error setting key_signing_keys: %s", err) + } + if err := d.Set("zone_signing_keys", flattenSigningKeys(response.DnsKeys, "zoneSigning")); err != nil { + return fmt.Errorf("Error setting zone_signing_keys: %s", err) } - return dnsKeyAttrs + return nil } diff --git a/google-beta/services/dns/data_source_dns_managed_zone.go b/google-beta/services/dns/data_source_dns_managed_zone.go index f06839fe61..01936e9420 100644 --- a/google-beta/services/dns/data_source_dns_managed_zone.go +++ b/google-beta/services/dns/data_source_dns_managed_zone.go @@ -3,198 +3,107 @@ package dns import ( - "context" "fmt" - "google.golang.org/api/dns/v1" - - 
"github.com/hashicorp/terraform-plugin-framework/attr" - "github.com/hashicorp/terraform-plugin-framework/datasource" - "github.com/hashicorp/terraform-plugin-framework/datasource/schema" - "github.com/hashicorp/terraform-plugin-framework/diag" - "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-plugin-log/tflog" - - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwmodels" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwresource" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwtransport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" ) -// Ensure the implementation satisfies the expected interfaces -var ( - _ datasource.DataSource = &GoogleDnsManagedZoneDataSource{} - _ datasource.DataSourceWithConfigure = &GoogleDnsManagedZoneDataSource{} -) - -func NewGoogleDnsManagedZoneDataSource() datasource.DataSource { - return &GoogleDnsManagedZoneDataSource{} -} - -// GoogleDnsManagedZoneDataSource defines the data source implementation -type GoogleDnsManagedZoneDataSource struct { - client *dns.Service - project types.String -} +func DataSourceDnsManagedZone() *schema.Resource { + return &schema.Resource{ + Read: dataSourceDnsManagedZoneRead, -type GoogleDnsManagedZoneModel struct { - Id types.String `tfsdk:"id"` - DnsName types.String `tfsdk:"dns_name"` - Name types.String `tfsdk:"name"` - Description types.String `tfsdk:"description"` - ManagedZoneId types.Int64 `tfsdk:"managed_zone_id"` - NameServers types.List `tfsdk:"name_servers"` - Visibility types.String `tfsdk:"visibility"` - Project types.String `tfsdk:"project"` -} - -func (d *GoogleDnsManagedZoneDataSource) Metadata(ctx context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) { - resp.TypeName = 
req.ProviderTypeName + "_dns_managed_zone" -} - -func (d *GoogleDnsManagedZoneDataSource) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { - resp.Schema = schema.Schema{ - // This description is used by the documentation generator and the language server. - MarkdownDescription: "Provides access to a zone's attributes within Google Cloud DNS", - - Attributes: map[string]schema.Attribute{ - "name": schema.StringAttribute{ - Description: "A unique name for the resource.", - MarkdownDescription: "A unique name for the resource.", - Required: true, + Schema: map[string]*schema.Schema{ + "dns_name": { + Type: schema.TypeString, + Computed: true, }, - // Google Cloud DNS ManagedZone resources do not have a SelfLink attribute. - "project": schema.StringAttribute{ - Description: "The ID of the project for the Google Cloud.", - MarkdownDescription: "The ID of the project for the Google Cloud.", - Optional: true, + "name": { + Type: schema.TypeString, + Required: true, }, - "dns_name": schema.StringAttribute{ - Description: "The fully qualified DNS name of this zone.", - MarkdownDescription: "The fully qualified DNS name of this zone.", - Computed: true, + "description": { + Type: schema.TypeString, + Computed: true, }, - "description": schema.StringAttribute{ - Description: "A textual description field.", - MarkdownDescription: "A textual description field.", - Computed: true, + "managed_zone_id": { + Type: schema.TypeInt, + Computed: true, }, - "managed_zone_id": schema.Int64Attribute{ - Description: "Unique identifier for the resource; defined by the server.", - MarkdownDescription: "Unique identifier for the resource; defined by the server.", - Computed: true, + "name_servers": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, }, - "name_servers": schema.ListAttribute{ - Description: "The list of nameservers that will be authoritative for this " + - "domain. 
Use NS records to redirect from your DNS provider to these names, " + - "thus making Google Cloud DNS authoritative for this zone.", - MarkdownDescription: "The list of nameservers that will be authoritative for this " + - "domain. Use NS records to redirect from your DNS provider to these names, " + - "thus making Google Cloud DNS authoritative for this zone.", - Computed: true, - ElementType: types.StringType, + "visibility": { + Type: schema.TypeString, + Computed: true, }, - "visibility": schema.StringAttribute{ - Description: "The zone's visibility: public zones are exposed to the Internet, " + - "while private zones are visible only to Virtual Private Cloud resources.", - MarkdownDescription: "The zone's visibility: public zones are exposed to the Internet, " + - "while private zones are visible only to Virtual Private Cloud resources.", - Computed: true, + // Google Cloud DNS ManagedZone resources do not have a SelfLink attribute. + "project": { + Type: schema.TypeString, + Optional: true, }, - "id": schema.StringAttribute{ - Description: "DNS managed zone identifier", - MarkdownDescription: "DNS managed zone identifier", - Computed: true, + "id": { + Type: schema.TypeString, + Computed: true, }, }, } } -func (d *GoogleDnsManagedZoneDataSource) Configure(ctx context.Context, req datasource.ConfigureRequest, resp *datasource.ConfigureResponse) { - // Prevent panic if the provider has not been configured. - if req.ProviderData == nil { - return +func dataSourceDnsManagedZoneRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err } - p, ok := req.ProviderData.(*fwtransport.FrameworkProviderConfig) - if !ok { - resp.Diagnostics.AddError( - "Unexpected Data Source Configure Type", - fmt.Sprintf("Expected *fwtransport.FrameworkProviderConfig, got: %T. 
Please report this issue to the provider developers.", req.ProviderData), - ) - return + project, err := tpgresource.GetProject(d, config) + if err != nil { + return err } - d.client = p.NewDnsClient(p.UserAgent, &resp.Diagnostics) - d.project = p.Project -} + name := d.Get("name").(string) + d.SetId(fmt.Sprintf("projects/%s/managedZones/%s", project, name)) -func (d *GoogleDnsManagedZoneDataSource) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { - var data GoogleDnsManagedZoneModel - var metaData *fwmodels.ProviderMetaModel - var diags diag.Diagnostics - - // Read Provider meta into the meta model - resp.Diagnostics.Append(req.ProviderMeta.Get(ctx, &metaData)...) - if resp.Diagnostics.HasError() { - return + zone, err := config.NewDnsClient(userAgent).ManagedZones.Get( + project, name).Do() + if err != nil { + return err } - d.client.UserAgent = fwtransport.GenerateFrameworkUserAgentString(metaData, d.client.UserAgent) - - // Read Terraform configuration data into the model - resp.Diagnostics.Append(req.Config.Get(ctx, &data)...) 
- if resp.Diagnostics.HasError() { - return + if err := d.Set("name_servers", zone.NameServers); err != nil { + return fmt.Errorf("Error setting name_servers: %s", err) } - - data.Project = fwresource.GetProjectFramework(data.Project, d.project, &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return + if err := d.Set("name", zone.Name); err != nil { + return fmt.Errorf("Error setting name: %s", err) } - - data.Id = types.StringValue(fmt.Sprintf("projects/%s/managedZones/%s", data.Project.ValueString(), data.Name.ValueString())) - clientResp, err := d.client.ManagedZones.Get(data.Project.ValueString(), data.Name.ValueString()).Do() - if err != nil { - fwtransport.HandleDatasourceNotFoundError(ctx, err, &resp.State, fmt.Sprintf("dataSourceDnsManagedZone %q", data.Name.ValueString()), &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return - } + if err := d.Set("dns_name", zone.DnsName); err != nil { + return fmt.Errorf("Error setting dns_name: %s", err) } - - tflog.Trace(ctx, "read dns managed zone data source") - - data.DnsName = types.StringValue(clientResp.DnsName) - data.Description = types.StringValue(clientResp.Description) - data.ManagedZoneId = types.Int64Value(int64(clientResp.Id)) - data.Visibility = types.StringValue(clientResp.Visibility) - data.NameServers, diags = types.ListValueFrom(ctx, types.StringType, clientResp.NameServers) - resp.Diagnostics.Append(diags...) - if resp.Diagnostics.HasError() { - return + if err := d.Set("managed_zone_id", zone.Id); err != nil { + return fmt.Errorf("Error setting managed_zone_id: %s", err) } - - // Save data into Terraform state - resp.Diagnostics.Append(resp.State.Set(ctx, &data)...) 
-} - -func getDnsManagedZoneAttrs() map[string]attr.Type { - dnsManagedZoneAttrs := map[string]attr.Type{ - "name": types.StringType, - "project": types.StringType, - "dns_name": types.StringType, - "description": types.StringType, - "managed_zone_id": types.Int64Type, - "name_servers": types.ListType{}.WithElementType(types.StringType), - "visibility": types.StringType, - "id": types.StringType, + if err := d.Set("description", zone.Description); err != nil { + return fmt.Errorf("Error setting description: %s", err) + } + if err := d.Set("visibility", zone.Visibility); err != nil { + return fmt.Errorf("Error setting visibility: %s", err) + } + if err := d.Set("project", project); err != nil { + return fmt.Errorf("Error setting project: %s", err) } - return dnsManagedZoneAttrs + return nil } diff --git a/google-beta/services/dns/data_source_dns_managed_zone_test.go b/google-beta/services/dns/data_source_dns_managed_zone_test.go index 56809cee19..aae485b5f9 100644 --- a/google-beta/services/dns/data_source_dns_managed_zone_test.go +++ b/google-beta/services/dns/data_source_dns_managed_zone_test.go @@ -4,32 +4,21 @@ package dns_test import ( "fmt" - "strings" "testing" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" - "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwresource" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwtransport" ) func TestAccDataSourceDnsManagedZone_basic(t *testing.T) { - // TODO: https://github.com/hashicorp/terraform-provider-google/issues/14158 - acctest.SkipIfVcr(t) t.Parallel() acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - CheckDestroy: testAccCheckDNSManagedZoneDestroyProducerFramework(t), + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + 
CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ { - ExternalProviders: map[string]resource.ExternalProvider{ - "google": { - VersionConstraint: "4.58.0", - Source: "hashicorp/google-beta", - }, - }, Config: testAccDataSourceDnsManagedZone_basic(acctest.RandString(t, 10)), Check: acctest.CheckDataSourceStateMatchesResourceStateWithIgnores( "data.google_dns_managed_zone.qa", @@ -51,29 +40,6 @@ func TestAccDataSourceDnsManagedZone_basic(t *testing.T) { }, ), }, - { - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - Config: testAccDataSourceDnsManagedZone_basic(acctest.RandString(t, 10)), - Check: acctest.CheckDataSourceStateMatchesResourceStateWithIgnores( - "data.google_dns_managed_zone.qa", - "google_dns_managed_zone.foo", - map[string]struct{}{ - "dnssec_config.#": {}, - "private_visibility_config.#": {}, - "peering_config.#": {}, - "forwarding_config.#": {}, - "force_destroy": {}, - "labels.#": {}, - "terraform_labels.%": {}, - "effective_labels.%": {}, - "creation_time": {}, - "cloud_logging_config.#": {}, - "cloud_logging_config.0.%": {}, - "cloud_logging_config.0.enable_logging": {}, - "reverse_lookup": {}, - }, - ), - }, }, }) } @@ -81,48 +47,13 @@ func TestAccDataSourceDnsManagedZone_basic(t *testing.T) { func testAccDataSourceDnsManagedZone_basic(managedZoneName string) string { return fmt.Sprintf(` resource "google_dns_managed_zone" "foo" { - name = "tf-test-zone-%s" - dns_name = "tf-test-zone-%s.hashicorptest.com." - description = "tf test DNS zone" + name = "tf-test-qa-zone-%s" + dns_name = "qa.gcp.tfacc.hashicorptest.com." 
+ description = "QA DNS zone" } data "google_dns_managed_zone" "qa" { name = google_dns_managed_zone.foo.name } -`, managedZoneName, managedZoneName) -} - -// testAccCheckDNSManagedZoneDestroyProducerFramework is the framework version of the generated testAccCheckDNSManagedZoneDestroyProducer -// when we automate this, we'll use the automated version and can get rid of this -func testAccCheckDNSManagedZoneDestroyProducerFramework(t *testing.T) func(s *terraform.State) error { - return func(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_dns_managed_zone" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - p := acctest.GetFwTestProvider(t) - - url, err := fwresource.ReplaceVarsForFrameworkTest(&p.FrameworkProvider.FrameworkProviderConfig, rs, "{{DNSBasePath}}projects/{{project}}/managedZones/{{name}}") - if err != nil { - return err - } - - billingProject := "" - - if !p.BillingProject.IsNull() && p.BillingProject.String() != "" { - billingProject = p.BillingProject.String() - } - - _, diags := fwtransport.SendFrameworkRequest(&p.FrameworkProvider.FrameworkProviderConfig, "GET", billingProject, url, p.UserAgent, nil) - if !diags.HasError() { - return fmt.Errorf("DNSManagedZone still exists at %s", url) - } - } - - return nil - } +`, managedZoneName) } diff --git a/google-beta/services/dns/data_source_dns_managed_zones.go b/google-beta/services/dns/data_source_dns_managed_zones.go index c6047e5b76..c785ab0198 100644 --- a/google-beta/services/dns/data_source_dns_managed_zones.go +++ b/google-beta/services/dns/data_source_dns_managed_zones.go @@ -3,242 +3,95 @@ package dns import ( - "context" "fmt" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" "google.golang.org/api/dns/v1" - - 
"github.com/hashicorp/terraform-plugin-framework/datasource" - "github.com/hashicorp/terraform-plugin-framework/datasource/schema" - "github.com/hashicorp/terraform-plugin-framework/diag" - "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-plugin-log/tflog" - - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwmodels" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwresource" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwtransport" ) -// Ensure the implementation satisfies the expected interfaces -var ( - _ datasource.DataSource = &GoogleDnsManagedZonesDataSource{} - _ datasource.DataSourceWithConfigure = &GoogleDnsManagedZonesDataSource{} -) - -func NewGoogleDnsManagedZonesDataSource() datasource.DataSource { - return &GoogleDnsManagedZonesDataSource{} -} - -// GoogleDnsManagedZonesDataSource defines the data source implementation -type GoogleDnsManagedZonesDataSource struct { - client *dns.Service - project types.String -} +func DataSourceDnsManagedZones() *schema.Resource { -type GoogleDnsManagedZonesModel struct { - Id types.String `tfsdk:"id"` - Project types.String `tfsdk:"project"` - ManagedZones types.List `tfsdk:"managed_zones"` -} - -func (d *GoogleDnsManagedZonesDataSource) Metadata(ctx context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) { - resp.TypeName = req.ProviderTypeName + "_dns_managed_zones" -} + mzSchema := DataSourceDnsManagedZone().Schema + tpgresource.AddOptionalFieldsToSchema(mzSchema, "name") -func (d *GoogleDnsManagedZonesDataSource) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { + return &schema.Resource{ + Read: dataSourceDnsManagedZonesRead, - resp.Schema = schema.Schema{ - // This description is used by the documentation generator and the language server. 
- MarkdownDescription: "Provides access to all zones for a given project within Google Cloud DNS", - - Attributes: map[string]schema.Attribute{ - - "project": schema.StringAttribute{ - Description: "The ID of the project for the Google Cloud.", - MarkdownDescription: "The ID of the project for the Google Cloud.", - Optional: true, + Schema: map[string]*schema.Schema{ + "id": { + Type: schema.TypeString, + Computed: true, }, - // Id field is added to match plugin-framework migrated google_dns_managed_zone data source - // Whilst ID fields are required in the SDK, they're not needed in the plugin-framework. - "id": schema.StringAttribute{ - Description: "foobar", - MarkdownDescription: "foobar", - Computed: true, + "managed_zones": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: mzSchema, + }, }, - }, - Blocks: map[string]schema.Block{ - "managed_zones": schema.ListNestedBlock{ - Description: "The list of managed zones in the given project.", - MarkdownDescription: "The list of managed zones in the given project.", - NestedObject: schema.NestedBlockObject{ - Attributes: map[string]schema.Attribute{ - "name": schema.StringAttribute{ - Description: "A unique name for the resource.", - MarkdownDescription: "A unique name for the resource.", - Computed: true, - }, - - // Google Cloud DNS ManagedZone resources do not have a SelfLink attribute. 
- "project": schema.StringAttribute{ - Description: "The ID of the project for the Google Cloud.", - MarkdownDescription: "The ID of the project for the Google Cloud.", - Computed: true, - }, - - "dns_name": schema.StringAttribute{ - Description: "The fully qualified DNS name of this zone.", - MarkdownDescription: "The fully qualified DNS name of this zone.", - Computed: true, - }, - - "description": schema.StringAttribute{ - Description: "A textual description field.", - MarkdownDescription: "A textual description field.", - Computed: true, - }, - - "managed_zone_id": schema.Int64Attribute{ - Description: "Unique identifier for the resource; defined by the server.", - MarkdownDescription: "Unique identifier for the resource; defined by the server.", - Computed: true, - }, - - "name_servers": schema.ListAttribute{ - Description: "The list of nameservers that will be authoritative for this " + - "domain. Use NS records to redirect from your DNS provider to these names, " + - "thus making Google Cloud DNS authoritative for this zone.", - MarkdownDescription: "The list of nameservers that will be authoritative for this " + - "domain. Use NS records to redirect from your DNS provider to these names, " + - "thus making Google Cloud DNS authoritative for this zone.", - Computed: true, - ElementType: types.StringType, - }, - - "visibility": schema.StringAttribute{ - Description: "The zone's visibility: public zones are exposed to the Internet, " + - "while private zones are visible only to Virtual Private Cloud resources.", - MarkdownDescription: "The zone's visibility: public zones are exposed to the Internet, " + - "while private zones are visible only to Virtual Private Cloud resources.", - Computed: true, - }, - - "id": schema.StringAttribute{ - Description: "DNS managed zone identifier", - MarkdownDescription: "DNS managed zone identifier", - Computed: true, - }, - }, - }, + // Google Cloud DNS ManagedZone resources do not have a SelfLink attribute. 
+ "project": { + Type: schema.TypeString, + Optional: true, }, }, } } -func (d *GoogleDnsManagedZonesDataSource) Configure(ctx context.Context, req datasource.ConfigureRequest, resp *datasource.ConfigureResponse) { - // Prevent panic if the provider has not been configured. - if req.ProviderData == nil { - return - } - - p, ok := req.ProviderData.(*fwtransport.FrameworkProviderConfig) - if !ok { - resp.Diagnostics.AddError( - "Unexpected Data Source Configure Type", - fmt.Sprintf("Expected *fwtransport.FrameworkProviderConfig, got: %T. Please report this issue to the provider developers.", req.ProviderData), - ) - return - } - - d.client = p.NewDnsClient(p.UserAgent, &resp.Diagnostics) - d.project = p.Project -} - -func (d *GoogleDnsManagedZonesDataSource) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { - var data GoogleDnsManagedZonesModel - var metaData *fwmodels.ProviderMetaModel - var diags diag.Diagnostics - - // Read Provider meta into the meta model - resp.Diagnostics.Append(req.ProviderMeta.Get(ctx, &metaData)...) - if resp.Diagnostics.HasError() { - return - } - - d.client.UserAgent = fwtransport.GenerateFrameworkUserAgentString(metaData, d.client.UserAgent) - - // Read Terraform configuration data into the model - resp.Diagnostics.Append(req.Config.Get(ctx, &data)...) 
- if resp.Diagnostics.HasError() { - return +func dataSourceDnsManagedZonesRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err } - data.Project = fwresource.GetProjectFramework(data.Project, d.project, &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return + project, err := tpgresource.GetProject(d, config) + if err != nil { + return err } - data.Id = types.StringValue(fmt.Sprintf("projects/%s/managedZones", data.Project.ValueString())) + d.SetId(fmt.Sprintf("projects/%s/managedZones", project)) - tflog.Debug(ctx, fmt.Sprintf("fetching managed zones from project %s", data.Project.ValueString())) - - clientResp, err := d.client.ManagedZones.List(data.Project.ValueString()).Do() + zones, err := config.NewDnsClient(userAgent).ManagedZones.List(project).Do() if err != nil { - fwtransport.HandleDatasourceNotFoundError(ctx, err, &resp.State, fmt.Sprintf("dataSourceDnsManagedZones %q", data.Project.ValueString()), &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return - } + return err } - tflog.Trace(ctx, "read dns managed zones data source") - - zones, di := flattenManagedZones(ctx, clientResp.ManagedZones, data.Project.ValueString()) - diags.Append(di...) - - if len(zones) > 0 { - mzObjType := types.ObjectType{}.WithAttributeTypes(getDnsManagedZoneAttrs()) - data.ManagedZones, di = types.ListValueFrom(ctx, mzObjType, zones) - diags.Append(di...) + if err := d.Set("managed_zones", flattenZones(zones.ManagedZones, project)); err != nil { + return fmt.Errorf("error setting managed_zones: %s", err) } - - resp.Diagnostics.Append(diags...) - if resp.Diagnostics.HasError() { - return + if err := d.Set("project", project); err != nil { + return fmt.Errorf("error setting project: %s", err) } - // Save data into Terraform state - resp.Diagnostics.Append(resp.State.Set(ctx, &data)...) 
+ return nil } -func flattenManagedZones(ctx context.Context, managedZones []*dns.ManagedZone, project string) ([]types.Object, diag.Diagnostics) { - var zones []types.Object - var diags diag.Diagnostics - - for _, zone := range managedZones { - - data := GoogleDnsManagedZoneModel{ - // Id is not an API value but we assemble it here to match the google_dns_managed_zone data source - // and fulfil the GoogleDnsManagedZoneModel's fields. - // IDs are not required in the plugin-framework (vs the SDK) - Id: types.StringValue(fmt.Sprintf("projects/%s/managedZones/%s", project, zone.Name)), - Project: types.StringValue(project), - - DnsName: types.StringValue(zone.DnsName), - Name: types.StringValue(zone.Name), - Description: types.StringValue(zone.Description), - ManagedZoneId: types.Int64Value(int64(zone.Id)), - Visibility: types.StringValue(zone.Visibility), +// flattenZones flattens the list of managed zones into a format that can be assigned to the managed_zones field +// on the plural datasource. This includes setting the project value for each item, as this isn't returned by the API. +func flattenZones(items []*dns.ManagedZone, project string) []map[string]interface{} { + var zones []map[string]interface{} + + for _, item := range items { + if item != nil { + data := map[string]interface{}{ + "id": fmt.Sprintf("projects/%s/managedZones/%s", project, item.Name), // Matches construction in singular data source + "dns_name": item.DnsName, + "name": item.Name, + "managed_zone_id": item.Id, + "description": item.Description, + "visibility": item.Visibility, + "name_servers": item.NameServers, + "project": project, + } + + zones = append(zones, data) } - - data.NameServers, diags = types.ListValueFrom(ctx, types.StringType, zone.NameServers) - diags.Append(diags...) - - obj, d := types.ObjectValueFrom(ctx, getDnsManagedZoneAttrs(), data) - diags.Append(d...)
- - zones = append(zones, obj) } - return zones, diags + return zones } diff --git a/google-beta/services/dns/data_source_dns_managed_zones_test.go b/google-beta/services/dns/data_source_dns_managed_zones_test.go index 306c7ca6e0..0147c761d5 100644 --- a/google-beta/services/dns/data_source_dns_managed_zones_test.go +++ b/google-beta/services/dns/data_source_dns_managed_zones_test.go @@ -14,8 +14,6 @@ import ( func TestAccDataSourceDnsManagedZones_basic(t *testing.T) { t.Parallel() - // TODO: https://github.com/hashicorp/terraform-provider-google/issues/14158 - acctest.SkipIfVcr(t) context := map[string]interface{}{ "name-1": fmt.Sprintf("tf-test-zone-%s", acctest.RandString(t, 10)), @@ -27,7 +25,7 @@ func TestAccDataSourceDnsManagedZones_basic(t *testing.T) { acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, - CheckDestroy: testAccCheckDNSManagedZoneDestroyProducerFramework(t), + CheckDestroy: testAccCheckDNSManagedZoneDestroyProducer(t), Steps: []resource.TestStep{ { ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), diff --git a/google-beta/services/dns/data_source_dns_record_set.go b/google-beta/services/dns/data_source_dns_record_set.go index 567f332e94..98c7b52677 100644 --- a/google-beta/services/dns/data_source_dns_record_set.go +++ b/google-beta/services/dns/data_source_dns_record_set.go @@ -3,155 +3,88 @@ package dns import ( - "context" "fmt" - "google.golang.org/api/dns/v1" - - "github.com/hashicorp/terraform-plugin-framework/datasource" - "github.com/hashicorp/terraform-plugin-framework/datasource/schema" - "github.com/hashicorp/terraform-plugin-framework/diag" - "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-plugin-log/tflog" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwmodels" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwresource" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwtransport" -) 
- -// Ensure the implementation satisfies the expected interfaces -var ( - _ datasource.DataSource = &GoogleDnsRecordSetDataSource{} - _ datasource.DataSourceWithConfigure = &GoogleDnsRecordSetDataSource{} + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" ) -func NewGoogleDnsRecordSetDataSource() datasource.DataSource { - return &GoogleDnsRecordSetDataSource{} -} - -// GoogleDnsRecordSetDataSource defines the data source implementation -type GoogleDnsRecordSetDataSource struct { - client *dns.Service - project types.String -} - -type GoogleDnsRecordSetModel struct { - Id types.String `tfsdk:"id"` - ManagedZone types.String `tfsdk:"managed_zone"` - Name types.String `tfsdk:"name"` - Rrdatas types.List `tfsdk:"rrdatas"` - Ttl types.Int64 `tfsdk:"ttl"` - Type types.String `tfsdk:"type"` - Project types.String `tfsdk:"project"` -} +func DataSourceDnsRecordSet() *schema.Resource { + return &schema.Resource{ + Read: dataSourceDnsRecordSetRead, -func (d *GoogleDnsRecordSetDataSource) Metadata(ctx context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) { - resp.TypeName = req.ProviderTypeName + "_dns_record_set" -} - -func (d *GoogleDnsRecordSetDataSource) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { - resp.Schema = schema.Schema{ - // This description is used by the documentation generator and the language server. 
- MarkdownDescription: "A DNS record set within Google Cloud DNS", - - Attributes: map[string]schema.Attribute{ - "managed_zone": schema.StringAttribute{ - MarkdownDescription: "The Name of the zone.", - Required: true, - }, - "name": schema.StringAttribute{ - MarkdownDescription: "The DNS name for the resource.", - Required: true, + Schema: map[string]*schema.Schema{ + "managed_zone": { + Type: schema.TypeString, + Required: true, }, - "type": schema.StringAttribute{ - MarkdownDescription: "The identifier of a supported record type. See the list of Supported DNS record types.", - Required: true, + + "name": { + Type: schema.TypeString, + Required: true, }, - "project": schema.StringAttribute{ - MarkdownDescription: "The ID of the project for the Google Cloud.", - Optional: true, + + "rrdatas": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, }, - "rrdatas": schema.ListAttribute{ - MarkdownDescription: "The string data for the records in this record set.", - Computed: true, - ElementType: types.StringType, + + "ttl": { + Type: schema.TypeInt, + Computed: true, }, - "ttl": schema.Int64Attribute{ - MarkdownDescription: "The time-to-live of this record set (seconds).", - Computed: true, + + "type": { + Type: schema.TypeString, + Required: true, }, - "id": schema.StringAttribute{ - MarkdownDescription: "DNS record set identifier", - Computed: true, + + "project": { + Type: schema.TypeString, + Optional: true, }, }, } } -func (d *GoogleDnsRecordSetDataSource) Configure(ctx context.Context, req datasource.ConfigureRequest, resp *datasource.ConfigureResponse) { - // Prevent panic if the provider has not been configured. - if req.ProviderData == nil { - return - } - - p, ok := req.ProviderData.(*fwtransport.FrameworkProviderConfig) - if !ok { - resp.Diagnostics.AddError( - "Unexpected Data Source Configure Type", - fmt.Sprintf("Expected *fwtransport.FrameworkProviderConfig, got: %T. 
Please report this issue to the provider developers.", req.ProviderData), - ) - return +func dataSourceDnsRecordSetRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err } - d.client = p.NewDnsClient(p.UserAgent, &resp.Diagnostics) - d.project = p.Project -} - -func (d *GoogleDnsRecordSetDataSource) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { - var data GoogleDnsRecordSetModel - var metaData *fwmodels.ProviderMetaModel - var diags diag.Diagnostics - - // Read Provider meta into the meta model - resp.Diagnostics.Append(req.ProviderMeta.Get(ctx, &metaData)...) - if resp.Diagnostics.HasError() { - return + project, err := tpgresource.GetProject(d, config) + if err != nil { + return err } - d.client.UserAgent = fwtransport.GenerateFrameworkUserAgentString(metaData, d.client.UserAgent) + zone := d.Get("managed_zone").(string) + name := d.Get("name").(string) + dnsType := d.Get("type").(string) + d.SetId(fmt.Sprintf("projects/%s/managedZones/%s/rrsets/%s/%s", project, zone, name, dnsType)) - // Read Terraform configuration data into the model - resp.Diagnostics.Append(req.Config.Get(ctx, &data)...) 
- if resp.Diagnostics.HasError() { - return + resp, err := config.NewDnsClient(userAgent).ResourceRecordSets.List(project, zone).Name(name).Type(dnsType).Do() + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("dataSourceDnsRecordSet %q", name)) } - - data.Project = fwresource.GetProjectFramework(data.Project, d.project, &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return + if len(resp.Rrsets) != 1 { + return fmt.Errorf("Only expected 1 record set, got %d", len(resp.Rrsets)) } - data.Id = types.StringValue(fmt.Sprintf("projects/%s/managedZones/%s/rrsets/%s/%s", data.Project.ValueString(), data.ManagedZone.ValueString(), data.Name.ValueString(), data.Type.ValueString())) - clientResp, err := d.client.ResourceRecordSets.List(data.Project.ValueString(), data.ManagedZone.ValueString()).Name(data.Name.ValueString()).Type(data.Type.ValueString()).Do() - if err != nil { - fwtransport.HandleDatasourceNotFoundError(ctx, err, &resp.State, fmt.Sprintf("dataSourceDnsRecordSet %q", data.Name.ValueString()), &resp.Diagnostics) - if resp.Diagnostics.HasError() { - return - } + if err := d.Set("rrdatas", resp.Rrsets[0].Rrdatas); err != nil { + return fmt.Errorf("Error setting rrdatas: %s", err) } - if len(clientResp.Rrsets) != 1 { - resp.Diagnostics.AddError("only expected 1 record set", fmt.Sprintf("%d record sets were returned", len(clientResp.Rrsets))) + if err := d.Set("ttl", resp.Rrsets[0].Ttl); err != nil { + return fmt.Errorf("Error setting ttl: %s", err) } - - tflog.Trace(ctx, "read dns record set data source") - - data.Type = types.StringValue(clientResp.Rrsets[0].Type) - data.Ttl = types.Int64Value(clientResp.Rrsets[0].Ttl) - data.Rrdatas, diags = types.ListValueFrom(ctx, types.StringType, clientResp.Rrsets[0].Rrdatas) - resp.Diagnostics.Append(diags...) 
- if resp.Diagnostics.HasError() { - return + if err := d.Set("project", project); err != nil { + return fmt.Errorf("Error setting project: %s", err) } - // Save data into Terraform state - resp.Diagnostics.Append(resp.State.Set(ctx, &data)...) + return nil } diff --git a/google-beta/services/dns/data_source_dns_record_set_test.go b/google-beta/services/dns/data_source_dns_record_set_test.go index 7c7dc3d790..285342e602 100644 --- a/google-beta/services/dns/data_source_dns_record_set_test.go +++ b/google-beta/services/dns/data_source_dns_record_set_test.go @@ -4,59 +4,36 @@ package dns_test import ( "fmt" - "strings" "testing" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" - "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwresource" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/fwtransport" ) -func TestAccDataSourceDnsRecordSet_basic(t *testing.T) { - // TODO: https://github.com/hashicorp/terraform-provider-google/issues/14158 - acctest.SkipIfVcr(t) +func TestAccDataSourceDnsRecordSet_basic(t *testing.T) { t.Parallel() - var ttl1, ttl2 string // ttl is a computed string-type attribute that is easy to compare in the test - - managedZoneName := fmt.Sprintf("tf-test-zone-%s", acctest.RandString(t, 10)) + name := fmt.Sprintf("tf-test-%s", acctest.RandString(t, 10)) acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - CheckDestroy: testAccCheckDnsRecordSetDestroyProducerFramework(t), + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDnsRecordSetDestroyProducer(t), Steps: []resource.TestStep{ { - ExternalProviders: map[string]resource.ExternalProvider{ - "google": { - VersionConstraint: "4.58.0", - Source: "hashicorp/google-beta", - }, - }, - Config:
testAccDataSourceDnsRecordSet_basic(managedZoneName, acctest.RandString(t, 10)), - Check: resource.ComposeTestCheckFunc( - acctest.CheckDataSourceStateMatchesResourceState("data.google_dns_record_set.rs", "google_dns_record_set.rs"), - acctest.TestExtractResourceAttr("data.google_dns_record_set.rs", "ttl", &ttl1), - ), - }, - { - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - Config: testAccDataSourceDnsRecordSet_basic(managedZoneName, acctest.RandString(t, 10)), + Config: testAccDataSourceDnsRecordSet_basic(name), Check: resource.ComposeTestCheckFunc( acctest.CheckDataSourceStateMatchesResourceState("data.google_dns_record_set.rs", "google_dns_record_set.rs"), - acctest.TestExtractResourceAttr("data.google_dns_record_set.rs", "ttl", &ttl2), - acctest.TestCheckAttributeValuesEqual(&ttl1, &ttl2), ), }, }, }) } -func testAccDataSourceDnsRecordSet_basic(managedZoneName, recordSetName string) string { +func testAccDataSourceDnsRecordSet_basic(randString string) string { return fmt.Sprintf(` resource "google_dns_managed_zone" "zone" { - name = "%s-hashicorptest-com" + name = "tf-test-zone-%s" dns_name = "%s.hashicorptest.com." 
} @@ -75,40 +52,5 @@ data "google_dns_record_set" "rs" { name = google_dns_record_set.rs.name type = google_dns_record_set.rs.type } -`, managedZoneName, managedZoneName, recordSetName) -} - -// testAccCheckDnsRecordSetDestroyProducerFramework is the framework version of the generated testAccCheckDnsRecordSetDestroyProducer -func testAccCheckDnsRecordSetDestroyProducerFramework(t *testing.T) func(s *terraform.State) error { - - return func(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_dns_record_set" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - p := acctest.GetFwTestProvider(t) - - url, err := fwresource.ReplaceVarsForFrameworkTest(&p.FrameworkProvider.FrameworkProviderConfig, rs, "{{DNSBasePath}}projects/{{project}}/managedZones/{{managed_zone}}/rrsets/{{name}}/{{type}}") - if err != nil { - return err - } - - billingProject := "" - - if !p.BillingProject.IsNull() && p.BillingProject.String() != "" { - billingProject = p.BillingProject.String() - } - - _, diags := fwtransport.SendFrameworkRequest(&p.FrameworkProvider.FrameworkProviderConfig, "GET", billingProject, url, p.UserAgent, nil) - if !diags.HasError() { - return fmt.Errorf("DNSResourceDnsRecordSet still exists at %s", url) - } - } - - return nil - } +`, randString, randString, randString) } diff --git a/google-beta/services/dns/resource_dns_managed_zone.go b/google-beta/services/dns/resource_dns_managed_zone.go index ba8bbf2b9c..0a74d9bd1d 100644 --- a/google-beta/services/dns/resource_dns_managed_zone.go +++ b/google-beta/services/dns/resource_dns_managed_zone.go @@ -21,6 +21,7 @@ import ( "bytes" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -520,6 +521,7 @@ func resourceDNSManagedZoneCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -528,6 +530,7 
@@ func resourceDNSManagedZoneCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ManagedZone: %s", err) @@ -570,12 +573,14 @@ func resourceDNSManagedZoneRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DNSManagedZone %q", d.Id())) @@ -746,6 +751,7 @@ func resourceDNSManagedZoneUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating ManagedZone %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -760,6 +766,7 @@ func resourceDNSManagedZoneUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -798,6 +805,7 @@ func resourceDNSManagedZoneDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) if d.Get("force_destroy").(bool) { zone := d.Get("name").(string) token := "" @@ -878,6 +886,7 @@ func resourceDNSManagedZoneDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ManagedZone") diff --git a/google-beta/services/dns/resource_dns_policy.go b/google-beta/services/dns/resource_dns_policy.go index 167ddb7d0d..04fd387050 100644 --- a/google-beta/services/dns/resource_dns_policy.go +++ 
b/google-beta/services/dns/resource_dns_policy.go @@ -21,6 +21,7 @@ import ( "bytes" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -235,6 +236,7 @@ func resourceDNSPolicyCreate(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -243,6 +245,7 @@ func resourceDNSPolicyCreate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Policy: %s", err) @@ -285,12 +288,14 @@ func resourceDNSPolicyRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DNSPolicy %q", d.Id())) @@ -378,6 +383,8 @@ func resourceDNSPolicyUpdate(d *schema.ResourceData, meta interface{}) error { return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -391,6 +398,7 @@ func resourceDNSPolicyUpdate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Policy %q: %s", d.Id(), err) @@ -432,6 +440,7 @@ func resourceDNSPolicyDelete(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) // if networks are attached, they need to be detached before the policy can be deleted if d.Get("networks.#").(int) > 0 { patched := make(map[string]interface{}) @@ -465,6 
+474,7 @@ func resourceDNSPolicyDelete(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Policy") diff --git a/google-beta/services/dns/resource_dns_record_set.go b/google-beta/services/dns/resource_dns_record_set.go index e59a0960d5..ec738d7482 100644 --- a/google-beta/services/dns/resource_dns_record_set.go +++ b/google-beta/services/dns/resource_dns_record_set.go @@ -169,7 +169,7 @@ func ResourceDnsRecordSet() *schema.Resource { "primary_backup": { Type: schema.TypeList, Optional: true, - Description: "The configuration for a primary-backup policy with global to regional failover. Queries are responded to with the global primary targets, but if none of the primary targets are healthy, then we fallback to a regional failover policy.", + Description: "The configuration for a failover policy with global to regional failover. Queries are responded to with the global primary targets, but if none of the primary targets are healthy, then we fallback to a regional failover policy.", MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ diff --git a/google-beta/services/dns/resource_dns_response_policy.go b/google-beta/services/dns/resource_dns_response_policy.go index 33190e4be0..5f11989a74 100644 --- a/google-beta/services/dns/resource_dns_response_policy.go +++ b/google-beta/services/dns/resource_dns_response_policy.go @@ -20,6 +20,7 @@ package dns import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -161,6 +162,7 @@ func resourceDNSResponsePolicyCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -169,6 +171,7 @@ func resourceDNSResponsePolicyCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: 
obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ResponsePolicy: %s", err) @@ -211,12 +214,14 @@ func resourceDNSResponsePolicyRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DNSResponsePolicy %q", d.Id())) @@ -283,6 +288,7 @@ func resourceDNSResponsePolicyUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating ResponsePolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -297,6 +303,7 @@ func resourceDNSResponsePolicyUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -335,6 +342,7 @@ func resourceDNSResponsePolicyDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) // if gke clusters are attached, they need to be detached before the response policy can be deleted if d.Get("gke_clusters.#").(int) > 0 { patched := make(map[string]interface{}) @@ -392,6 +400,7 @@ func resourceDNSResponsePolicyDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ResponsePolicy") diff --git a/google-beta/services/dns/resource_dns_response_policy_rule.go b/google-beta/services/dns/resource_dns_response_policy_rule.go index 5a23879070..44243d741f 100644 --- 
a/google-beta/services/dns/resource_dns_response_policy_rule.go +++ b/google-beta/services/dns/resource_dns_response_policy_rule.go @@ -20,6 +20,7 @@ package dns import ( "fmt" "log" + "net/http" "reflect" "time" @@ -185,6 +186,7 @@ func resourceDNSResponsePolicyRuleCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -193,6 +195,7 @@ func resourceDNSResponsePolicyRuleCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ResponsePolicyRule: %s", err) @@ -235,12 +238,14 @@ func resourceDNSResponsePolicyRuleRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DNSResponsePolicyRule %q", d.Id())) @@ -307,6 +312,7 @@ func resourceDNSResponsePolicyRuleUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating ResponsePolicyRule %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -321,6 +327,7 @@ func resourceDNSResponsePolicyRuleUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -359,6 +366,8 @@ func resourceDNSResponsePolicyRuleDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ResponsePolicyRule %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -368,6 +377,7 @@ func resourceDNSResponsePolicyRuleDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ResponsePolicyRule") diff --git a/google-beta/services/documentai/resource_document_ai_processor.go b/google-beta/services/documentai/resource_document_ai_processor.go index def0aefb06..9eec606ad3 100644 --- a/google-beta/services/documentai/resource_document_ai_processor.go +++ b/google-beta/services/documentai/resource_document_ai_processor.go @@ -20,6 +20,7 @@ package documentai import ( "fmt" "log" + "net/http" "reflect" "time" @@ -136,6 +137,7 @@ func resourceDocumentAIProcessorCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -144,6 +146,7 @@ func resourceDocumentAIProcessorCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Processor: %s", err) @@ -189,12 +192,14 @@ func resourceDocumentAIProcessorRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DocumentAIProcessor %q", d.Id())) @@ -247,6 +252,8 @@ func resourceDocumentAIProcessorDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Processor %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -256,6 +263,7 @@ func resourceDocumentAIProcessorDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Processor") diff --git a/google-beta/services/documentai/resource_document_ai_processor_default_version.go b/google-beta/services/documentai/resource_document_ai_processor_default_version.go index 32e77b2576..394d3bc71b 100644 --- a/google-beta/services/documentai/resource_document_ai_processor_default_version.go +++ b/google-beta/services/documentai/resource_document_ai_processor_default_version.go @@ -20,6 +20,7 @@ package documentai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -99,6 +100,7 @@ func resourceDocumentAIProcessorDefaultVersionCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) if strings.Contains(url, "https://-") { location := tpgresource.GetRegionFromRegionalSelfLink(url) url = strings.TrimPrefix(url, "https://") @@ -112,6 +114,7 @@ func resourceDocumentAIProcessorDefaultVersionCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ProcessorDefaultVersion: %s", err) @@ -148,6 +151,7 @@ func resourceDocumentAIProcessorDefaultVersionRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) if strings.Contains(url, "https://-") { location := tpgresource.GetRegionFromRegionalSelfLink(url) url = strings.TrimPrefix(url, "https://") @@ -159,6 +163,7 @@ func resourceDocumentAIProcessorDefaultVersionRead(d *schema.ResourceData, meta Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("DocumentAIProcessorDefaultVersion %q", d.Id())) diff --git a/google-beta/services/documentaiwarehouse/resource_document_ai_warehouse_document_schema.go b/google-beta/services/documentaiwarehouse/resource_document_ai_warehouse_document_schema.go index 6381445568..650e7f8d21 100644 --- a/google-beta/services/documentaiwarehouse/resource_document_ai_warehouse_document_schema.go +++ b/google-beta/services/documentaiwarehouse/resource_document_ai_warehouse_document_schema.go @@ -20,6 +20,7 @@ package documentaiwarehouse import ( "fmt" "log" + "net/http" "reflect" "time" @@ -465,6 +466,7 @@ func resourceDocumentAIWarehouseDocumentSchemaCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -473,6 +475,7 @@ func resourceDocumentAIWarehouseDocumentSchemaCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DocumentSchema: %s", err) @@ -512,12 +515,14 @@ func resourceDocumentAIWarehouseDocumentSchemaRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("DocumentAIWarehouseDocumentSchema %q", d.Id())) @@ -560,6 +565,8 @@ func resourceDocumentAIWarehouseDocumentSchemaDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting DocumentSchema %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -569,6 +576,7 @@ func resourceDocumentAIWarehouseDocumentSchemaDelete(d *schema.ResourceData, met UserAgent: 
userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "DocumentSchema") diff --git a/google-beta/services/documentaiwarehouse/resource_document_ai_warehouse_location.go b/google-beta/services/documentaiwarehouse/resource_document_ai_warehouse_location.go index 666682f589..157ada22e9 100644 --- a/google-beta/services/documentaiwarehouse/resource_document_ai_warehouse_location.go +++ b/google-beta/services/documentaiwarehouse/resource_document_ai_warehouse_location.go @@ -20,6 +20,7 @@ package documentaiwarehouse import ( "fmt" "log" + "net/http" "reflect" "time" @@ -136,6 +137,7 @@ func resourceDocumentAIWarehouseLocationCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -144,6 +146,7 @@ func resourceDocumentAIWarehouseLocationCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Location: %s", err) diff --git a/google-beta/services/edgecontainer/resource_edgecontainer_cluster.go b/google-beta/services/edgecontainer/resource_edgecontainer_cluster.go index e45ca3a23d..25fe71a37f 100644 --- a/google-beta/services/edgecontainer/resource_edgecontainer_cluster.go +++ b/google-beta/services/edgecontainer/resource_edgecontainer_cluster.go @@ -20,6 +20,7 @@ package edgecontainer import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -687,6 +688,7 @@ func resourceEdgecontainerClusterCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -695,6 +697,7 @@ func resourceEdgecontainerClusterCreate(d *schema.ResourceData, meta interface{} 
UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Cluster: %s", err) @@ -747,12 +750,14 @@ func resourceEdgecontainerClusterRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("EdgecontainerCluster %q", d.Id())) @@ -912,6 +917,7 @@ func resourceEdgecontainerClusterUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating Cluster %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("networking") { @@ -971,6 +977,7 @@ func resourceEdgecontainerClusterUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1004,6 +1011,8 @@ func resourceEdgecontainerClusterUpdate(d *schema.ResourceData, meta interface{} return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -1017,6 +1026,7 @@ func resourceEdgecontainerClusterUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Cluster %q: %s", d.Id(), err) @@ -1064,6 +1074,8 @@ func resourceEdgecontainerClusterDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Cluster %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1073,6 +1085,7 @@ func 
resourceEdgecontainerClusterDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Cluster") diff --git a/google-beta/services/edgecontainer/resource_edgecontainer_node_pool.go b/google-beta/services/edgecontainer/resource_edgecontainer_node_pool.go index 19aa6231d7..d02ba218e5 100644 --- a/google-beta/services/edgecontainer/resource_edgecontainer_node_pool.go +++ b/google-beta/services/edgecontainer/resource_edgecontainer_node_pool.go @@ -20,6 +20,7 @@ package edgecontainer import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -249,6 +250,7 @@ func resourceEdgecontainerNodePoolCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -257,6 +259,7 @@ func resourceEdgecontainerNodePoolCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NodePool: %s", err) @@ -309,12 +312,14 @@ func resourceEdgecontainerNodePoolRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("EdgecontainerNodePool %q", d.Id())) @@ -414,6 +419,7 @@ func resourceEdgecontainerNodePoolUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating NodePool %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("node_count") { @@ -457,6 +463,7 @@ func resourceEdgecontainerNodePoolUpdate(d 
*schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -504,6 +511,8 @@ func resourceEdgecontainerNodePoolDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NodePool %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -513,6 +522,7 @@ func resourceEdgecontainerNodePoolDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NodePool") diff --git a/google-beta/services/edgecontainer/resource_edgecontainer_vpn_connection.go b/google-beta/services/edgecontainer/resource_edgecontainer_vpn_connection.go index e5f54f9601..ee027493f4 100644 --- a/google-beta/services/edgecontainer/resource_edgecontainer_vpn_connection.go +++ b/google-beta/services/edgecontainer/resource_edgecontainer_vpn_connection.go @@ -20,6 +20,7 @@ package edgecontainer import ( "fmt" "log" + "net/http" "reflect" "time" @@ -273,6 +274,7 @@ func resourceEdgecontainerVpnConnectionCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -281,6 +283,7 @@ func resourceEdgecontainerVpnConnectionCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating VpnConnection: %s", err) @@ -333,12 +336,14 @@ func resourceEdgecontainerVpnConnectionRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, 
RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("EdgecontainerVpnConnection %q", d.Id())) @@ -453,6 +458,7 @@ func resourceEdgecontainerVpnConnectionUpdate(d *schema.ResourceData, meta inter } log.Printf("[DEBUG] Updating VpnConnection %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -467,6 +473,7 @@ func resourceEdgecontainerVpnConnectionUpdate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -513,6 +520,8 @@ func resourceEdgecontainerVpnConnectionDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting VpnConnection %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -522,6 +531,7 @@ func resourceEdgecontainerVpnConnectionDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "VpnConnection") diff --git a/google-beta/services/edgenetwork/resource_edgenetwork_network.go b/google-beta/services/edgenetwork/resource_edgenetwork_network.go index 3feef2fc4f..9d68dde89b 100644 --- a/google-beta/services/edgenetwork/resource_edgenetwork_network.go +++ b/google-beta/services/edgenetwork/resource_edgenetwork_network.go @@ -20,6 +20,7 @@ package edgenetwork import ( "fmt" "log" + "net/http" "reflect" "time" @@ -165,6 +166,7 @@ func resourceEdgenetworkNetworkCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -173,6 +175,7 @@ 
func resourceEdgenetworkNetworkCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Network: %s", err) @@ -225,12 +228,14 @@ func resourceEdgenetworkNetworkRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("EdgenetworkNetwork %q", d.Id())) @@ -289,6 +294,8 @@ func resourceEdgenetworkNetworkDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Network %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -298,6 +305,7 @@ func resourceEdgenetworkNetworkDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Network") diff --git a/google-beta/services/edgenetwork/resource_edgenetwork_subnet.go b/google-beta/services/edgenetwork/resource_edgenetwork_subnet.go index 0cd846e263..ae38712a15 100644 --- a/google-beta/services/edgenetwork/resource_edgenetwork_subnet.go +++ b/google-beta/services/edgenetwork/resource_edgenetwork_subnet.go @@ -20,6 +20,7 @@ package edgenetwork import ( "fmt" "log" + "net/http" "reflect" "time" @@ -214,6 +215,7 @@ func resourceEdgenetworkSubnetCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -222,6 +224,7 @@ func resourceEdgenetworkSubnetCreate(d 
*schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Subnet: %s", err) @@ -274,12 +277,14 @@ func resourceEdgenetworkSubnetRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("EdgenetworkSubnet %q", d.Id())) @@ -350,6 +355,8 @@ func resourceEdgenetworkSubnetDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Subnet %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -359,6 +366,7 @@ func resourceEdgenetworkSubnetDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Subnet") diff --git a/google-beta/services/essentialcontacts/resource_essential_contacts_contact.go b/google-beta/services/essentialcontacts/resource_essential_contacts_contact.go index deaf33778e..76c6c49733 100644 --- a/google-beta/services/essentialcontacts/resource_essential_contacts_contact.go +++ b/google-beta/services/essentialcontacts/resource_essential_contacts_contact.go @@ -20,6 +20,7 @@ package essentialcontacts import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -123,6 +124,7 @@ func resourceEssentialContactsContactCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -131,6 +133,7 @@ func 
resourceEssentialContactsContactCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Contact: %s", err) @@ -170,12 +173,14 @@ func resourceEssentialContactsContactRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("EssentialContactsContact %q", d.Id())) @@ -226,6 +231,7 @@ func resourceEssentialContactsContactUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating Contact %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("notification_category_subscriptions") { @@ -257,6 +263,7 @@ func resourceEssentialContactsContactUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -291,6 +298,8 @@ func resourceEssentialContactsContactDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Contact %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -300,6 +309,7 @@ func resourceEssentialContactsContactDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Contact") diff --git a/google-beta/services/filestore/resource_filestore_backup.go b/google-beta/services/filestore/resource_filestore_backup.go index b8a87eaf96..1e6844aeb3 100644 --- a/google-beta/services/filestore/resource_filestore_backup.go 
+++ b/google-beta/services/filestore/resource_filestore_backup.go @@ -20,6 +20,7 @@ package filestore import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -217,6 +218,7 @@ func resourceFilestoreBackupCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -225,6 +227,7 @@ func resourceFilestoreBackupCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -288,12 +291,14 @@ func resourceFilestoreBackupRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -395,6 +400,7 @@ func resourceFilestoreBackupUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Backup %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -430,6 +436,7 @@ func resourceFilestoreBackupUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -485,6 +492,8 @@ func resourceFilestoreBackupDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Backup %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -494,6 +503,7 @@ func resourceFilestoreBackupDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { diff --git a/google-beta/services/filestore/resource_filestore_instance.go b/google-beta/services/filestore/resource_filestore_instance.go index 8af59b7944..6b23fad0ec 100644 --- a/google-beta/services/filestore/resource_filestore_instance.go +++ b/google-beta/services/filestore/resource_filestore_instance.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -245,6 +246,17 @@ Please refer to the field 'effective_labels' for all of the labels present on th Description: `The name of the location of the instance. This can be a region for ENTERPRISE tier instances.`, ExactlyOneOf: []string{}, }, + "protocol": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"NFS_V3", "NFS_V4_1", ""}), + Description: `Either NFSv3, for using NFS version 3 as file sharing protocol, +or NFSv4.1, for using NFS version 4.1 as file sharing protocol. +NFSv4.1 can be used with HIGH_SCALE_SSD, ZONAL, REGIONAL and ENTERPRISE. +The default is NFSv3. 
Default value: "NFS_V3" Possible values: ["NFS_V3", "NFS_V4_1"]`, + Default: "NFS_V3", + }, "zone": { Type: schema.TypeString, Computed: true, @@ -309,6 +321,12 @@ func resourceFilestoreInstanceCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("tier"); !tpgresource.IsEmptyValue(reflect.ValueOf(tierProp)) && (ok || !reflect.DeepEqual(v, tierProp)) { obj["tier"] = tierProp } + protocolProp, err := expandFilestoreInstanceProtocol(d.Get("protocol"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("protocol"); !tpgresource.IsEmptyValue(reflect.ValueOf(protocolProp)) && (ok || !reflect.DeepEqual(v, protocolProp)) { + obj["protocol"] = protocolProp + } fileSharesProp, err := expandFilestoreInstanceFileShares(d.Get("file_shares"), d, config) if err != nil { return err @@ -353,6 +371,7 @@ func resourceFilestoreInstanceCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) if d.Get("location") == "" { zone, err := tpgresource.GetZone(d, config) if err != nil { @@ -378,6 +397,7 @@ func resourceFilestoreInstanceCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -441,12 +461,14 @@ func resourceFilestoreInstanceRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -466,6 +488,9 @@ func resourceFilestoreInstanceRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("tier", 
flattenFilestoreInstanceTier(res["tier"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } + if err := d.Set("protocol", flattenFilestoreInstanceProtocol(res["protocol"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } if err := d.Set("labels", flattenFilestoreInstanceLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } @@ -532,6 +557,7 @@ func resourceFilestoreInstanceUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -567,6 +593,7 @@ func resourceFilestoreInstanceUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -615,6 +642,8 @@ func resourceFilestoreInstanceDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -624,6 +653,7 @@ func resourceFilestoreInstanceDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -674,6 +704,14 @@ func flattenFilestoreInstanceTier(v interface{}, d *schema.ResourceData, config return v } +func flattenFilestoreInstanceProtocol(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil || tpgresource.IsEmptyValue(reflect.ValueOf(v)) { + return "NFS_V3" + } + + return v +} + func flattenFilestoreInstanceLabels(v interface{}, d 
*schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v @@ -884,6 +922,10 @@ func expandFilestoreInstanceTier(v interface{}, d tpgresource.TerraformResourceD return v, nil } +func expandFilestoreInstanceProtocol(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandFilestoreInstanceFileShares(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) req := make([]interface{}, 0, len(l)) diff --git a/google-beta/services/filestore/resource_filestore_instance_generated_test.go b/google-beta/services/filestore/resource_filestore_instance_generated_test.go index fcb4c5b2f9..8344be8806 100644 --- a/google-beta/services/filestore/resource_filestore_instance_generated_test.go +++ b/google-beta/services/filestore/resource_filestore_instance_generated_test.go @@ -135,6 +135,54 @@ resource "google_filestore_instance" "instance" { `, context) } +func TestAccFilestoreInstance_filestoreInstanceProtocolExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + CheckDestroy: testAccCheckFilestoreInstanceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccFilestoreInstance_filestoreInstanceProtocolExample(context), + }, + { + ResourceName: "google_filestore_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name", "zone", "location", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccFilestoreInstance_filestoreInstanceProtocolExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_filestore_instance" "instance" { + provider = google-beta + name = 
"tf-test-test-instance%{random_suffix}" + location = "us-central1" + tier = "ENTERPRISE" + protocol = "NFS_V4_1" + + file_shares { + capacity_gb = 1024 + name = "share1" + } + + networks { + network = "default" + modes = ["MODE_IPV4"] + } + +} +`, context) +} + func testAccCheckFilestoreInstanceDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s *terraform.State) error { for name, rs := range s.RootModule().Resources { diff --git a/google-beta/services/filestore/resource_filestore_snapshot.go b/google-beta/services/filestore/resource_filestore_snapshot.go index 29ae5191e2..e3996d49c2 100644 --- a/google-beta/services/filestore/resource_filestore_snapshot.go +++ b/google-beta/services/filestore/resource_filestore_snapshot.go @@ -20,6 +20,7 @@ package filestore import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -180,6 +181,7 @@ func resourceFilestoreSnapshotCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -188,6 +190,7 @@ func resourceFilestoreSnapshotCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -251,12 +254,14 @@ func resourceFilestoreSnapshotRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -334,6 +339,7 @@ func resourceFilestoreSnapshotUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] 
Updating Snapshot %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -365,6 +371,7 @@ func resourceFilestoreSnapshotUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -420,6 +427,8 @@ func resourceFilestoreSnapshotDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Snapshot %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -429,6 +438,7 @@ func resourceFilestoreSnapshotDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { diff --git a/google-beta/services/firebase/resource_firebase_android_app.go b/google-beta/services/firebase/resource_firebase_android_app.go index 029fe023f2..338bec3601 100644 --- a/google-beta/services/firebase/resource_firebase_android_app.go +++ b/google-beta/services/firebase/resource_firebase_android_app.go @@ -20,6 +20,7 @@ package firebase import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -190,6 +191,7 @@ func resourceFirebaseAndroidAppCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -198,6 +200,7 @@ func resourceFirebaseAndroidAppCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AndroidApp: %s", err) @@ -267,12 +270,14 @@ func 
resourceFirebaseAndroidAppRead(d *schema.ResourceData, meta interface{}) er
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseAndroidApp %q", d.Id()))
@@ -369,6 +374,7 @@ func resourceFirebaseAndroidAppUpdate(d *schema.ResourceData, meta interface{})
 	}
 
 	log.Printf("[DEBUG] Updating AndroidApp %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("display_name") {
@@ -412,6 +418,7 @@ func resourceFirebaseAndroidAppUpdate(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
diff --git a/google-beta/services/firebase/resource_firebase_apple_app.go b/google-beta/services/firebase/resource_firebase_apple_app.go
index 1d4827d058..775758edad 100644
--- a/google-beta/services/firebase/resource_firebase_apple_app.go
+++ b/google-beta/services/firebase/resource_firebase_apple_app.go
@@ -20,6 +20,7 @@ package firebase
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -171,6 +172,7 @@ func resourceFirebaseAppleAppCreate(d *schema.ResourceData, meta interface{}) er
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -179,6 +181,7 @@ func resourceFirebaseAppleAppCreate(d *schema.ResourceData, meta interface{}) er
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating AppleApp: %s", err)
@@ -248,12 +251,14 @@ func resourceFirebaseAppleAppRead(d *schema.ResourceData, meta interface{}) erro
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseAppleApp %q", d.Id()))
@@ -341,6 +346,7 @@ func resourceFirebaseAppleAppUpdate(d *schema.ResourceData, meta interface{}) er
 	}
 
 	log.Printf("[DEBUG] Updating AppleApp %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("display_name") {
@@ -380,6 +386,7 @@ func resourceFirebaseAppleAppUpdate(d *schema.ResourceData, meta interface{}) er
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
diff --git a/google-beta/services/firebase/resource_firebase_project.go b/google-beta/services/firebase/resource_firebase_project.go
index 6d2e216c33..17a039b020 100644
--- a/google-beta/services/firebase/resource_firebase_project.go
+++ b/google-beta/services/firebase/resource_firebase_project.go
@@ -20,6 +20,7 @@ package firebase
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"time"
 
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
@@ -126,6 +127,7 @@ func resourceFirebaseProjectCreate(d *schema.ResourceData, meta interface{}) err
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	// Check if Firebase has already been enabled
 	existingId, err := getExistingFirebaseProjectId(config, d, billingProject, userAgent)
 	if err != nil {
@@ -145,6 +147,7 @@ func resourceFirebaseProjectCreate(d *schema.ResourceData, meta interface{}) err
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Project: %s", err)
@@ -197,12 +200,14 @@ func resourceFirebaseProjectRead(d *schema.ResourceData, meta interface{}) error
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseProject %q", d.Id()))
diff --git a/google-beta/services/firebase/resource_firebase_web_app.go b/google-beta/services/firebase/resource_firebase_web_app.go
index b4ff4508f2..53575116db 100644
--- a/google-beta/services/firebase/resource_firebase_web_app.go
+++ b/google-beta/services/firebase/resource_firebase_web_app.go
@@ -20,6 +20,7 @@ package firebase
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -145,6 +146,7 @@ func resourceFirebaseWebAppCreate(d *schema.ResourceData, meta interface{}) erro
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -153,6 +155,7 @@ func resourceFirebaseWebAppCreate(d *schema.ResourceData, meta interface{}) erro
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating WebApp: %s", err)
@@ -222,12 +225,14 @@ func resourceFirebaseWebAppRead(d *schema.ResourceData, meta interface{}) error
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseWebApp %q", d.Id()))
@@ -297,6 +302,7 @@ func resourceFirebaseWebAppUpdate(d *schema.ResourceData, meta interface{}) erro
 	}
 
 	log.Printf("[DEBUG] Updating WebApp %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("display_name") {
@@ -328,6 +334,7 @@ func resourceFirebaseWebAppUpdate(d *schema.ResourceData, meta interface{}) erro
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
diff --git a/google-beta/services/firebaseappcheck/resource_firebase_app_check_app_attest_config.go b/google-beta/services/firebaseappcheck/resource_firebase_app_check_app_attest_config.go
index 02972d2f8d..e935da3e84 100644
--- a/google-beta/services/firebaseappcheck/resource_firebase_app_check_app_attest_config.go
+++ b/google-beta/services/firebaseappcheck/resource_firebase_app_check_app_attest_config.go
@@ -20,6 +20,7 @@ package firebaseappcheck
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -119,6 +120,7 @@ func resourceFirebaseAppCheckAppAttestConfigCreate(d *schema.ResourceData, meta
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "PATCH",
@@ -127,6 +129,7 @@ func resourceFirebaseAppCheckAppAttestConfigCreate(d *schema.ResourceData, meta
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating AppAttestConfig: %s", err)
@@ -172,12 +175,14 @@ func resourceFirebaseAppCheckAppAttestConfigRead(d *schema.ResourceData, meta in
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseAppCheckAppAttestConfig %q", d.Id()))
@@ -226,6 +231,7 @@ func resourceFirebaseAppCheckAppAttestConfigUpdate(d *schema.ResourceData, meta
 	}
 
 	log.Printf("[DEBUG] Updating AppAttestConfig %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("token_ttl") {
@@ -253,6 +259,7 @@ func resourceFirebaseAppCheckAppAttestConfigUpdate(d *schema.ResourceData, meta
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
diff --git a/google-beta/services/firebaseappcheck/resource_firebase_app_check_debug_token.go b/google-beta/services/firebaseappcheck/resource_firebase_app_check_debug_token.go
index 519045b223..6ae73b42b7 100644
--- a/google-beta/services/firebaseappcheck/resource_firebase_app_check_debug_token.go
+++ b/google-beta/services/firebaseappcheck/resource_firebase_app_check_debug_token.go
@@ -20,6 +20,7 @@ package firebaseappcheck
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -137,6 +138,7 @@ func resourceFirebaseAppCheckDebugTokenCreate(d *schema.ResourceData, meta inter
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -145,6 +147,7 @@ func resourceFirebaseAppCheckDebugTokenCreate(d *schema.ResourceData, meta inter
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating DebugToken: %s", err)
@@ -190,12 +193,14 @@ func resourceFirebaseAppCheckDebugTokenRead(d *schema.ResourceData, meta interfa
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseAppCheckDebugToken %q", d.Id()))
@@ -244,6 +249,7 @@ func resourceFirebaseAppCheckDebugTokenUpdate(d *schema.ResourceData, meta inter
 	}
 
 	log.Printf("[DEBUG] Updating DebugToken %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("display_name") {
@@ -271,6 +277,7 @@ func resourceFirebaseAppCheckDebugTokenUpdate(d *schema.ResourceData, meta inter
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -311,6 +318,8 @@ func resourceFirebaseAppCheckDebugTokenDelete(d *schema.ResourceData, meta inter
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting DebugToken %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -320,6 +329,7 @@ func resourceFirebaseAppCheckDebugTokenDelete(d *schema.ResourceData, meta inter
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "DebugToken")
diff --git a/google-beta/services/firebaseappcheck/resource_firebase_app_check_device_check_config.go b/google-beta/services/firebaseappcheck/resource_firebase_app_check_device_check_config.go
index 00f05f9156..0a7183b3b0 100644
--- a/google-beta/services/firebaseappcheck/resource_firebase_app_check_device_check_config.go
+++ b/google-beta/services/firebaseappcheck/resource_firebase_app_check_device_check_config.go
@@ -20,6 +20,7 @@ package firebaseappcheck
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -148,6 +149,7 @@ func resourceFirebaseAppCheckDeviceCheckConfigCreate(d *schema.ResourceData, met
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "PATCH",
@@ -156,6 +158,7 @@ func resourceFirebaseAppCheckDeviceCheckConfigCreate(d *schema.ResourceData, met
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating DeviceCheckConfig: %s", err)
@@ -201,12 +204,14 @@ func resourceFirebaseAppCheckDeviceCheckConfigRead(d *schema.ResourceData, meta
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseAppCheckDeviceCheckConfig %q", d.Id()))
@@ -273,6 +278,7 @@ func resourceFirebaseAppCheckDeviceCheckConfigUpdate(d *schema.ResourceData, met
 	}
 
 	log.Printf("[DEBUG] Updating DeviceCheckConfig %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("token_ttl") {
@@ -308,6 +314,7 @@ func resourceFirebaseAppCheckDeviceCheckConfigUpdate(d *schema.ResourceData, met
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
diff --git a/google-beta/services/firebaseappcheck/resource_firebase_app_check_play_integrity_config.go b/google-beta/services/firebaseappcheck/resource_firebase_app_check_play_integrity_config.go
index f4b368ef35..4df7da4062 100644
--- a/google-beta/services/firebaseappcheck/resource_firebase_app_check_play_integrity_config.go
+++ b/google-beta/services/firebaseappcheck/resource_firebase_app_check_play_integrity_config.go
@@ -20,6 +20,7 @@ package firebaseappcheck
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -119,6 +120,7 @@ func resourceFirebaseAppCheckPlayIntegrityConfigCreate(d *schema.ResourceData, m
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "PATCH",
@@ -127,6 +129,7 @@ func resourceFirebaseAppCheckPlayIntegrityConfigCreate(d *schema.ResourceData, m
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating PlayIntegrityConfig: %s", err)
@@ -172,12 +175,14 @@ func resourceFirebaseAppCheckPlayIntegrityConfigRead(d *schema.ResourceData, met
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseAppCheckPlayIntegrityConfig %q", d.Id()))
@@ -226,6 +231,7 @@ func resourceFirebaseAppCheckPlayIntegrityConfigUpdate(d *schema.ResourceData, m
 	}
 
 	log.Printf("[DEBUG] Updating PlayIntegrityConfig %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("token_ttl") {
@@ -253,6 +259,7 @@ func resourceFirebaseAppCheckPlayIntegrityConfigUpdate(d *schema.ResourceData, m
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
diff --git a/google-beta/services/firebaseappcheck/resource_firebase_app_check_play_integrity_config_generated_test.go b/google-beta/services/firebaseappcheck/resource_firebase_app_check_play_integrity_config_generated_test.go
index bbe936ff0b..8ae4de1b6e 100644
--- a/google-beta/services/firebaseappcheck/resource_firebase_app_check_play_integrity_config_generated_test.go
+++ b/google-beta/services/firebaseappcheck/resource_firebase_app_check_play_integrity_config_generated_test.go
@@ -57,6 +57,17 @@ func TestAccFirebaseAppCheckPlayIntegrityConfig_firebaseAppCheckPlayIntegrityCon
 func testAccFirebaseAppCheckPlayIntegrityConfig_firebaseAppCheckPlayIntegrityConfigMinimalExample(context map[string]interface{}) string {
 	return acctest.Nprintf(`
+# Enables the Play Integrity API
+resource "google_project_service" "play_integrity" {
+  provider = google-beta
+
+  project  = "%{project_id}"
+  service  = "playintegrity.googleapis.com"
+
+  # Don't disable the service if the resource block is removed by accident.
+  disable_on_destroy = false
+}
+
 resource "google_firebase_android_app" "default" {
   provider = google-beta
@@ -124,6 +135,17 @@ func TestAccFirebaseAppCheckPlayIntegrityConfig_firebaseAppCheckPlayIntegrityCon
 func testAccFirebaseAppCheckPlayIntegrityConfig_firebaseAppCheckPlayIntegrityConfigFullExample(context map[string]interface{}) string {
 	return acctest.Nprintf(`
+# Enables the Play Integrity API
+resource "google_project_service" "play_integrity" {
+  provider = google-beta
+
+  project  = "%{project_id}"
+  service  = "playintegrity.googleapis.com"
+
+  # Don't disable the service if the resource block is removed by accident.
+  disable_on_destroy = false
+}
+
 resource "google_firebase_android_app" "default" {
   provider = google-beta
diff --git a/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_enterprise_config.go b/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_enterprise_config.go
index 43e61ab7cf..0e6a598ebe 100644
--- a/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_enterprise_config.go
+++ b/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_enterprise_config.go
@@ -20,6 +20,7 @@ package firebaseappcheck
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -132,6 +133,7 @@ func resourceFirebaseAppCheckRecaptchaEnterpriseConfigCreate(d *schema.ResourceD
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "PATCH",
@@ -140,6 +142,7 @@ func resourceFirebaseAppCheckRecaptchaEnterpriseConfigCreate(d *schema.ResourceD
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating RecaptchaEnterpriseConfig: %s", err)
@@ -185,12 +188,14 @@ func resourceFirebaseAppCheckRecaptchaEnterpriseConfigRead(d *schema.ResourceDat
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseAppCheckRecaptchaEnterpriseConfig %q", d.Id()))
@@ -248,6 +253,7 @@ func resourceFirebaseAppCheckRecaptchaEnterpriseConfigUpdate(d *schema.ResourceD
 	}
 
 	log.Printf("[DEBUG] Updating RecaptchaEnterpriseConfig %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("token_ttl") {
@@ -279,6 +285,7 @@ func resourceFirebaseAppCheckRecaptchaEnterpriseConfigUpdate(d *schema.ResourceD
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
diff --git a/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_enterprise_config_generated_test.go b/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_enterprise_config_generated_test.go
index 099e00313b..02a4cbd1ce 100644
--- a/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_enterprise_config_generated_test.go
+++ b/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_enterprise_config_generated_test.go
@@ -59,6 +59,17 @@ func TestAccFirebaseAppCheckRecaptchaEnterpriseConfig_firebaseAppCheckRecaptchaE
 func testAccFirebaseAppCheckRecaptchaEnterpriseConfig_firebaseAppCheckRecaptchaEnterpriseConfigBasicExample(context map[string]interface{}) string {
 	return acctest.Nprintf(`
+# Enables the reCAPTCHA Enterprise API
+resource "google_project_service" "recaptcha_enterprise" {
+  provider = google-beta
+
+  project  = "%{project_id}"
+  service  = "recaptchaenterprise.googleapis.com"
+
+  # Don't disable the service if the resource block is removed by accident.
+  disable_on_destroy = false
+}
+
 resource "google_firebase_web_app" "default" {
   provider = google-beta
diff --git a/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_v3_config.go b/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_v3_config.go
index e1b575bb31..c336f8d10a 100644
--- a/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_v3_config.go
+++ b/google-beta/services/firebaseappcheck/resource_firebase_app_check_recaptcha_v3_config.go
@@ -20,6 +20,7 @@ package firebaseappcheck
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -137,6 +138,7 @@ func resourceFirebaseAppCheckRecaptchaV3ConfigCreate(d *schema.ResourceData, met
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "PATCH",
@@ -145,6 +147,7 @@ func resourceFirebaseAppCheckRecaptchaV3ConfigCreate(d *schema.ResourceData, met
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating RecaptchaV3Config: %s", err)
@@ -190,12 +193,14 @@ func resourceFirebaseAppCheckRecaptchaV3ConfigRead(d *schema.ResourceData, meta
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseAppCheckRecaptchaV3Config %q", d.Id()))
@@ -253,6 +258,7 @@ func resourceFirebaseAppCheckRecaptchaV3ConfigUpdate(d *schema.ResourceData, met
 	}
 
 	log.Printf("[DEBUG] Updating RecaptchaV3Config %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("token_ttl") {
@@ -284,6 +290,7 @@ func resourceFirebaseAppCheckRecaptchaV3ConfigUpdate(d *schema.ResourceData, met
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
diff --git a/google-beta/services/firebaseappcheck/resource_firebase_app_check_service_config.go b/google-beta/services/firebaseappcheck/resource_firebase_app_check_service_config.go
index 854f639004..8c716b9551 100644
--- a/google-beta/services/firebaseappcheck/resource_firebase_app_check_service_config.go
+++ b/google-beta/services/firebaseappcheck/resource_firebase_app_check_service_config.go
@@ -20,6 +20,7 @@ package firebaseappcheck
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -149,6 +150,7 @@ func resourceFirebaseAppCheckServiceConfigCreate(d *schema.ResourceData, meta in
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "PATCH",
@@ -157,6 +159,7 @@ func resourceFirebaseAppCheckServiceConfigCreate(d *schema.ResourceData, meta in
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating ServiceConfig: %s", err)
@@ -202,12 +205,14 @@ func resourceFirebaseAppCheckServiceConfigRead(d *schema.ResourceData, meta inte
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseAppCheckServiceConfig %q", d.Id()))
@@ -256,6 +261,7 @@ func resourceFirebaseAppCheckServiceConfigUpdate(d *schema.ResourceData, meta in
 	}
 
 	log.Printf("[DEBUG] Updating ServiceConfig %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("enforcement_mode") {
@@ -283,6 +289,7 @@ func resourceFirebaseAppCheckServiceConfigUpdate(d *schema.ResourceData, meta in
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -323,6 +330,8 @@ func resourceFirebaseAppCheckServiceConfigDelete(d *schema.ResourceData, meta in
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting ServiceConfig %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -332,6 +341,7 @@ func resourceFirebaseAppCheckServiceConfigDelete(d *schema.ResourceData, meta in
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "ServiceConfig")
diff --git a/google-beta/services/firebasedatabase/resource_firebase_database_instance.go b/google-beta/services/firebasedatabase/resource_firebase_database_instance.go
index 21a1ec2c70..1340154d7a 100644
--- a/google-beta/services/firebasedatabase/resource_firebase_database_instance.go
+++ b/google-beta/services/firebasedatabase/resource_firebase_database_instance.go
@@ -20,6 +20,7 @@ package firebasedatabase
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
@@ -190,6 +191,7 @@ func resourceFirebaseDatabaseInstanceCreate(d *schema.ResourceData, meta interfa
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -198,6 +200,7 @@ func resourceFirebaseDatabaseInstanceCreate(d *schema.ResourceData, meta interfa
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Instance: %s", err)
@@ -251,12 +254,14 @@ func resourceFirebaseDatabaseInstanceRead(d *schema.ResourceData, meta interface
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseDatabaseInstance %q", d.Id()))
@@ -329,6 +334,7 @@ func resourceFirebaseDatabaseInstanceUpdate(d *schema.ResourceData, meta interfa
 	}
 
 	log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	// start of customized code
 	if d.HasChange("desired_state") {
@@ -364,6 +370,7 @@ func resourceFirebaseDatabaseInstanceUpdate(d *schema.ResourceData, meta interfa
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -402,6 +409,8 @@ func resourceFirebaseDatabaseInstanceDelete(d *schema.ResourceData, meta interfa
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	// start of customized code
 	if d.Get("state").(string) == "ACTIVE" {
 		if err := disableRTDB(config, d, project, billingProject, userAgent); err != nil {
@@ -423,6 +432,7 @@ func resourceFirebaseDatabaseInstanceDelete(d *schema.ResourceData, meta interfa
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Instance")
diff --git a/google-beta/services/firebaseextensions/resource_firebase_extensions_instance.go b/google-beta/services/firebaseextensions/resource_firebase_extensions_instance.go
index c26dc05b78..3e16635f48 100644
--- a/google-beta/services/firebaseextensions/resource_firebase_extensions_instance.go
+++ b/google-beta/services/firebaseextensions/resource_firebase_extensions_instance.go
@@ -20,6 +20,7 @@ package firebaseextensions
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -323,6 +324,7 @@ func resourceFirebaseExtensionsInstanceCreate(d *schema.ResourceData, meta inter
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -331,6 +333,7 @@ func resourceFirebaseExtensionsInstanceCreate(d *schema.ResourceData, meta inter
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Instance: %s", err)
@@ -397,12 +400,14 @@ func resourceFirebaseExtensionsInstanceRead(d *schema.ResourceData, meta interfa
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseExtensionsInstance %q", d.Id()))
@@ -484,6 +489,7 @@ func resourceFirebaseExtensionsInstanceUpdate(d *schema.ResourceData, meta inter
 	}
 
 	log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("config") {
@@ -520,6 +526,7 @@ func resourceFirebaseExtensionsInstanceUpdate(d *schema.ResourceData, meta inter
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -567,6 +574,8 @@ func resourceFirebaseExtensionsInstanceDelete(d *schema.ResourceData, meta inter
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Instance %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -576,6 +585,7 @@ func resourceFirebaseExtensionsInstanceDelete(d *schema.ResourceData, meta inter
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Instance")
diff --git a/google-beta/services/firebasehosting/resource_firebase_hosting_channel.go b/google-beta/services/firebasehosting/resource_firebase_hosting_channel.go
index 3bcd5b1f3e..bcf67e391b 100644
--- a/google-beta/services/firebasehosting/resource_firebase_hosting_channel.go
+++ b/google-beta/services/firebasehosting/resource_firebase_hosting_channel.go
@@ -20,6 +20,7 @@ package firebasehosting
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -169,6 +170,7 @@ func resourceFirebaseHostingChannelCreate(d *schema.ResourceData, meta interface
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -177,6 +179,7 @@ func resourceFirebaseHostingChannelCreate(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Channel: %s", err)
@@ -216,12 +219,14 @@ func resourceFirebaseHostingChannelRead(d *schema.ResourceData, meta interface{}
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseHostingChannel %q", d.Id()))
@@ -284,6 +289,7 @@ func resourceFirebaseHostingChannelUpdate(d *schema.ResourceData, meta interface
 	}
 
 	log.Printf("[DEBUG] Updating Channel %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("retained_release_count") {
@@ -319,6 +325,7 @@ func resourceFirebaseHostingChannelUpdate(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -353,6 +360,8 @@ func resourceFirebaseHostingChannelDelete(d *schema.ResourceData, meta interface
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Channel %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -362,6 +371,7 @@ func resourceFirebaseHostingChannelDelete(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Channel")
diff --git a/google-beta/services/firebasehosting/resource_firebase_hosting_custom_domain.go b/google-beta/services/firebasehosting/resource_firebase_hosting_custom_domain.go
index 9353a18e68..5d2ca1c60e 100644
--- a/google-beta/services/firebasehosting/resource_firebase_hosting_custom_domain.go
+++ b/google-beta/services/firebasehosting/resource_firebase_hosting_custom_domain.go
@@ -21,6 +21,7 @@ import (
 	"encoding/json"
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -593,6 +594,7 @@ func resourceFirebaseHostingCustomDomainCreate(d *schema.ResourceData, meta inte
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -601,6 +603,7 @@ func resourceFirebaseHostingCustomDomainCreate(d *schema.ResourceData, meta inte
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating CustomDomain: %s", err)
@@ -658,12 +661,14 @@ func resourceFirebaseHostingCustomDomainRead(d *schema.ResourceData, meta interf
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseHostingCustomDomain %q", d.Id()))
@@ -766,6 +771,7 @@ func resourceFirebaseHostingCustomDomainUpdate(d *schema.ResourceData, meta inte
 	}
 
 	log.Printf("[DEBUG] Updating CustomDomain %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("etag") {
@@ -801,6 +807,7 @@ func resourceFirebaseHostingCustomDomainUpdate(d *schema.ResourceData, meta inte
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -848,6 +855,8 @@ func resourceFirebaseHostingCustomDomainDelete(d *schema.ResourceData, meta inte
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting CustomDomain %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -857,6 +866,7 @@ func resourceFirebaseHostingCustomDomainDelete(d *schema.ResourceData, meta inte
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "CustomDomain")
diff --git a/google-beta/services/firebasehosting/resource_firebase_hosting_release.go b/google-beta/services/firebasehosting/resource_firebase_hosting_release.go
index e94f8a4077..08447204a0 100644
--- a/google-beta/services/firebasehosting/resource_firebase_hosting_release.go
+++ b/google-beta/services/firebasehosting/resource_firebase_hosting_release.go
@@ -20,6 +20,7 @@ package firebasehosting
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
@@ -139,6 +140,7 @@ func resourceFirebaseHostingReleaseCreate(d *schema.ResourceData, meta interface
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -147,6 +149,7 @@ func resourceFirebaseHostingReleaseCreate(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Release: %s", err)
@@ -193,12 +196,14 @@ func resourceFirebaseHostingReleaseRead(d *schema.ResourceData, meta interface{}
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseHostingRelease %q", d.Id()))
diff --git a/google-beta/services/firebasehosting/resource_firebase_hosting_site.go b/google-beta/services/firebasehosting/resource_firebase_hosting_site.go
index 2541d9e88e..74af6eeaee 100644
--- a/google-beta/services/firebasehosting/resource_firebase_hosting_site.go
+++ b/google-beta/services/firebasehosting/resource_firebase_hosting_site.go
@@ -20,6 +20,7 @@ package firebasehosting
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -128,6 +129,7 @@ func resourceFirebaseHostingSiteCreate(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -136,6 +138,7 @@ func resourceFirebaseHostingSiteCreate(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Site: %s", err)
@@ -181,12 +184,14 @@ func resourceFirebaseHostingSiteRead(d *schema.ResourceData, meta interface{}) e
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseHostingSite %q", d.Id()))
@@ -238,6 +243,7 @@ func resourceFirebaseHostingSiteUpdate(d *schema.ResourceData, meta interface{})
 	}
 
 	log.Printf("[DEBUG] Updating Site %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("app_id") {
@@ -265,6 +271,7 @@ func resourceFirebaseHostingSiteUpdate(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -305,6 +312,8 @@ func resourceFirebaseHostingSiteDelete(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Site %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -314,6 +323,7 @@ func resourceFirebaseHostingSiteDelete(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Site")
diff --git a/google-beta/services/firebasehosting/resource_firebase_hosting_version.go b/google-beta/services/firebasehosting/resource_firebase_hosting_version.go
index 6e9b618d81..660457ac27 100644
--- a/google-beta/services/firebasehosting/resource_firebase_hosting_version.go
+++ b/google-beta/services/firebasehosting/resource_firebase_hosting_version.go
@@ -20,6 +20,7 @@ package firebasehosting
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -129,6 +130,13 @@ request URL path, triggers Hosting to respond as if the service were given the s
 							Description:  `The user-supplied glob to match against the request URL path.`,
 							ExactlyOneOf: []string{},
 						},
+						"path": {
+							Type:         schema.TypeString,
+							Optional:     true,
+							ForceNew:     true,
+							Description:  `The URL path to rewrite the request to.`,
+							ExactlyOneOf: []string{},
+						},
 						"regex": {
 							Type:         schema.TypeString,
 							Optional:     true,
@@ -210,6 +218,7 @@ func resourceFirebaseHostingVersionCreate(d *schema.ResourceData, meta interface
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -218,6 +227,7 @@ func resourceFirebaseHostingVersionCreate(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Version: %s", err)
@@ -297,12 +307,14 @@ func resourceFirebaseHostingVersionRead(d *schema.ResourceData, meta interface{}
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseHostingVersion %q", d.Id()))
@@ -392,6 +404,7 @@ func flattenFirebaseHostingVersionConfigRewrites(v interface{}, d *schema.Resour
 		transformed = append(transformed, map[string]interface{}{
 			"glob":     flattenFirebaseHostingVersionConfigRewritesGlob(original["glob"], d, config),
 			"regex":    flattenFirebaseHostingVersionConfigRewritesRegex(original["regex"], d, config),
+			"path":     flattenFirebaseHostingVersionConfigRewritesPath(original["path"], d, config),
 			"function": flattenFirebaseHostingVersionConfigRewritesFunction(original["function"], d, config),
 			"run":      flattenFirebaseHostingVersionConfigRewritesRun(original["run"], d, config),
 		})
@@ -406,6 +419,10 @@ func flattenFirebaseHostingVersionConfigRewritesRegex(v interface{}, d *schema.R
 	return v
 }
 
+func flattenFirebaseHostingVersionConfigRewritesPath(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
+	return v
+}
+
 func flattenFirebaseHostingVersionConfigRewritesFunction(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
 	return v
 }
@@ -533,6 +550,13 @@ func expandFirebaseHostingVersionConfigRewrites(v interface{}, d tpgresource.Ter
 		transformed["regex"] = transformedRegex
 	}
 
+	transformedPath, err := expandFirebaseHostingVersionConfigRewritesPath(original["path"], d, config)
+	if err != nil {
+		return nil, err
+	} else if val := reflect.ValueOf(transformedPath);
val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["path"] = transformedPath + } + transformedFunction, err := expandFirebaseHostingVersionConfigRewritesFunction(original["function"], d, config) if err != nil { return nil, err @@ -560,6 +584,10 @@ func expandFirebaseHostingVersionConfigRewritesRegex(v interface{}, d tpgresourc return v, nil } +func expandFirebaseHostingVersionConfigRewritesPath(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandFirebaseHostingVersionConfigRewritesFunction(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google-beta/services/firebasehosting/resource_firebase_hosting_version_generated_test.go b/google-beta/services/firebasehosting/resource_firebase_hosting_version_generated_test.go index 71f090aae9..3ad9ecbc87 100644 --- a/google-beta/services/firebasehosting/resource_firebase_hosting_version_generated_test.go +++ b/google-beta/services/firebasehosting/resource_firebase_hosting_version_generated_test.go @@ -80,6 +80,59 @@ resource "google_firebase_hosting_release" "default" { `, context) } +func TestAccFirebaseHostingVersion_firebasehostingVersionPathExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project_id": envvar.GetTestProjectFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + Steps: []resource.TestStep{ + { + Config: testAccFirebaseHostingVersion_firebasehostingVersionPathExample(context), + }, + { + ResourceName: "google_firebase_hosting_version.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"version_id", "site_id"}, + }, + }, + }) +} + +func 
testAccFirebaseHostingVersion_firebasehostingVersionPathExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_firebase_hosting_site" "default" { + provider = google-beta + project = "%{project_id}" + site_id = "tf-test-site-id%{random_suffix}" +} + +resource "google_firebase_hosting_version" "default" { + provider = google-beta + site_id = google_firebase_hosting_site.default.site_id + config { + rewrites { + glob = "**" + path = "/index.html" + } + } +} + +resource "google_firebase_hosting_release" "default" { + provider = google-beta + site_id = google_firebase_hosting_site.default.site_id + version_name = google_firebase_hosting_version.default.name + message = "Path Rewrite" +} +`, context) +} + func TestAccFirebaseHostingVersion_firebasehostingVersionCloudRunExample(t *testing.T) { t.Parallel() diff --git a/google-beta/services/firebasestorage/resource_firebase_storage_bucket.go b/google-beta/services/firebasestorage/resource_firebase_storage_bucket.go index 067bb20103..7083fbc69b 100644 --- a/google-beta/services/firebasestorage/resource_firebase_storage_bucket.go +++ b/google-beta/services/firebasestorage/resource_firebase_storage_bucket.go @@ -20,6 +20,7 @@ package firebasestorage import ( "fmt" "log" + "net/http" "time" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" @@ -99,6 +100,7 @@ func resourceFirebaseStorageBucketCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -107,6 +109,7 @@ func resourceFirebaseStorageBucketCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Bucket: %s", err) @@ -152,12 +155,14 @@ func resourceFirebaseStorageBucketRead(d *schema.ResourceData, meta interface{}) billingProject = 
bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirebaseStorageBucket %q", d.Id())) @@ -201,6 +206,8 @@ func resourceFirebaseStorageBucketDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Bucket %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -210,6 +217,7 @@ func resourceFirebaseStorageBucketDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Bucket") diff --git a/google-beta/services/firestore/resource_firestore_backup_schedule.go b/google-beta/services/firestore/resource_firestore_backup_schedule.go index 637c502b0d..1922032fc2 100644 --- a/google-beta/services/firestore/resource_firestore_backup_schedule.go +++ b/google-beta/services/firestore/resource_firestore_backup_schedule.go @@ -20,6 +20,7 @@ package firestore import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -60,7 +61,7 @@ func ResourceFirestoreBackupSchedule() *schema.Resource { Description: `At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". -For a daily backup recurrence, set this to a value up to 7 days. 
If you set a weekly backup recurrence, set this to a value up to 14 weeks.`, +You can set this to a value up to 14 weeks.`, }, "daily_recurrence": { Type: schema.TypeList, @@ -161,6 +162,7 @@ func resourceFirestoreBackupScheduleCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -169,6 +171,7 @@ func resourceFirestoreBackupScheduleCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating BackupSchedule: %s", err) @@ -214,12 +217,14 @@ func resourceFirestoreBackupScheduleRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirestoreBackupSchedule %q", d.Id())) @@ -274,6 +279,7 @@ func resourceFirestoreBackupScheduleUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating BackupSchedule %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("retention") { @@ -301,6 +307,7 @@ func resourceFirestoreBackupScheduleUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -341,6 +348,8 @@ func resourceFirestoreBackupScheduleDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting BackupSchedule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -350,6 +359,7 @@ func resourceFirestoreBackupScheduleDelete(d 
*schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "BackupSchedule") diff --git a/google-beta/services/firestore/resource_firestore_backup_schedule_generated_test.go b/google-beta/services/firestore/resource_firestore_backup_schedule_generated_test.go index daeda7757f..43cad9828d 100644 --- a/google-beta/services/firestore/resource_firestore_backup_schedule_generated_test.go +++ b/google-beta/services/firestore/resource_firestore_backup_schedule_generated_test.go @@ -74,7 +74,7 @@ resource "google_firestore_backup_schedule" "daily-backup" { project = "%{project_id}" database = google_firestore_database.database.name - retention = "604800s" // 7 days (maximum possible value for daily backups) + retention = "8467200s" // 14 weeks (maximum possible retention) daily_recurrence {} } @@ -124,7 +124,7 @@ resource "google_firestore_backup_schedule" "weekly-backup" { project = "%{project_id}" database = google_firestore_database.database.name - retention = "8467200s" // 14 weeks (maximum possible value for weekly backups) + retention = "8467200s" // 14 weeks (maximum possible retention) weekly_recurrence { day = "SUNDAY" diff --git a/google-beta/services/firestore/resource_firestore_database.go b/google-beta/services/firestore/resource_firestore_database.go index 507445c74d..84e44bbd2e 100644 --- a/google-beta/services/firestore/resource_firestore_database.go +++ b/google-beta/services/firestore/resource_firestore_database.go @@ -20,6 +20,7 @@ package firestore import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -307,6 +308,7 @@ func resourceFirestoreDatabaseCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -315,6 +317,7 @@ func 
resourceFirestoreDatabaseCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Database: %s", err) @@ -381,12 +384,14 @@ func resourceFirestoreDatabaseRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirestoreDatabase %q", d.Id())) @@ -510,6 +515,7 @@ func resourceFirestoreDatabaseUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Database %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("type") { @@ -557,6 +563,7 @@ func resourceFirestoreDatabaseUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -604,6 +611,7 @@ func resourceFirestoreDatabaseDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) if deletionPolicy := d.Get("deletion_policy"); deletionPolicy != "DELETE" { log.Printf("[WARN] Firestore database %q deletion_policy is not set to 'DELETE', skipping deletion", d.Get("name").(string)) return nil @@ -621,6 +629,7 @@ func resourceFirestoreDatabaseDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Database") diff --git a/google-beta/services/firestore/resource_firestore_database_sweeper.go b/google-beta/services/firestore/resource_firestore_database_sweeper.go new file mode 100644 index 
0000000000..fffa00b40d --- /dev/null +++ b/google-beta/services/firestore/resource_firestore_database_sweeper.go @@ -0,0 +1,124 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package firestore + +import ( + "context" + "log" + "strings" + "testing" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/sweeper" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +func init() { + sweeper.AddTestSweepers("FirestoreDatabase", testSweepFirestoreDatabase) +} + +// At the time of writing, the CI only passes us-central1 as the region +func testSweepFirestoreDatabase(region string) error { + resourceName := "FirestoreDatabase" + log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) + + config, err := sweeper.SharedConfigForRegion(region) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) + return err + } + + err = config.LoadAndValidate(context.Background()) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) + return err + } + + t := &testing.T{} + billingId := envvar.GetTestBillingAccountFromEnv(t) + + // Setup variables to replace in list template + d := &tpgresource.ResourceDataMock{ + FieldsInSchema: map[string]interface{}{ + "project": config.Project, + "region": region, + "location": region, + "zone": "-", + "billing_account": billingId, + }, + } + + listTemplate := strings.Split("https://firestore.googleapis.com/v1/projects/{{project}}/databases", "?")[0] + listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) + return nil + } + + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: 
"GET", + Project: config.Project, + RawURL: listUrl, + UserAgent: config.UserAgent, + }) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) + return nil + } + + resourceList, ok := res["databases"] + if !ok { + log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") + return nil + } + + rl := resourceList.([]interface{}) + + log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) + // Keep count of items that aren't sweepable for logging. + nonPrefixCount := 0 + for _, ri := range rl { + obj := ri.(map[string]interface{}) + if obj["name"] == nil { + log.Printf("[INFO][SWEEPER_LOG] %s resource name was nil", resourceName) + return nil + } + + name := tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) + // Skip resources that shouldn't be swept + if !sweeper.IsSweepableTestResource(name) { + nonPrefixCount++ + continue + } + + deleteTemplate := "https://firestore.googleapis.com/v1/projects/{{project}}/databases/{{name}}" + deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) + return nil + } + deleteUrl = deleteUrl + name + + // Don't wait on operations as we may have a lot to delete + _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "DELETE", + Project: config.Project, + RawURL: deleteUrl, + UserAgent: config.UserAgent, + }) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) + } else { + log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) + } + } + + if nonPrefixCount > 0 { + log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) + } + + return nil +} diff --git a/google-beta/services/firestore/resource_firestore_document.go 
b/google-beta/services/firestore/resource_firestore_document.go index 9462cbe1c5..5cf4e5d443 100644 --- a/google-beta/services/firestore/resource_firestore_document.go +++ b/google-beta/services/firestore/resource_firestore_document.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" "reflect" "regexp" "time" @@ -87,7 +88,7 @@ func ResourceFirestoreDocument() *schema.Resource { "name": { Type: schema.TypeString, Computed: true, - Description: `A server defined name for this index. Format: + Description: `A server defined name for this document. Format: 'projects/{{project_id}}/databases/{{database_id}}/documents/{{path}}/{{document_id}}'`, }, "path": { @@ -145,6 +146,7 @@ func resourceFirestoreDocumentCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -153,6 +155,7 @@ func resourceFirestoreDocumentCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Document: %s", err) @@ -198,12 +201,14 @@ func resourceFirestoreDocumentRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("FirestoreDocument %q", d.Id())) @@ -273,6 +278,7 @@ func resourceFirestoreDocumentUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Document %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -287,6 
+293,7 @@ func resourceFirestoreDocumentUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -325,6 +332,8 @@ func resourceFirestoreDocumentDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Document %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -334,6 +343,7 @@ func resourceFirestoreDocumentDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Document") diff --git a/google-beta/services/firestore/resource_firestore_field.go b/google-beta/services/firestore/resource_firestore_field.go index a83511b59f..e8d4039350 100644 --- a/google-beta/services/firestore/resource_firestore_field.go +++ b/google-beta/services/firestore/resource_firestore_field.go @@ -20,6 +20,7 @@ package firestore import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -199,6 +200,7 @@ func resourceFirestoreFieldCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -207,6 +209,7 @@ func resourceFirestoreFieldCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.FirestoreField409RetryUnderlyingDataChanged}, }) if err != nil { @@ -274,12 +277,14 @@ func resourceFirestoreFieldRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ 
Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.FirestoreField409RetryUnderlyingDataChanged}, }) if err != nil { @@ -343,6 +348,7 @@ func resourceFirestoreFieldUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating Field %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("index_config") { @@ -374,6 +380,7 @@ func resourceFirestoreFieldUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.FirestoreField409RetryUnderlyingDataChanged}, }) diff --git a/google-beta/services/firestore/resource_firestore_index.go b/google-beta/services/firestore/resource_firestore_index.go index c49cf8299a..93d330a6c8 100644 --- a/google-beta/services/firestore/resource_firestore_index.go +++ b/google-beta/services/firestore/resource_firestore_index.go @@ -20,6 +20,7 @@ package firestore import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -100,12 +101,12 @@ func ResourceFirestoreIndex() *schema.Resource { Required: true, ForceNew: true, DiffSuppressFunc: firestoreIFieldsDiffSuppress, - Description: `The fields supported by this index. The last field entry is always for -the field path '__name__'. If, on creation, '__name__' was not -specified as the last field, it will be added automatically with the -same direction as that of the last field defined. If the final field -in a composite index is not directional, the '__name__' will be -ordered '"ASCENDING"' (unless explicitly specified otherwise).`, + Description: `The fields supported by this index. The last non-stored field entry is +always for the field path '__name__'. 
If, on creation, '__name__' was not +specified as the last field, it will be added automatically with the same +direction as that of the last field defined. If the final field in a +composite index is not directional, the '__name__' will be ordered +'"ASCENDING"' (unless explicitly specified otherwise).`, MinItems: 2, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -114,8 +115,8 @@ ordered '"ASCENDING"' (unless explicitly specified otherwise).`, Optional: true, ForceNew: true, ValidateFunc: verify.ValidateEnum([]string{"CONTAINS", ""}), - Description: `Indicates that this field supports operations on arrayValues. Only one of 'order' and 'arrayConfig' can -be specified. Possible values: ["CONTAINS"]`, + Description: `Indicates that this field supports operations on arrayValues. Only one of 'order', 'arrayConfig', and +'vectorConfig' can be specified. Possible values: ["CONTAINS"]`, }, "field_path": { Type: schema.TypeString, @@ -129,7 +130,36 @@ be specified. Possible values: ["CONTAINS"]`, ForceNew: true, ValidateFunc: verify.ValidateEnum([]string{"ASCENDING", "DESCENDING", ""}), Description: `Indicates that this field supports ordering by the specified order or comparing using =, <, <=, >, >=. -Only one of 'order' and 'arrayConfig' can be specified. Possible values: ["ASCENDING", "DESCENDING"]`, +Only one of 'order', 'arrayConfig', and 'vectorConfig' can be specified. Possible values: ["ASCENDING", "DESCENDING"]`, + }, + "vector_config": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Indicates that this field supports vector search operations. Only one of 'order', 'arrayConfig', and +'vectorConfig' can be specified. 
Vector Fields should come after the field path '__name__'.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dimension": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Description: `The resulting index will only include vectors of this dimension, and can be used for vector search +with the same dimension.`, + }, + "flat": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Indicates the vector index is a flat index.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{}, + }, + }, + }, + }, }, }, }, @@ -237,6 +267,7 @@ func resourceFirestoreIndexCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -245,6 +276,7 @@ func resourceFirestoreIndexCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.FirestoreIndex409Retry}, }) if err != nil { @@ -322,12 +354,14 @@ func resourceFirestoreIndexRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.FirestoreIndex409Retry}, }) if err != nil { @@ -381,6 +415,8 @@ func resourceFirestoreIndexDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Index %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -390,6 +426,7 @@ func resourceFirestoreIndexDelete(d 
*schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.FirestoreIndex409Retry}, }) if err != nil { @@ -467,9 +504,10 @@ func flattenFirestoreIndexFields(v interface{}, d *schema.ResourceData, config * continue } transformed = append(transformed, map[string]interface{}{ - "field_path": flattenFirestoreIndexFieldsFieldPath(original["fieldPath"], d, config), - "order": flattenFirestoreIndexFieldsOrder(original["order"], d, config), - "array_config": flattenFirestoreIndexFieldsArrayConfig(original["arrayConfig"], d, config), + "field_path": flattenFirestoreIndexFieldsFieldPath(original["fieldPath"], d, config), + "order": flattenFirestoreIndexFieldsOrder(original["order"], d, config), + "array_config": flattenFirestoreIndexFieldsArrayConfig(original["arrayConfig"], d, config), + "vector_config": flattenFirestoreIndexFieldsVectorConfig(original["vectorConfig"], d, config), }) } return transformed @@ -486,6 +524,46 @@ func flattenFirestoreIndexFieldsArrayConfig(v interface{}, d *schema.ResourceDat return v } +func flattenFirestoreIndexFieldsVectorConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["dimension"] = + flattenFirestoreIndexFieldsVectorConfigDimension(original["dimension"], d, config) + transformed["flat"] = + flattenFirestoreIndexFieldsVectorConfigFlat(original["flat"], d, config) + return []interface{}{transformed} +} +func flattenFirestoreIndexFieldsVectorConfigDimension(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == 
nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenFirestoreIndexFieldsVectorConfigFlat(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + transformed := make(map[string]interface{}) + return []interface{}{transformed} +} + func expandFirestoreIndexDatabase(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -533,6 +611,13 @@ func expandFirestoreIndexFields(v interface{}, d tpgresource.TerraformResourceDa transformed["arrayConfig"] = transformedArrayConfig } + transformedVectorConfig, err := expandFirestoreIndexFieldsVectorConfig(original["vector_config"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedVectorConfig); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["vectorConfig"] = transformedVectorConfig + } + req = append(req, transformed) } return req, nil @@ -550,6 +635,51 @@ func expandFirestoreIndexFieldsArrayConfig(v interface{}, d tpgresource.Terrafor return v, nil } +func expandFirestoreIndexFieldsVectorConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedDimension, err := expandFirestoreIndexFieldsVectorConfigDimension(original["dimension"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedDimension); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["dimension"] = transformedDimension + } + + transformedFlat, err := 
expandFirestoreIndexFieldsVectorConfigFlat(original["flat"], d, config) + if err != nil { + return nil, err + } else { + transformed["flat"] = transformedFlat + } + + return transformed, nil +} + +func expandFirestoreIndexFieldsVectorConfigDimension(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandFirestoreIndexFieldsVectorConfigFlat(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 { + return nil, nil + } + + if l[0] == nil { + transformed := make(map[string]interface{}) + return transformed, nil + } + transformed := make(map[string]interface{}) + + return transformed, nil +} + func resourceFirestoreIndexEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { // We've added project / database / collection as split fields of the name, but // the API doesn't expect them. Make sure we remove them from any requests. 
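The `vector_config` flatteners above (and the RPO flatteners later in this diff) repeat the same defensive pattern: an int64 field can come back from the API either as a string (the fixed64 JSON encoding) or as an ordinary JSON number (float64), and both are normalized to a Go `int`. A minimal standalone sketch of that pattern, using `strconv.ParseInt` in place of the provider's `tpgresource.StringToFixed64` (the helper name `normalizeInt` is ours, not the provider's):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// normalizeInt mirrors the flattener pattern used throughout this diff:
// API int64 fields may arrive as JSON strings (fixed64 encoding) or as
// float64 (ordinary JSON numbers); both are normalized to int.
// This is an illustrative sketch, not provider code.
func normalizeInt(v interface{}) interface{} {
	if s, ok := v.(string); ok {
		if n, err := strconv.ParseInt(s, 10, 64); err == nil {
			return int(n)
		}
	}
	if f, ok := v.(float64); ok {
		return int(f)
	}
	return v // unexpected types fall through for the caller to handle
}

func main() {
	var payload map[string]interface{}
	// "dimension" as a fixed64-style string, "hours" as a plain number.
	if err := json.Unmarshal([]byte(`{"dimension": "128", "hours": 12}`), &payload); err != nil {
		panic(err)
	}
	fmt.Println(normalizeInt(payload["dimension"])) // 128
	fmt.Println(normalizeInt(payload["hours"]))     // 12
}
```

Returning the value unchanged in the fallthrough case matches the `// let terraform core handle it otherwise` comments in the generated code: anything that is neither a string nor a number is passed through for Terraform core to reject.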
diff --git a/google-beta/services/firestore/resource_firestore_index_generated_test.go b/google-beta/services/firestore/resource_firestore_index_generated_test.go index 509850bd6a..3abf40191a 100644 --- a/google-beta/services/firestore/resource_firestore_index_generated_test.go +++ b/google-beta/services/firestore/resource_firestore_index_generated_test.go @@ -150,6 +150,70 @@ resource "google_firestore_index" "my-index" { `, context) } +func TestAccFirestoreIndex_firestoreIndexVectorExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project_id": envvar.GetTestProjectFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckFirestoreIndexDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccFirestoreIndex_firestoreIndexVectorExample(context), + }, + { + ResourceName: "google_firestore_index.my-index", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"database", "collection"}, + }, + }, + }) +} + +func testAccFirestoreIndex_firestoreIndexVectorExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_firestore_database" "database" { + project = "%{project_id}" + name = "tf-test-database-id-vector%{random_suffix}" + location_id = "nam5" + type = "FIRESTORE_NATIVE" + + delete_protection_state = "DELETE_PROTECTION_DISABLED" + deletion_policy = "DELETE" +} + +resource "google_firestore_index" "my-index" { + project = "%{project_id}" + database = google_firestore_database.database.name + collection = "atestcollection" + + fields { + field_path = "field_name" + order = "ASCENDING" + } + + fields { + field_path = "__name__" + order = "ASCENDING" + } + + fields { + field_path = "description" + vector_config { + dimension = 128 + flat {} + } + } +} +`, context) +} + func 
testAccCheckFirestoreIndexDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s *terraform.State) error { for name, rs := range s.RootModule().Resources { diff --git a/google-beta/services/gkebackup/resource_gke_backup_backup_plan.go b/google-beta/services/gkebackup/resource_gke_backup_backup_plan.go index b720944c5b..42f09d8446 100644 --- a/google-beta/services/gkebackup/resource_gke_backup_backup_plan.go +++ b/google-beta/services/gkebackup/resource_gke_backup_backup_plan.go @@ -20,6 +20,7 @@ package gkebackup import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -29,6 +30,7 @@ import ( "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/verify" ) func ResourceGKEBackupBackupPlan() *schema.Resource { @@ -179,6 +181,8 @@ included in the scope of a Backup.`, Optional: true, Description: `A standard cron string that defines a repeating schedule for creating Backups via this BackupPlan. +This is mutually exclusive with the rpoConfig field since at most one +schedule can be defined for a BackupPlan. If this is defined, then backupRetainDays must also be defined.`, }, "paused": { @@ -187,6 +191,136 @@ If this is defined, then backupRetainDays must also be defined.`, Optional: true, Description: `This flag denotes whether automatic Backup creation is paused for this BackupPlan.`, }, + "rpo_config": { + Type: schema.TypeList, + Optional: true, + Description: `Defines the RPO schedule configuration for this BackupPlan. This is mutually +exclusive with the cronSchedule field since at most one schedule can be defined +for a BackupPlan. 
If this is defined, then backupRetainDays must also be defined.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "target_rpo_minutes": { + Type: schema.TypeInt, + Required: true, + Description: `Defines the target RPO for the BackupPlan in minutes, which means the target +maximum data loss in time that is acceptable for this BackupPlan. This must be +at least 60, i.e., 1 hour, and at most 86400, i.e., 60 days.`, + }, + "exclusion_windows": { + Type: schema.TypeList, + Optional: true, + Description: `User specified time windows during which backup can NOT happen for this BackupPlan. +Backups should start and finish outside of any given exclusion window. Note: backup +jobs will be scheduled to start and finish outside the duration of the window as +much as possible, but running jobs will not get canceled when they run into the window. +All the time and date values in exclusionWindows entry in the API are in UTC. We +only allow <=1 recurrence (daily or weekly) exclusion window for a BackupPlan, with no +restriction on the number of single occurrence windows.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "duration": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidateDuration(), + Description: `Specifies duration of the window in seconds with up to nine fractional digits, +terminated by 's'. Example: "3.5s". 
Restrictions for duration based on the +recurrence type to allow some time for backup to happen: + - single_occurrence_date: no restriction + - daily window: duration < 24 hours + - weekly window: + - days of week includes all seven days of a week: duration < 24 hours + - all other weekly window: duration < 168 hours (i.e., 24 * 7 hours)`, + }, + "start_time": { + Type: schema.TypeList, + Required: true, + Description: `Specifies the start time of the window using time of the day in UTC.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "hours": { + Type: schema.TypeInt, + Optional: true, + Description: `Hours of day in 24 hour format.`, + }, + "minutes": { + Type: schema.TypeInt, + Optional: true, + Description: `Minutes of hour of day.`, + }, + "nanos": { + Type: schema.TypeInt, + Optional: true, + Description: `Fractions of seconds in nanoseconds.`, + }, + "seconds": { + Type: schema.TypeInt, + Optional: true, + Description: `Seconds of minutes of the time.`, + }, + }, + }, + }, + "daily": { + Type: schema.TypeBool, + Optional: true, + Description: `The exclusion window occurs every day if set to "True". +Specifying this field to "False" is an error. +Only one of singleOccurrenceDate, daily and daysOfWeek may be set.`, + }, + "days_of_week": { + Type: schema.TypeList, + Optional: true, + Description: `The exclusion window occurs on these days of each week in UTC. +Only one of singleOccurrenceDate, daily and daysOfWeek may be set.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "days_of_week": { + Type: schema.TypeList, + Optional: true, + Description: `A list of days of week. 
Possible values: ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY", "SATURDAY", "SUNDAY"]`, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidateEnum([]string{"MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY", "SATURDAY", "SUNDAY"}), + }, + }, + }, + }, + }, + "single_occurrence_date": { + Type: schema.TypeList, + Optional: true, + Description: `No recurrence. The exclusion window occurs only once and on this date in UTC. +Only one of singleOccurrenceDate, daily and daysOfWeek may be set.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "day": { + Type: schema.TypeInt, + Optional: true, + Description: `Day of a month.`, + }, + "month": { + Type: schema.TypeInt, + Optional: true, + Description: `Month of a year.`, + }, + "year": { + Type: schema.TypeInt, + Optional: true, + Description: `Year of the date.`, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, }, }, }, @@ -247,7 +381,9 @@ subject to automatic deletion. Updating this field does NOT affect existing Backups under it. Backups created AFTER a successful update will automatically pick up the new value. NOTE: backupRetainDays must be >= backupDeleteLockDays. -If cronSchedule is defined, then this must be <= 360 * the creation interval.]`, +If cronSchedule is defined, then this must be <= 360 * the creation interval. 
+If rpo_config is defined, then this must be +<= 360 * targetRpoMinutes/(1440 minutes/day)`, }, "locked": { Type: schema.TypeBool, @@ -390,6 +526,7 @@ func resourceGKEBackupBackupPlanCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -398,6 +535,7 @@ func resourceGKEBackupBackupPlanCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating BackupPlan: %s", err) @@ -464,12 +602,14 @@ func resourceGKEBackupBackupPlanRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEBackupBackupPlan %q", d.Id())) @@ -587,6 +727,7 @@ func resourceGKEBackupBackupPlanUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating BackupPlan %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -634,6 +775,7 @@ func resourceGKEBackupBackupPlanUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -681,6 +823,8 @@ func resourceGKEBackupBackupPlanDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting BackupPlan %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -690,6 +834,7 @@ func resourceGKEBackupBackupPlanDelete(d *schema.ResourceData, meta interface{}) UserAgent: 
userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "BackupPlan") @@ -829,6 +974,8 @@ func flattenGKEBackupBackupPlanBackupSchedule(v interface{}, d *schema.ResourceD flattenGKEBackupBackupPlanBackupScheduleCronSchedule(original["cronSchedule"], d, config) transformed["paused"] = flattenGKEBackupBackupPlanBackupSchedulePaused(original["paused"], d, config) + transformed["rpo_config"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfig(original["rpoConfig"], d, config) return []interface{}{transformed} } func flattenGKEBackupBackupPlanBackupScheduleCronSchedule(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -839,6 +986,240 @@ func flattenGKEBackupBackupPlanBackupSchedulePaused(v interface{}, d *schema.Res return v } +func flattenGKEBackupBackupPlanBackupScheduleRpoConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["target_rpo_minutes"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfigTargetRpoMinutes(original["targetRpoMinutes"], d, config) + transformed["exclusion_windows"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindows(original["exclusionWindows"], d, config) + return []interface{}{transformed} +} +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigTargetRpoMinutes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle 
it otherwise +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindows(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "start_time": flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTime(original["startTime"], d, config), + "duration": flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDuration(original["duration"], d, config), + "single_occurrence_date": flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDate(original["singleOccurrenceDate"], d, config), + "daily": flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaily(original["daily"], d, config), + "days_of_week": flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaysOfWeek(original["daysOfWeek"], d, config), + }) + } + return transformed +} +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["hours"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeHours(original["hours"], d, config) + transformed["minutes"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeMinutes(original["minutes"], d, config) + transformed["seconds"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeSeconds(original["seconds"], d, config) + transformed["nanos"] = + 
flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeNanos(original["nanos"], d, config) + return []interface{}{transformed} +} +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeHours(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeMinutes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeSeconds(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeNanos(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if 
intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDuration(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDate(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["year"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateYear(original["year"], d, config) + transformed["month"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateMonth(original["month"], d, config) + transformed["day"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateDay(original["day"], d, config) + return []interface{}{transformed} +} +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateYear(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateMonth(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // 
Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateDay(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaily(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaysOfWeek(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["days_of_week"] = + flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaysOfWeekDaysOfWeek(original["daysOfWeek"], d, config) + return []interface{}{transformed} +} +func flattenGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaysOfWeekDaysOfWeek(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenGKEBackupBackupPlanEtag(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -1080,6 +1461,13 @@ func expandGKEBackupBackupPlanBackupSchedule(v interface{}, 
d tpgresource.Terraf transformed["paused"] = transformedPaused } + transformedRpoConfig, err := expandGKEBackupBackupPlanBackupScheduleRpoConfig(original["rpo_config"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedRpoConfig); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["rpoConfig"] = transformedRpoConfig + } + return transformed, nil } @@ -1091,6 +1479,218 @@ func expandGKEBackupBackupPlanBackupSchedulePaused(v interface{}, d tpgresource. return v, nil } +func expandGKEBackupBackupPlanBackupScheduleRpoConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedTargetRpoMinutes, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigTargetRpoMinutes(original["target_rpo_minutes"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTargetRpoMinutes); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["targetRpoMinutes"] = transformedTargetRpoMinutes + } + + transformedExclusionWindows, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindows(original["exclusion_windows"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedExclusionWindows); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["exclusionWindows"] = transformedExclusionWindows + } + + return transformed, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigTargetRpoMinutes(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindows(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) 
(interface{}, error) { + l := v.([]interface{}) + req := make([]interface{}, 0, len(l)) + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedStartTime, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTime(original["start_time"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedStartTime); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["startTime"] = transformedStartTime + } + + transformedDuration, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDuration(original["duration"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedDuration); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["duration"] = transformedDuration + } + + transformedSingleOccurrenceDate, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDate(original["single_occurrence_date"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedSingleOccurrenceDate); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["singleOccurrenceDate"] = transformedSingleOccurrenceDate + } + + transformedDaily, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaily(original["daily"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedDaily); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["daily"] = transformedDaily + } + + transformedDaysOfWeek, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaysOfWeek(original["days_of_week"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedDaysOfWeek); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["daysOfWeek"] = transformedDaysOfWeek + } + + req = append(req, 
transformed) + } + return req, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTime(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedHours, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeHours(original["hours"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedHours); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["hours"] = transformedHours + } + + transformedMinutes, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeMinutes(original["minutes"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedMinutes); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["minutes"] = transformedMinutes + } + + transformedSeconds, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeSeconds(original["seconds"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedSeconds); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["seconds"] = transformedSeconds + } + + transformedNanos, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeNanos(original["nanos"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedNanos); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["nanos"] = transformedNanos + } + + return transformed, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeHours(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func 
expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeMinutes(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeSeconds(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsStartTimeNanos(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDuration(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDate(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedYear, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateYear(original["year"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedYear); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["year"] = transformedYear + } + + transformedMonth, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateMonth(original["month"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedMonth); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["month"] = transformedMonth + } + + transformedDay, err := 
expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateDay(original["day"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedDay); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["day"] = transformedDay + } + + return transformed, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateYear(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateMonth(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsSingleOccurrenceDateDay(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaily(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaysOfWeek(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedDaysOfWeek, err := expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaysOfWeekDaysOfWeek(original["days_of_week"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedDaysOfWeek); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["daysOfWeek"] = transformedDaysOfWeek + } + + return transformed, nil +} + +func 
expandGKEBackupBackupPlanBackupScheduleRpoConfigExclusionWindowsDaysOfWeekDaysOfWeek(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandGKEBackupBackupPlanDeactivated(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google-beta/services/gkebackup/resource_gke_backup_backup_plan_generated_test.go b/google-beta/services/gkebackup/resource_gke_backup_backup_plan_generated_test.go index 72693e72fe..fd6cf03f05 100644 --- a/google-beta/services/gkebackup/resource_gke_backup_backup_plan_generated_test.go +++ b/google-beta/services/gkebackup/resource_gke_backup_backup_plan_generated_test.go @@ -307,6 +307,202 @@ resource "google_gke_backup_backup_plan" "full" { `, context) } +func TestAccGKEBackupBackupPlan_gkebackupBackupplanRpoDailyWindowExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "network_name": acctest.BootstrapSharedTestNetwork(t, "gke-cluster"), + "subnetwork_name": acctest.BootstrapSubnet(t, "gke-cluster", acctest.BootstrapSharedTestNetwork(t, "gke-cluster")), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckGKEBackupBackupPlanDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccGKEBackupBackupPlan_gkebackupBackupplanRpoDailyWindowExample(context), + }, + { + ResourceName: "google_gke_backup_backup_plan.rpo_daily_window", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccGKEBackupBackupPlan_gkebackupBackupplanRpoDailyWindowExample(context map[string]interface{}) string { + 
return acctest.Nprintf(` +resource "google_container_cluster" "primary" { + name = "tf-test-rpo-daily-cluster%{random_suffix}" + location = "us-central1" + initial_node_count = 1 + workload_identity_config { + workload_pool = "%{project}.svc.id.goog" + } + addons_config { + gke_backup_agent_config { + enabled = true + } + } + deletion_protection = "%{deletion_protection}" + network = "%{network_name}" + subnetwork = "%{subnetwork_name}" +} + +resource "google_gke_backup_backup_plan" "rpo_daily_window" { + name = "tf-test-rpo-daily-window%{random_suffix}" + cluster = google_container_cluster.primary.id + location = "us-central1" + retention_policy { + backup_delete_lock_days = 30 + backup_retain_days = 180 + } + backup_schedule { + paused = true + rpo_config { + target_rpo_minutes=1440 + exclusion_windows { + start_time { + hours = 12 + } + duration = "7200s" + daily = true + } + exclusion_windows { + start_time { + hours = 8 + minutes = 40 + seconds = 1 + nanos = 100 + } + duration = "3600s" + single_occurrence_date { + year = 2024 + month = 3 + day = 16 + } + } + } + } + backup_config { + include_volume_data = true + include_secrets = true + all_namespaces = true + } +} +`, context) +} + +func TestAccGKEBackupBackupPlan_gkebackupBackupplanRpoWeeklyWindowExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "network_name": acctest.BootstrapSharedTestNetwork(t, "gke-cluster"), + "subnetwork_name": acctest.BootstrapSubnet(t, "gke-cluster", acctest.BootstrapSharedTestNetwork(t, "gke-cluster")), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckGKEBackupBackupPlanDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: 
testAccGKEBackupBackupPlan_gkebackupBackupplanRpoWeeklyWindowExample(context), + }, + { + ResourceName: "google_gke_backup_backup_plan.rpo_weekly_window", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccGKEBackupBackupPlan_gkebackupBackupplanRpoWeeklyWindowExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_container_cluster" "primary" { + name = "tf-test-rpo-weekly-cluster%{random_suffix}" + location = "us-central1" + initial_node_count = 1 + workload_identity_config { + workload_pool = "%{project}.svc.id.goog" + } + addons_config { + gke_backup_agent_config { + enabled = true + } + } + deletion_protection = "%{deletion_protection}" + network = "%{network_name}" + subnetwork = "%{subnetwork_name}" +} + +resource "google_gke_backup_backup_plan" "rpo_weekly_window" { + name = "tf-test-rpo-weekly-window%{random_suffix}" + cluster = google_container_cluster.primary.id + location = "us-central1" + retention_policy { + backup_delete_lock_days = 30 + backup_retain_days = 180 + } + backup_schedule { + paused = true + rpo_config { + target_rpo_minutes=1440 + exclusion_windows { + start_time { + hours = 1 + minutes = 23 + } + duration = "1800s" + days_of_week { + days_of_week = ["MONDAY", "THURSDAY"] + } + } + exclusion_windows { + start_time { + hours = 12 + } + duration = "3600s" + single_occurrence_date { + year = 2024 + month = 3 + day = 17 + } + } + exclusion_windows { + start_time { + hours = 8 + minutes = 40 + } + duration = "600s" + single_occurrence_date { + year = 2024 + month = 3 + day = 18 + } + } + } + } + backup_config { + include_volume_data = true + include_secrets = true + all_namespaces = true + } +} +`, context) +} + func testAccCheckGKEBackupBackupPlanDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s *terraform.State) error { for name, rs := range s.RootModule().Resources { 
diff --git a/google-beta/services/gkebackup/resource_gke_backup_backup_plan_test.go b/google-beta/services/gkebackup/resource_gke_backup_backup_plan_test.go index 39cab06d47..afe5db9dcf 100644 --- a/google-beta/services/gkebackup/resource_gke_backup_backup_plan_test.go +++ b/google-beta/services/gkebackup/resource_gke_backup_backup_plan_test.go @@ -44,6 +44,33 @@ func TestAccGKEBackupBackupPlan_update(t *testing.T) { ImportStateVerify: true, ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, + { + Config: testAccGKEBackupBackupPlan_rpo_daily_window(context), + }, + { + ResourceName: "google_gke_backup_backup_plan.backupplan", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, + }, + { + Config: testAccGKEBackupBackupPlan_rpo_weekly_window(context), + }, + { + ResourceName: "google_gke_backup_backup_plan.backupplan", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, + }, + { + Config: testAccGKEBackupBackupPlan_full(context), + }, + { + ResourceName: "google_gke_backup_backup_plan.backupplan", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, + }, }, }) } @@ -133,3 +160,164 @@ resource "google_gke_backup_backup_plan" "backupplan" { } `, context) } + +func testAccGKEBackupBackupPlan_rpo_daily_window(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_container_cluster" "primary" { + name = "tf-test-testcluster%{random_suffix}" + location = "us-central1" + initial_node_count = 1 + workload_identity_config { + workload_pool = "%{project}.svc.id.goog" + } + addons_config { + gke_backup_agent_config { + enabled = true + } + } + deletion_protection = false + network = "%{network_name}" + subnetwork = "%{subnetwork_name}" +} + +resource "google_gke_backup_backup_plan" "backupplan" { + name = "tf-test-testplan%{random_suffix}" + 
cluster = google_container_cluster.primary.id + location = "us-central1" + retention_policy { + backup_delete_lock_days = 30 + backup_retain_days = 180 + } + backup_schedule { + paused = true + rpo_config { + target_rpo_minutes=1440 + exclusion_windows { + start_time { + hours = 12 + } + duration = "7200s" + daily = true + } + exclusion_windows { + start_time { + hours = 8 + minutes = 40 + seconds = 1 + } + duration = "3600s" + single_occurrence_date { + year = 2024 + month = 3 + day = 16 + } + } + } + } + backup_config { + include_volume_data = true + include_secrets = true + selected_applications { + namespaced_names { + name = "app1" + namespace = "ns1" + } + namespaced_names { + name = "app2" + namespace = "ns2" + } + } + } + labels = { + "some-key-2": "some-value-2" + } +} +`, context) +} + +func testAccGKEBackupBackupPlan_rpo_weekly_window(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_container_cluster" "primary" { + name = "tf-test-testcluster%{random_suffix}" + location = "us-central1" + initial_node_count = 1 + workload_identity_config { + workload_pool = "%{project}.svc.id.goog" + } + addons_config { + gke_backup_agent_config { + enabled = true + } + } + deletion_protection = false + network = "%{network_name}" + subnetwork = "%{subnetwork_name}" +} + +resource "google_gke_backup_backup_plan" "backupplan" { + name = "tf-test-testplan%{random_suffix}" + cluster = google_container_cluster.primary.id + location = "us-central1" + retention_policy { + backup_delete_lock_days = 30 + backup_retain_days = 180 + } + backup_schedule { + paused = true + rpo_config { + target_rpo_minutes=1440 + exclusion_windows { + start_time { + hours = 1 + minutes = 23 + } + duration = "1800s" + days_of_week { + days_of_week = ["MONDAY", "THURSDAY"] + } + } + exclusion_windows { + start_time { + hours = 12 + } + duration = "3600s" + single_occurrence_date { + year = 2024 + month = 3 + day = 17 + } + } + exclusion_windows { + start_time {
hours = 8 + minutes = 40 + } + duration = "600s" + single_occurrence_date { + year = 2024 + month = 3 + day = 18 + } + } + } + } + backup_config { + include_volume_data = true + include_secrets = true + selected_applications { + namespaced_names { + name = "app1" + namespace = "ns1" + } + namespaced_names { + name = "app2" + namespace = "ns2" + } + } + } + labels = { + "some-key-2": "some-value-2" + } +} +`, context) +} diff --git a/google-beta/services/gkebackup/resource_gke_backup_restore_plan.go b/google-beta/services/gkebackup/resource_gke_backup_restore_plan.go index 4a66aa9b8f..311570a470 100644 --- a/google-beta/services/gkebackup/resource_gke_backup_restore_plan.go +++ b/google-beta/services/gkebackup/resource_gke_backup_restore_plan.go @@ -20,6 +20,7 @@ package gkebackup import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -520,6 +521,7 @@ func resourceGKEBackupRestorePlanCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -528,6 +530,7 @@ func resourceGKEBackupRestorePlanCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating RestorePlan: %s", err) @@ -594,12 +597,14 @@ func resourceGKEBackupRestorePlanRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEBackupRestorePlan %q", d.Id())) @@ -687,6 +692,7 @@ func resourceGKEBackupRestorePlanUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating RestorePlan %q: %#v", d.Id(), 
obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -722,6 +728,7 @@ func resourceGKEBackupRestorePlanUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -769,6 +776,8 @@ func resourceGKEBackupRestorePlanDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting RestorePlan %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -778,6 +787,7 @@ func resourceGKEBackupRestorePlanDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "RestorePlan") diff --git a/google-beta/services/gkehub/resource_gke_hub_membership.go b/google-beta/services/gkehub/resource_gke_hub_membership.go index 215aab3e01..5d15e9d0cc 100644 --- a/google-beta/services/gkehub/resource_gke_hub_membership.go +++ b/google-beta/services/gkehub/resource_gke_hub_membership.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -237,6 +238,7 @@ func resourceGKEHubMembershipCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -245,6 +247,7 @@ func resourceGKEHubMembershipCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Membership: %s", err) @@ -311,12 +314,14 @@ func resourceGKEHubMembershipRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEHubMembership %q", d.Id())) @@ -392,6 +397,7 @@ func resourceGKEHubMembershipUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Membership %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -427,6 +433,7 @@ func resourceGKEHubMembershipUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -474,6 +481,8 @@ func resourceGKEHubMembershipDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Membership %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -483,6 +492,7 @@ func resourceGKEHubMembershipDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Membership") diff --git a/google-beta/services/gkehub2/resource_gke_hub_feature.go b/google-beta/services/gkehub2/resource_gke_hub_feature.go index 347a957e9f..2406f7d4ed 100644 --- a/google-beta/services/gkehub2/resource_gke_hub_feature.go +++ b/google-beta/services/gkehub2/resource_gke_hub_feature.go @@ -20,6 +20,7 @@ package gkehub2 import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -756,6 +757,7 @@ func resourceGKEHub2FeatureCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -764,6 +766,7 @@ func 
resourceGKEHub2FeatureCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Feature: %s", err) @@ -826,12 +829,14 @@ func resourceGKEHub2FeatureRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEHub2Feature %q", d.Id())) @@ -916,6 +921,7 @@ func resourceGKEHub2FeatureUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating Feature %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("spec") { @@ -951,6 +957,7 @@ func resourceGKEHub2FeatureUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -998,6 +1005,8 @@ func resourceGKEHub2FeatureDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Feature %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1007,6 +1016,7 @@ func resourceGKEHub2FeatureDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Feature") diff --git a/google-beta/services/gkehub2/resource_gke_hub_fleet.go b/google-beta/services/gkehub2/resource_gke_hub_fleet.go index 56890ad29c..51421462c6 100644 --- a/google-beta/services/gkehub2/resource_gke_hub_fleet.go +++ 
b/google-beta/services/gkehub2/resource_gke_hub_fleet.go @@ -20,6 +20,7 @@ package gkehub2 import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -211,6 +212,7 @@ func resourceGKEHub2FleetCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -219,6 +221,7 @@ func resourceGKEHub2FleetCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Fleet: %s", err) @@ -271,12 +274,14 @@ func resourceGKEHub2FleetRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEHub2Fleet %q", d.Id())) @@ -346,6 +351,7 @@ func resourceGKEHub2FleetUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Fleet %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -377,6 +383,7 @@ func resourceGKEHub2FleetUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -424,6 +431,8 @@ func resourceGKEHub2FleetDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Fleet %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -433,6 +442,7 @@ func resourceGKEHub2FleetDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, 
Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Fleet") diff --git a/google-beta/services/gkehub2/resource_gke_hub_membership_binding.go b/google-beta/services/gkehub2/resource_gke_hub_membership_binding.go index e230c27951..1c560c17bc 100644 --- a/google-beta/services/gkehub2/resource_gke_hub_membership_binding.go +++ b/google-beta/services/gkehub2/resource_gke_hub_membership_binding.go @@ -20,6 +20,7 @@ package gkehub2 import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -192,6 +193,7 @@ func resourceGKEHub2MembershipBindingCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -200,6 +202,7 @@ func resourceGKEHub2MembershipBindingCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating MembershipBinding: %s", err) @@ -266,12 +269,14 @@ func resourceGKEHub2MembershipBindingRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEHub2MembershipBinding %q", d.Id())) @@ -350,6 +355,7 @@ func resourceGKEHub2MembershipBindingUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating MembershipBinding %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("scope") { @@ -381,6 +387,7 @@ func resourceGKEHub2MembershipBindingUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -428,6 +435,8 @@ func resourceGKEHub2MembershipBindingDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting MembershipBinding %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -437,6 +446,7 @@ func resourceGKEHub2MembershipBindingDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "MembershipBinding") diff --git a/google-beta/services/gkehub2/resource_gke_hub_membership_rbac_role_binding.go b/google-beta/services/gkehub2/resource_gke_hub_membership_rbac_role_binding.go index 366bb493dd..a435fefda4 100644 --- a/google-beta/services/gkehub2/resource_gke_hub_membership_rbac_role_binding.go +++ b/google-beta/services/gkehub2/resource_gke_hub_membership_rbac_role_binding.go @@ -20,6 +20,7 @@ package gkehub2 import ( "fmt" "log" + "net/http" "reflect" "time" @@ -186,6 +187,7 @@ func resourceGKEHub2MembershipRBACRoleBindingCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -194,6 +196,7 @@ func resourceGKEHub2MembershipRBACRoleBindingCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating MembershipRBACRoleBinding: %s", err) @@ -260,12 +263,14 @@ func resourceGKEHub2MembershipRBACRoleBindingRead(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: 
headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEHub2MembershipRBACRoleBinding %q", d.Id())) @@ -330,6 +335,8 @@ func resourceGKEHub2MembershipRBACRoleBindingDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting MembershipRBACRoleBinding %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -339,6 +346,7 @@ func resourceGKEHub2MembershipRBACRoleBindingDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "MembershipRBACRoleBinding") diff --git a/google-beta/services/gkehub2/resource_gke_hub_namespace.go b/google-beta/services/gkehub2/resource_gke_hub_namespace.go index afb2b7f513..53f5303149 100644 --- a/google-beta/services/gkehub2/resource_gke_hub_namespace.go +++ b/google-beta/services/gkehub2/resource_gke_hub_namespace.go @@ -20,6 +20,7 @@ package gkehub2 import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -202,6 +203,7 @@ func resourceGKEHub2NamespaceCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -210,6 +212,7 @@ func resourceGKEHub2NamespaceCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Namespace: %s", err) @@ -276,12 +279,14 @@ func resourceGKEHub2NamespaceRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: 
headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEHub2Namespace %q", d.Id())) @@ -363,6 +368,7 @@ func resourceGKEHub2NamespaceUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Namespace %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("namespace_labels") { @@ -394,6 +400,7 @@ func resourceGKEHub2NamespaceUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -441,6 +448,8 @@ func resourceGKEHub2NamespaceDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Namespace %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -450,6 +459,7 @@ func resourceGKEHub2NamespaceDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Namespace") diff --git a/google-beta/services/gkehub2/resource_gke_hub_scope.go b/google-beta/services/gkehub2/resource_gke_hub_scope.go index 1edf1bfcdc..d2a45b95df 100644 --- a/google-beta/services/gkehub2/resource_gke_hub_scope.go +++ b/google-beta/services/gkehub2/resource_gke_hub_scope.go @@ -20,6 +20,7 @@ package gkehub2 import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -183,6 +184,7 @@ func resourceGKEHub2ScopeCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -191,6 +193,7 @@ func resourceGKEHub2ScopeCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) 
if err != nil { return fmt.Errorf("Error creating Scope: %s", err) @@ -257,12 +260,14 @@ func resourceGKEHub2ScopeRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEHub2Scope %q", d.Id())) @@ -341,6 +346,7 @@ func resourceGKEHub2ScopeUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Scope %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("namespace_labels") { @@ -372,6 +378,7 @@ func resourceGKEHub2ScopeUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -419,6 +426,8 @@ func resourceGKEHub2ScopeDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Scope %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -428,6 +437,7 @@ func resourceGKEHub2ScopeDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Scope") diff --git a/google-beta/services/gkehub2/resource_gke_hub_scope_rbac_role_binding.go b/google-beta/services/gkehub2/resource_gke_hub_scope_rbac_role_binding.go index 9f4f551c00..d003c8b1a7 100644 --- a/google-beta/services/gkehub2/resource_gke_hub_scope_rbac_role_binding.go +++ b/google-beta/services/gkehub2/resource_gke_hub_scope_rbac_role_binding.go @@ -20,6 +20,7 @@ package gkehub2 import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ 
-225,6 +226,7 @@ func resourceGKEHub2ScopeRBACRoleBindingCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -233,6 +235,7 @@ func resourceGKEHub2ScopeRBACRoleBindingCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ScopeRBACRoleBinding: %s", err) @@ -299,12 +302,14 @@ func resourceGKEHub2ScopeRBACRoleBindingRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GKEHub2ScopeRBACRoleBinding %q", d.Id())) @@ -401,6 +406,7 @@ func resourceGKEHub2ScopeRBACRoleBindingUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating ScopeRBACRoleBinding %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("user") { @@ -440,6 +446,7 @@ func resourceGKEHub2ScopeRBACRoleBindingUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -487,6 +494,8 @@ func resourceGKEHub2ScopeRBACRoleBindingDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ScopeRBACRoleBinding %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -496,6 +505,7 @@ func resourceGKEHub2ScopeRBACRoleBindingDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, "ScopeRBACRoleBinding") diff --git a/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_admin_cluster.go b/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_admin_cluster.go index c60f14c915..61f5b9644a 100644 --- a/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_admin_cluster.go +++ b/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_admin_cluster.go @@ -20,6 +20,7 @@ package gkeonprem import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -828,6 +829,7 @@ func resourceGkeonpremBareMetalAdminClusterCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -836,6 +838,7 @@ func resourceGkeonpremBareMetalAdminClusterCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating BareMetalAdminCluster: %s", err) @@ -895,12 +898,14 @@ func resourceGkeonpremBareMetalAdminClusterRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GkeonpremBareMetalAdminCluster %q", d.Id())) @@ -1093,6 +1098,7 @@ func resourceGkeonpremBareMetalAdminClusterUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating BareMetalAdminCluster %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -1168,6 +1174,7 @@ func resourceGkeonpremBareMetalAdminClusterUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: 
headers, }) if err != nil { diff --git a/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_cluster.go b/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_cluster.go index 7e275aa64d..2e0fe39037 100644 --- a/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_cluster.go +++ b/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_cluster.go @@ -20,6 +20,7 @@ package gkeonprem import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -1336,6 +1337,7 @@ func resourceGkeonpremBareMetalClusterCreate(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1344,6 +1346,7 @@ func resourceGkeonpremBareMetalClusterCreate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating BareMetalCluster: %s", err) @@ -1403,12 +1406,14 @@ func resourceGkeonpremBareMetalClusterRead(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GkeonpremBareMetalCluster %q", d.Id())) @@ -1631,6 +1636,7 @@ func resourceGkeonpremBareMetalClusterUpdate(d *schema.ResourceData, meta interf } log.Printf("[DEBUG] Updating BareMetalCluster %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -1718,6 +1724,7 @@ func resourceGkeonpremBareMetalClusterUpdate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1765,6 +1772,8 @@ func 
resourceGkeonpremBareMetalClusterDelete(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting BareMetalCluster %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1774,6 +1783,7 @@ func resourceGkeonpremBareMetalClusterDelete(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "BareMetalCluster") diff --git a/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_node_pool.go b/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_node_pool.go index 57322cd0e4..2267304dd9 100644 --- a/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_node_pool.go +++ b/google-beta/services/gkeonprem/resource_gkeonprem_bare_metal_node_pool.go @@ -20,6 +20,7 @@ package gkeonprem import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -339,6 +340,7 @@ func resourceGkeonpremBareMetalNodePoolCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -347,6 +349,7 @@ func resourceGkeonpremBareMetalNodePoolCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating BareMetalNodePool: %s", err) @@ -406,12 +409,14 @@ func resourceGkeonpremBareMetalNodePoolRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("GkeonpremBareMetalNodePool %q", d.Id())) @@ -502,6 +507,7 @@ func resourceGkeonpremBareMetalNodePoolUpdate(d *schema.ResourceData, meta inter } log.Printf("[DEBUG] Updating BareMetalNodePool %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -537,6 +543,7 @@ func resourceGkeonpremBareMetalNodePoolUpdate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -584,6 +591,8 @@ func resourceGkeonpremBareMetalNodePoolDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting BareMetalNodePool %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -593,6 +602,7 @@ func resourceGkeonpremBareMetalNodePoolDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "BareMetalNodePool") diff --git a/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster.go b/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster.go index 8c4389ea7f..ee1d0b4573 100644 --- a/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster.go +++ b/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster.go @@ -20,6 +20,7 @@ package gkeonprem import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -255,6 +256,11 @@ full access to the cluster.`, Optional: true, Description: `A human readable description of this VMware User Cluster.`, }, + "disable_bundled_ingress": { + Type: schema.TypeBool, + Optional: true, + Description: `Disable bundled ingress.`, + }, "enable_control_plane_v2": { Type: schema.TypeBool, Optional: true, @@ -980,6 +986,12 @@ func resourceGkeonpremVmwareClusterCreate(d *schema.ResourceData, meta interface } else if 
v, ok := d.GetOkExists("enable_control_plane_v2"); !tpgresource.IsEmptyValue(reflect.ValueOf(enableControlPlaneV2Prop)) && (ok || !reflect.DeepEqual(v, enableControlPlaneV2Prop)) { obj["enableControlPlaneV2"] = enableControlPlaneV2Prop } + disableBundledIngressProp, err := expandGkeonpremVmwareClusterDisableBundledIngress(d.Get("disable_bundled_ingress"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("disable_bundled_ingress"); !tpgresource.IsEmptyValue(reflect.ValueOf(disableBundledIngressProp)) && (ok || !reflect.DeepEqual(v, disableBundledIngressProp)) { + obj["disableBundledIngress"] = disableBundledIngressProp + } upgradePolicyProp, err := expandGkeonpremVmwareClusterUpgradePolicy(d.Get("upgrade_policy"), d, config) if err != nil { return err @@ -1018,6 +1030,7 @@ func resourceGkeonpremVmwareClusterCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1026,6 +1039,7 @@ func resourceGkeonpremVmwareClusterCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating VmwareCluster: %s", err) @@ -1085,12 +1099,14 @@ func resourceGkeonpremVmwareClusterRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GkeonpremVmwareCluster %q", d.Id())) @@ -1145,6 +1161,9 @@ func resourceGkeonpremVmwareClusterRead(d *schema.ResourceData, meta interface{} if err := d.Set("enable_control_plane_v2", 
flattenGkeonpremVmwareClusterEnableControlPlaneV2(res["enableControlPlaneV2"], d, config)); err != nil { return fmt.Errorf("Error reading VmwareCluster: %s", err) } + if err := d.Set("disable_bundled_ingress", flattenGkeonpremVmwareClusterDisableBundledIngress(res["disableBundledIngress"], d, config)); err != nil { + return fmt.Errorf("Error reading VmwareCluster: %s", err) + } if err := d.Set("upgrade_policy", flattenGkeonpremVmwareClusterUpgradePolicy(res["upgradePolicy"], d, config)); err != nil { return fmt.Errorf("Error reading VmwareCluster: %s", err) } @@ -1279,6 +1298,12 @@ func resourceGkeonpremVmwareClusterUpdate(d *schema.ResourceData, meta interface } else if v, ok := d.GetOkExists("enable_control_plane_v2"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, enableControlPlaneV2Prop)) { obj["enableControlPlaneV2"] = enableControlPlaneV2Prop } + disableBundledIngressProp, err := expandGkeonpremVmwareClusterDisableBundledIngress(d.Get("disable_bundled_ingress"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("disable_bundled_ingress"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, disableBundledIngressProp)) { + obj["disableBundledIngress"] = disableBundledIngressProp + } upgradePolicyProp, err := expandGkeonpremVmwareClusterUpgradePolicy(d.Get("upgrade_policy"), d, config) if err != nil { return err @@ -1304,6 +1329,7 @@ func resourceGkeonpremVmwareClusterUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating VmwareCluster %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -1354,6 +1380,10 @@ func resourceGkeonpremVmwareClusterUpdate(d *schema.ResourceData, meta interface updateMask = append(updateMask, "enableControlPlaneV2") } + if d.HasChange("disable_bundled_ingress") { + updateMask = append(updateMask, "disableBundledIngress") + } + if d.HasChange("upgrade_policy") { updateMask 
= append(updateMask, "upgradePolicy") } @@ -1387,6 +1417,7 @@ func resourceGkeonpremVmwareClusterUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1434,6 +1465,8 @@ func resourceGkeonpremVmwareClusterDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting VmwareCluster %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1443,6 +1476,7 @@ func resourceGkeonpremVmwareClusterDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "VmwareCluster") @@ -2246,6 +2280,10 @@ func flattenGkeonpremVmwareClusterEnableControlPlaneV2(v interface{}, d *schema. return v } +func flattenGkeonpremVmwareClusterDisableBundledIngress(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenGkeonpremVmwareClusterUpgradePolicy(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil @@ -3326,6 +3364,10 @@ func expandGkeonpremVmwareClusterEnableControlPlaneV2(v interface{}, d tpgresour return v, nil } +func expandGkeonpremVmwareClusterDisableBundledIngress(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandGkeonpremVmwareClusterUpgradePolicy(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { diff --git a/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster_generated_test.go b/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster_generated_test.go index 49d00d1693..ac023d1eea 100644 --- 
a/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster_generated_test.go +++ b/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster_generated_test.go @@ -178,6 +178,7 @@ resource "google_gkeonprem_vmware_cluster" "cluster-f5lb" { } vm_tracking_enabled = true enable_control_plane_v2 = true + disable_bundled_ingress = true authorization { admin_users { username = "testuser@gmail.com" diff --git a/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster_test.go b/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster_test.go index 2c546b4fb1..e3f01098ba 100644 --- a/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster_test.go +++ b/google-beta/services/gkeonprem/resource_gkeonprem_vmware_cluster_test.go @@ -373,6 +373,7 @@ func testAccGkeonpremVmwareCluster_vmwareClusterUpdateManualLbStart(context map[ } vm_tracking_enabled = true enable_control_plane_v2 = true + disable_bundled_ingress = true upgrade_policy { control_plane_only = true } @@ -465,6 +466,7 @@ func testAccGkeonpremVmwareCluster_vmwareClusterUpdateManualLb(context map[strin } vm_tracking_enabled = false enable_control_plane_v2 = false + disable_bundled_ingress = false upgrade_policy { control_plane_only = true } diff --git a/google-beta/services/gkeonprem/resource_gkeonprem_vmware_node_pool.go b/google-beta/services/gkeonprem/resource_gkeonprem_vmware_node_pool.go index 6c74892adc..a780dcb93e 100644 --- a/google-beta/services/gkeonprem/resource_gkeonprem_vmware_node_pool.go +++ b/google-beta/services/gkeonprem/resource_gkeonprem_vmware_node_pool.go @@ -20,6 +20,7 @@ package gkeonprem import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -410,6 +411,7 @@ func resourceGkeonpremVmwareNodePoolCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -418,6 +420,7 @@ func 
resourceGkeonpremVmwareNodePoolCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating VmwareNodePool: %s", err) @@ -477,12 +480,14 @@ func resourceGkeonpremVmwareNodePoolRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GkeonpremVmwareNodePool %q", d.Id())) @@ -585,6 +590,7 @@ func resourceGkeonpremVmwareNodePoolUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating VmwareNodePool %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -624,6 +630,7 @@ func resourceGkeonpremVmwareNodePoolUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -671,6 +678,8 @@ func resourceGkeonpremVmwareNodePoolDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting VmwareNodePool %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -680,6 +689,7 @@ func resourceGkeonpremVmwareNodePoolDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "VmwareNodePool") diff --git a/google-beta/services/healthcare/resource_healthcare_consent_store.go b/google-beta/services/healthcare/resource_healthcare_consent_store.go index f5523cb84c..b524b69a05 100644 --- 
a/google-beta/services/healthcare/resource_healthcare_consent_store.go +++ b/google-beta/services/healthcare/resource_healthcare_consent_store.go @@ -20,6 +20,7 @@ package healthcare import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -159,6 +160,7 @@ func resourceHealthcareConsentStoreCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -167,6 +169,7 @@ func resourceHealthcareConsentStoreCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ConsentStore: %s", err) @@ -203,12 +206,14 @@ func resourceHealthcareConsentStoreRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("HealthcareConsentStore %q", d.Id())) @@ -268,6 +273,7 @@ func resourceHealthcareConsentStoreUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating ConsentStore %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("default_consent_ttl") { @@ -303,6 +309,7 @@ func resourceHealthcareConsentStoreUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -337,6 +344,8 @@ func resourceHealthcareConsentStoreDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ConsentStore %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ 
Config: config, @@ -346,6 +355,7 @@ func resourceHealthcareConsentStoreDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ConsentStore") diff --git a/google-beta/services/healthcare/resource_healthcare_dataset.go b/google-beta/services/healthcare/resource_healthcare_dataset.go index 56972c3ddd..3e41fe51b6 100644 --- a/google-beta/services/healthcare/resource_healthcare_dataset.go +++ b/google-beta/services/healthcare/resource_healthcare_dataset.go @@ -20,6 +20,7 @@ package healthcare import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -129,6 +130,7 @@ func resourceHealthcareDatasetCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -137,6 +139,7 @@ func resourceHealthcareDatasetCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.HealthcareDatasetNotInitialized}, }) if err != nil { @@ -180,12 +183,14 @@ func resourceHealthcareDatasetRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.HealthcareDatasetNotInitialized}, }) if err != nil { @@ -247,6 +252,7 @@ func resourceHealthcareDatasetUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Dataset %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if 
d.HasChange("time_zone") { @@ -274,6 +280,7 @@ func resourceHealthcareDatasetUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.HealthcareDatasetNotInitialized}, }) @@ -315,6 +322,8 @@ func resourceHealthcareDatasetDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Dataset %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -324,6 +333,7 @@ func resourceHealthcareDatasetDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.HealthcareDatasetNotInitialized}, }) if err != nil { diff --git a/google-beta/services/healthcare/resource_healthcare_dicom_store.go b/google-beta/services/healthcare/resource_healthcare_dicom_store.go index 377ae59b58..604a9cfd5f 100644 --- a/google-beta/services/healthcare/resource_healthcare_dicom_store.go +++ b/google-beta/services/healthcare/resource_healthcare_dicom_store.go @@ -20,6 +20,7 @@ package healthcare import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -204,6 +205,7 @@ func resourceHealthcareDicomStoreCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -212,6 +214,7 @@ func resourceHealthcareDicomStoreCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DicomStore: %s", err) @@ -248,12 +251,14 @@ func resourceHealthcareDicomStoreRead(d *schema.ResourceData, 
meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("HealthcareDicomStore %q", d.Id())) @@ -328,6 +333,7 @@ func resourceHealthcareDicomStoreUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating DicomStore %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("notification_config") { @@ -363,6 +369,7 @@ func resourceHealthcareDicomStoreUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -397,6 +404,8 @@ func resourceHealthcareDicomStoreDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting DicomStore %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -406,6 +415,7 @@ func resourceHealthcareDicomStoreDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "DicomStore") diff --git a/google-beta/services/healthcare/resource_healthcare_fhir_store.go b/google-beta/services/healthcare/resource_healthcare_fhir_store.go index b1145e8294..7b2e497296 100644 --- a/google-beta/services/healthcare/resource_healthcare_fhir_store.go +++ b/google-beta/services/healthcare/resource_healthcare_fhir_store.go @@ -20,6 +20,7 @@ package healthcare import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -439,6 +440,7 @@ func resourceHealthcareFhirStoreCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := 
make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -447,6 +449,7 @@ func resourceHealthcareFhirStoreCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FhirStore: %s", err) @@ -483,12 +486,14 @@ func resourceHealthcareFhirStoreRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("HealthcareFhirStore %q", d.Id())) @@ -620,6 +625,7 @@ func resourceHealthcareFhirStoreUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating FhirStore %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("complex_data_type_reference_parsing") { @@ -675,6 +681,7 @@ func resourceHealthcareFhirStoreUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -709,6 +716,8 @@ func resourceHealthcareFhirStoreDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting FhirStore %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -718,6 +727,7 @@ func resourceHealthcareFhirStoreDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FhirStore") diff --git a/google-beta/services/healthcare/resource_healthcare_hl7_v2_store.go 
b/google-beta/services/healthcare/resource_healthcare_hl7_v2_store.go index 37d11584ba..380afffd23 100644 --- a/google-beta/services/healthcare/resource_healthcare_hl7_v2_store.go +++ b/google-beta/services/healthcare/resource_healthcare_hl7_v2_store.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -283,6 +284,7 @@ func resourceHealthcareHl7V2StoreCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -291,6 +293,7 @@ func resourceHealthcareHl7V2StoreCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Hl7V2Store: %s", err) @@ -327,12 +330,14 @@ func resourceHealthcareHl7V2StoreRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("HealthcareHl7V2Store %q", d.Id())) @@ -425,6 +430,7 @@ func resourceHealthcareHl7V2StoreUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating Hl7V2Store %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("reject_duplicate_message") { @@ -470,6 +476,7 @@ func resourceHealthcareHl7V2StoreUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -504,6 +511,8 @@ func resourceHealthcareHl7V2StoreDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting 
Hl7V2Store %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -513,6 +522,7 @@ func resourceHealthcareHl7V2StoreDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Hl7V2Store") diff --git a/google-beta/services/iam2/resource_iam_access_boundary_policy.go b/google-beta/services/iam2/resource_iam_access_boundary_policy.go index 0fc4712c41..ce1c2de81a 100644 --- a/google-beta/services/iam2/resource_iam_access_boundary_policy.go +++ b/google-beta/services/iam2/resource_iam_access_boundary_policy.go @@ -20,6 +20,7 @@ package iam2 import ( "fmt" "log" + "net/http" "reflect" "time" @@ -184,6 +185,7 @@ func resourceIAM2AccessBoundaryPolicyCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -192,6 +194,7 @@ func resourceIAM2AccessBoundaryPolicyCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AccessBoundaryPolicy: %s", err) @@ -238,12 +241,14 @@ func resourceIAM2AccessBoundaryPolicyRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IAM2AccessBoundaryPolicy %q", d.Id())) @@ -297,6 +302,7 @@ func resourceIAM2AccessBoundaryPolicyUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating AccessBoundaryPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) 
// err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -311,6 +317,7 @@ func resourceIAM2AccessBoundaryPolicyUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -351,6 +358,8 @@ func resourceIAM2AccessBoundaryPolicyDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AccessBoundaryPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -360,6 +369,7 @@ func resourceIAM2AccessBoundaryPolicyDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AccessBoundaryPolicy") diff --git a/google-beta/services/iam2/resource_iam_deny_policy.go b/google-beta/services/iam2/resource_iam_deny_policy.go index af7b9b1b5d..32fd3102b8 100644 --- a/google-beta/services/iam2/resource_iam_deny_policy.go +++ b/google-beta/services/iam2/resource_iam_deny_policy.go @@ -20,6 +20,7 @@ package iam2 import ( "fmt" "log" + "net/http" "reflect" "time" @@ -207,6 +208,7 @@ func resourceIAM2DenyPolicyCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -215,6 +217,7 @@ func resourceIAM2DenyPolicyCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DenyPolicy: %s", err) @@ -261,12 +264,14 @@ func resourceIAM2DenyPolicyRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err 
:= transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IAM2DenyPolicy %q", d.Id())) @@ -320,6 +325,7 @@ func resourceIAM2DenyPolicyUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating DenyPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -334,6 +340,7 @@ func resourceIAM2DenyPolicyUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -374,6 +381,8 @@ func resourceIAM2DenyPolicyDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting DenyPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -383,6 +392,7 @@ func resourceIAM2DenyPolicyDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "DenyPolicy") diff --git a/google-beta/services/iambeta/resource_iam_workload_identity_pool.go b/google-beta/services/iambeta/resource_iam_workload_identity_pool.go index 907a27071d..9540404f5f 100644 --- a/google-beta/services/iambeta/resource_iam_workload_identity_pool.go +++ b/google-beta/services/iambeta/resource_iam_workload_identity_pool.go @@ -20,6 +20,7 @@ package iambeta import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -184,6 +185,7 @@ func resourceIAMBetaWorkloadIdentityPoolCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err 
:= transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -192,6 +194,7 @@ func resourceIAMBetaWorkloadIdentityPoolCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating WorkloadIdentityPool: %s", err) @@ -244,12 +247,14 @@ func resourceIAMBetaWorkloadIdentityPoolRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IAMBetaWorkloadIdentityPool %q", d.Id())) @@ -331,6 +336,7 @@ func resourceIAMBetaWorkloadIdentityPoolUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating WorkloadIdentityPool %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -366,6 +372,7 @@ func resourceIAMBetaWorkloadIdentityPoolUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -413,6 +420,8 @@ func resourceIAMBetaWorkloadIdentityPoolDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting WorkloadIdentityPool %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -422,6 +431,7 @@ func resourceIAMBetaWorkloadIdentityPoolDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "WorkloadIdentityPool") diff --git 
a/google-beta/services/iambeta/resource_iam_workload_identity_pool_provider.go b/google-beta/services/iambeta/resource_iam_workload_identity_pool_provider.go index d8c33ce448..6ff0293a95 100644 --- a/google-beta/services/iambeta/resource_iam_workload_identity_pool_provider.go +++ b/google-beta/services/iambeta/resource_iam_workload_identity_pool_provider.go @@ -20,6 +20,7 @@ package iambeta import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -401,6 +402,7 @@ func resourceIAMBetaWorkloadIdentityPoolProviderCreate(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -409,6 +411,7 @@ func resourceIAMBetaWorkloadIdentityPoolProviderCreate(d *schema.ResourceData, m UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating WorkloadIdentityPoolProvider: %s", err) @@ -461,12 +464,14 @@ func resourceIAMBetaWorkloadIdentityPoolProviderRead(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IAMBetaWorkloadIdentityPoolProvider %q", d.Id())) @@ -593,6 +598,7 @@ func resourceIAMBetaWorkloadIdentityPoolProviderUpdate(d *schema.ResourceData, m } log.Printf("[DEBUG] Updating WorkloadIdentityPoolProvider %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -650,6 +656,7 @@ func resourceIAMBetaWorkloadIdentityPoolProviderUpdate(d *schema.ResourceData, m UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -697,6 +704,8 @@ func 
resourceIAMBetaWorkloadIdentityPoolProviderDelete(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting WorkloadIdentityPoolProvider %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -706,6 +715,7 @@ func resourceIAMBetaWorkloadIdentityPoolProviderDelete(d *schema.ResourceData, m UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "WorkloadIdentityPoolProvider") diff --git a/google-beta/services/iamworkforcepool/resource_iam_workforce_pool.go b/google-beta/services/iamworkforcepool/resource_iam_workforce_pool.go index 4de69db08e..550a41dcea 100644 --- a/google-beta/services/iamworkforcepool/resource_iam_workforce_pool.go +++ b/google-beta/services/iamworkforcepool/resource_iam_workforce_pool.go @@ -20,6 +20,7 @@ package iamworkforcepool import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -234,6 +235,7 @@ func resourceIAMWorkforcePoolWorkforcePoolCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -242,6 +244,7 @@ func resourceIAMWorkforcePoolWorkforcePoolCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating WorkforcePool: %s", err) @@ -288,12 +291,14 @@ func resourceIAMWorkforcePoolWorkforcePoolRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("IAMWorkforcePoolWorkforcePool %q", d.Id())) @@ -380,6 +385,7 @@ func resourceIAMWorkforcePoolWorkforcePoolUpdate(d *schema.ResourceData, meta in } log.Printf("[DEBUG] Updating WorkforcePool %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -419,6 +425,7 @@ func resourceIAMWorkforcePoolWorkforcePoolUpdate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -460,6 +467,8 @@ func resourceIAMWorkforcePoolWorkforcePoolDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting WorkforcePool %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -469,6 +478,7 @@ func resourceIAMWorkforcePoolWorkforcePoolDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "WorkforcePool") diff --git a/google-beta/services/iamworkforcepool/resource_iam_workforce_pool_provider.go b/google-beta/services/iamworkforcepool/resource_iam_workforce_pool_provider.go index d1019eb0ff..02aea306ef 100644 --- a/google-beta/services/iamworkforcepool/resource_iam_workforce_pool_provider.go +++ b/google-beta/services/iamworkforcepool/resource_iam_workforce_pool_provider.go @@ -20,6 +20,7 @@ package iamworkforcepool import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -423,6 +424,7 @@ func resourceIAMWorkforcePoolWorkforcePoolProviderCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -431,6 +433,7 @@ func resourceIAMWorkforcePoolWorkforcePoolProviderCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating WorkforcePoolProvider: %s", err) @@ -495,12 +498,14 @@ func resourceIAMWorkforcePoolWorkforcePoolProviderRead(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IAMWorkforcePoolWorkforcePoolProvider %q", d.Id())) @@ -608,6 +613,7 @@ func resourceIAMWorkforcePoolWorkforcePoolProviderUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating WorkforcePoolProvider %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -659,6 +665,7 @@ func resourceIAMWorkforcePoolWorkforcePoolProviderUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -719,6 +726,8 @@ func resourceIAMWorkforcePoolWorkforcePoolProviderDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting WorkforcePoolProvider %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -728,6 +737,7 @@ func resourceIAMWorkforcePoolWorkforcePoolProviderDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "WorkforcePoolProvider") diff --git a/google-beta/services/iap/resource_iap_brand.go b/google-beta/services/iap/resource_iap_brand.go index c4bec247aa..5e0145ef2d 100644 --- a/google-beta/services/iap/resource_iap_brand.go +++ b/google-beta/services/iap/resource_iap_brand.go @@ -20,6 +20,7 @@ package iap import ( "fmt" "log" + 
"net/http" "reflect" "strings" "time" @@ -131,6 +132,7 @@ func resourceIapBrandCreate(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -139,6 +141,7 @@ func resourceIapBrandCreate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Brand: %s", err) @@ -249,12 +252,14 @@ func resourceIapBrandRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IapBrand %q", d.Id())) diff --git a/google-beta/services/iap/resource_iap_client.go b/google-beta/services/iap/resource_iap_client.go index 0678324c33..9423664202 100644 --- a/google-beta/services/iap/resource_iap_client.go +++ b/google-beta/services/iap/resource_iap_client.go @@ -20,6 +20,7 @@ package iap import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -104,6 +105,7 @@ func resourceIapClientCreate(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -112,6 +114,7 @@ func resourceIapClientCreate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IapClient409Operation}, }) if err != nil { @@ -157,12 +160,14 @@ func resourceIapClientRead(d *schema.ResourceData, meta interface{}) 
error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IapClient409Operation}, }) if err != nil { @@ -203,6 +208,8 @@ func resourceIapClientDelete(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Client %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -212,6 +219,7 @@ func resourceIapClientDelete(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IapClient409Operation}, }) if err != nil { diff --git a/google-beta/services/iap/resource_iap_tunnel_dest_group.go b/google-beta/services/iap/resource_iap_tunnel_dest_group.go index c0c9c82ace..576bd160df 100644 --- a/google-beta/services/iap/resource_iap_tunnel_dest_group.go +++ b/google-beta/services/iap/resource_iap_tunnel_dest_group.go @@ -20,6 +20,7 @@ package iap import ( "fmt" "log" + "net/http" "reflect" "time" @@ -137,6 +138,7 @@ func resourceIapTunnelDestGroupCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -145,6 +147,7 @@ func resourceIapTunnelDestGroupCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TunnelDestGroup: %s", err) @@ -190,12 +193,14 @@ func resourceIapTunnelDestGroupRead(d *schema.ResourceData, meta interface{}) er billingProject = 
bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IapTunnelDestGroup %q", d.Id())) @@ -253,6 +258,7 @@ func resourceIapTunnelDestGroupUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating TunnelDestGroup %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -267,6 +273,7 @@ func resourceIapTunnelDestGroupUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -305,6 +312,8 @@ func resourceIapTunnelDestGroupDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TunnelDestGroup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -314,6 +323,7 @@ func resourceIapTunnelDestGroupDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TunnelDestGroup") diff --git a/google-beta/services/identityplatform/resource_identity_platform_config.go b/google-beta/services/identityplatform/resource_identity_platform_config.go index d3b2e57ce8..06561f6e37 100644 --- a/google-beta/services/identityplatform/resource_identity_platform_config.go +++ b/google-beta/services/identityplatform/resource_identity_platform_config.go @@ -20,6 +20,7 @@ package identityplatform import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -564,12 +565,14 @@ func resourceIdentityPlatformConfigRead(d 
*schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IdentityPlatformConfig %q", d.Id())) @@ -699,6 +702,7 @@ func resourceIdentityPlatformConfigUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating Config %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("autodelete_anonymous_users") { @@ -762,6 +766,7 @@ func resourceIdentityPlatformConfigUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/identityplatform/resource_identity_platform_default_supported_idp_config.go b/google-beta/services/identityplatform/resource_identity_platform_default_supported_idp_config.go index 08c9ac287e..6cafe0b21c 100644 --- a/google-beta/services/identityplatform/resource_identity_platform_default_supported_idp_config.go +++ b/google-beta/services/identityplatform/resource_identity_platform_default_supported_idp_config.go @@ -20,6 +20,7 @@ package identityplatform import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -156,6 +157,7 @@ func resourceIdentityPlatformDefaultSupportedIdpConfigCreate(d *schema.ResourceD billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -164,6 +166,7 @@ func resourceIdentityPlatformDefaultSupportedIdpConfigCreate(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DefaultSupportedIdpConfig: %s", err) @@ -209,12 +212,14 @@ func 
resourceIdentityPlatformDefaultSupportedIdpConfigRead(d *schema.ResourceDat billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IdentityPlatformDefaultSupportedIdpConfig %q", d.Id())) @@ -281,6 +286,7 @@ func resourceIdentityPlatformDefaultSupportedIdpConfigUpdate(d *schema.ResourceD } log.Printf("[DEBUG] Updating DefaultSupportedIdpConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("client_id") { @@ -316,6 +322,7 @@ func resourceIdentityPlatformDefaultSupportedIdpConfigUpdate(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -356,6 +363,8 @@ func resourceIdentityPlatformDefaultSupportedIdpConfigDelete(d *schema.ResourceD billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting DefaultSupportedIdpConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -365,6 +374,7 @@ func resourceIdentityPlatformDefaultSupportedIdpConfigDelete(d *schema.ResourceD UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "DefaultSupportedIdpConfig") diff --git a/google-beta/services/identityplatform/resource_identity_platform_inbound_saml_config.go b/google-beta/services/identityplatform/resource_identity_platform_inbound_saml_config.go index ac7b85182a..cb0ca6acec 100644 --- a/google-beta/services/identityplatform/resource_identity_platform_inbound_saml_config.go +++ b/google-beta/services/identityplatform/resource_identity_platform_inbound_saml_config.go @@ -20,6 +20,7 @@ package identityplatform 
import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -214,6 +215,7 @@ func resourceIdentityPlatformInboundSamlConfigCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -222,6 +224,7 @@ func resourceIdentityPlatformInboundSamlConfigCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating InboundSamlConfig: %s", err) @@ -264,12 +267,14 @@ func resourceIdentityPlatformInboundSamlConfigRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IdentityPlatformInboundSamlConfig %q", d.Id())) @@ -345,6 +350,7 @@ func resourceIdentityPlatformInboundSamlConfigUpdate(d *schema.ResourceData, met } log.Printf("[DEBUG] Updating InboundSamlConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -384,6 +390,7 @@ func resourceIdentityPlatformInboundSamlConfigUpdate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -424,6 +431,8 @@ func resourceIdentityPlatformInboundSamlConfigDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting InboundSamlConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -433,6 +442,7 @@ func resourceIdentityPlatformInboundSamlConfigDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "InboundSamlConfig") diff --git a/google-beta/services/identityplatform/resource_identity_platform_oauth_idp_config.go b/google-beta/services/identityplatform/resource_identity_platform_oauth_idp_config.go index 49f3779b0e..460a00f8f3 100644 --- a/google-beta/services/identityplatform/resource_identity_platform_oauth_idp_config.go +++ b/google-beta/services/identityplatform/resource_identity_platform_oauth_idp_config.go @@ -20,6 +20,7 @@ package identityplatform import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -159,6 +160,7 @@ func resourceIdentityPlatformOauthIdpConfigCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -167,6 +169,7 @@ func resourceIdentityPlatformOauthIdpConfigCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating OauthIdpConfig: %s", err) @@ -209,12 +212,14 @@ func resourceIdentityPlatformOauthIdpConfigRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IdentityPlatformOauthIdpConfig %q", d.Id())) @@ -299,6 +304,7 @@ func resourceIdentityPlatformOauthIdpConfigUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating OauthIdpConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -342,6 +348,7 @@ func resourceIdentityPlatformOauthIdpConfigUpdate(d *schema.ResourceData, 
meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -382,6 +389,8 @@ func resourceIdentityPlatformOauthIdpConfigDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting OauthIdpConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -391,6 +400,7 @@ func resourceIdentityPlatformOauthIdpConfigDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "OauthIdpConfig") diff --git a/google-beta/services/identityplatform/resource_identity_platform_project_default_config.go b/google-beta/services/identityplatform/resource_identity_platform_project_default_config.go index 77a4e7f449..7e316d2607 100644 --- a/google-beta/services/identityplatform/resource_identity_platform_project_default_config.go +++ b/google-beta/services/identityplatform/resource_identity_platform_project_default_config.go @@ -20,6 +20,7 @@ package identityplatform import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -212,6 +213,7 @@ func resourceIdentityPlatformProjectDefaultConfigCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -220,6 +222,7 @@ func resourceIdentityPlatformProjectDefaultConfigCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ProjectDefaultConfig: %s", err) @@ -265,12 +268,14 @@ func resourceIdentityPlatformProjectDefaultConfigRead(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IdentityPlatformProjectDefaultConfig %q", d.Id())) @@ -319,6 +324,7 @@ func resourceIdentityPlatformProjectDefaultConfigUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating ProjectDefaultConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("sign_in") { @@ -346,6 +352,7 @@ func resourceIdentityPlatformProjectDefaultConfigUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -386,6 +393,8 @@ func resourceIdentityPlatformProjectDefaultConfigDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ProjectDefaultConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -395,6 +404,7 @@ func resourceIdentityPlatformProjectDefaultConfigDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ProjectDefaultConfig") diff --git a/google-beta/services/identityplatform/resource_identity_platform_tenant.go b/google-beta/services/identityplatform/resource_identity_platform_tenant.go index 947915b813..4255fb3f63 100644 --- a/google-beta/services/identityplatform/resource_identity_platform_tenant.go +++ b/google-beta/services/identityplatform/resource_identity_platform_tenant.go @@ -20,6 +20,7 @@ package identityplatform import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -143,6 +144,7 @@ func resourceIdentityPlatformTenantCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err 
:= transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -151,6 +153,7 @@ func resourceIdentityPlatformTenantCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Tenant: %s", err) @@ -211,12 +214,14 @@ func resourceIdentityPlatformTenantRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IdentityPlatformTenant %q", d.Id())) @@ -292,6 +297,7 @@ func resourceIdentityPlatformTenantUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating Tenant %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -331,6 +337,7 @@ func resourceIdentityPlatformTenantUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -371,6 +378,8 @@ func resourceIdentityPlatformTenantDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Tenant %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -380,6 +389,7 @@ func resourceIdentityPlatformTenantDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Tenant") diff --git a/google-beta/services/identityplatform/resource_identity_platform_tenant_default_supported_idp_config.go 
b/google-beta/services/identityplatform/resource_identity_platform_tenant_default_supported_idp_config.go index 843efdcacb..c8f1467ebc 100644 --- a/google-beta/services/identityplatform/resource_identity_platform_tenant_default_supported_idp_config.go +++ b/google-beta/services/identityplatform/resource_identity_platform_tenant_default_supported_idp_config.go @@ -20,6 +20,7 @@ package identityplatform import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -162,6 +163,7 @@ func resourceIdentityPlatformTenantDefaultSupportedIdpConfigCreate(d *schema.Res billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -170,6 +172,7 @@ func resourceIdentityPlatformTenantDefaultSupportedIdpConfigCreate(d *schema.Res UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TenantDefaultSupportedIdpConfig: %s", err) @@ -215,12 +218,14 @@ func resourceIdentityPlatformTenantDefaultSupportedIdpConfigRead(d *schema.Resou billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IdentityPlatformTenantDefaultSupportedIdpConfig %q", d.Id())) @@ -287,6 +292,7 @@ func resourceIdentityPlatformTenantDefaultSupportedIdpConfigUpdate(d *schema.Res } log.Printf("[DEBUG] Updating TenantDefaultSupportedIdpConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("client_id") { @@ -322,6 +328,7 @@ func resourceIdentityPlatformTenantDefaultSupportedIdpConfigUpdate(d *schema.Res UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -362,6 
+369,8 @@ func resourceIdentityPlatformTenantDefaultSupportedIdpConfigDelete(d *schema.Res billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TenantDefaultSupportedIdpConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -371,6 +380,7 @@ func resourceIdentityPlatformTenantDefaultSupportedIdpConfigDelete(d *schema.Res UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TenantDefaultSupportedIdpConfig") diff --git a/google-beta/services/identityplatform/resource_identity_platform_tenant_inbound_saml_config.go b/google-beta/services/identityplatform/resource_identity_platform_tenant_inbound_saml_config.go index a326a095f7..b81aab89c6 100644 --- a/google-beta/services/identityplatform/resource_identity_platform_tenant_inbound_saml_config.go +++ b/google-beta/services/identityplatform/resource_identity_platform_tenant_inbound_saml_config.go @@ -20,6 +20,7 @@ package identityplatform import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -220,6 +221,7 @@ func resourceIdentityPlatformTenantInboundSamlConfigCreate(d *schema.ResourceDat billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -228,6 +230,7 @@ func resourceIdentityPlatformTenantInboundSamlConfigCreate(d *schema.ResourceDat UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TenantInboundSamlConfig: %s", err) @@ -270,12 +273,14 @@ func resourceIdentityPlatformTenantInboundSamlConfigRead(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, 
UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IdentityPlatformTenantInboundSamlConfig %q", d.Id())) @@ -351,6 +356,7 @@ func resourceIdentityPlatformTenantInboundSamlConfigUpdate(d *schema.ResourceDat } log.Printf("[DEBUG] Updating TenantInboundSamlConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -390,6 +396,7 @@ func resourceIdentityPlatformTenantInboundSamlConfigUpdate(d *schema.ResourceDat UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -430,6 +437,8 @@ func resourceIdentityPlatformTenantInboundSamlConfigDelete(d *schema.ResourceDat billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TenantInboundSamlConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -439,6 +448,7 @@ func resourceIdentityPlatformTenantInboundSamlConfigDelete(d *schema.ResourceDat UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TenantInboundSamlConfig") diff --git a/google-beta/services/identityplatform/resource_identity_platform_tenant_oauth_idp_config.go b/google-beta/services/identityplatform/resource_identity_platform_tenant_oauth_idp_config.go index f6d9602429..e4c4768010 100644 --- a/google-beta/services/identityplatform/resource_identity_platform_tenant_oauth_idp_config.go +++ b/google-beta/services/identityplatform/resource_identity_platform_tenant_oauth_idp_config.go @@ -20,6 +20,7 @@ package identityplatform import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -165,6 +166,7 @@ func resourceIdentityPlatformTenantOauthIdpConfigCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -173,6 +175,7 @@ func resourceIdentityPlatformTenantOauthIdpConfigCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TenantOauthIdpConfig: %s", err) @@ -215,12 +218,14 @@ func resourceIdentityPlatformTenantOauthIdpConfigRead(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IdentityPlatformTenantOauthIdpConfig %q", d.Id())) @@ -305,6 +310,7 @@ func resourceIdentityPlatformTenantOauthIdpConfigUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating TenantOauthIdpConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -348,6 +354,7 @@ func resourceIdentityPlatformTenantOauthIdpConfigUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -388,6 +395,8 @@ func resourceIdentityPlatformTenantOauthIdpConfigDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TenantOauthIdpConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -397,6 +406,7 @@ func resourceIdentityPlatformTenantOauthIdpConfigDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TenantOauthIdpConfig") diff --git 
a/google-beta/services/integrationconnectors/resource_integration_connectors_connection.go b/google-beta/services/integrationconnectors/resource_integration_connectors_connection.go index 520598fe79..d8e3c7d3cc 100644 --- a/google-beta/services/integrationconnectors/resource_integration_connectors_connection.go +++ b/google-beta/services/integrationconnectors/resource_integration_connectors_connection.go @@ -20,6 +20,7 @@ package integrationconnectors import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -1240,6 +1241,7 @@ func resourceIntegrationConnectorsConnectionCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1248,6 +1250,7 @@ func resourceIntegrationConnectorsConnectionCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Connection: %s", err) @@ -1314,12 +1317,14 @@ func resourceIntegrationConnectorsConnectionRead(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IntegrationConnectorsConnection %q", d.Id())) @@ -1515,6 +1520,7 @@ func resourceIntegrationConnectorsConnectionUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating Connection %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -1594,6 +1600,7 @@ func resourceIntegrationConnectorsConnectionUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1644,6 
+1651,8 @@ func resourceIntegrationConnectorsConnectionDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Connection %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1653,6 +1662,7 @@ func resourceIntegrationConnectorsConnectionDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Connection") diff --git a/google-beta/services/integrationconnectors/resource_integration_connectors_connection_generated_test.go b/google-beta/services/integrationconnectors/resource_integration_connectors_connection_generated_test.go index feeec879c1..ce95792ddc 100644 --- a/google-beta/services/integrationconnectors/resource_integration_connectors_connection_generated_test.go +++ b/google-beta/services/integrationconnectors/resource_integration_connectors_connection_generated_test.go @@ -49,7 +49,7 @@ func TestAccIntegrationConnectorsConnection_integrationConnectorsConnectionBasic ResourceName: "google_integration_connectors_connection.pubsubconnection", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "name", "labels", "terraform_labels"}, + ImportStateVerifyIgnore: []string{"location", "name", "status.0.description", "labels", "terraform_labels"}, }, }, }) @@ -96,7 +96,7 @@ func TestAccIntegrationConnectorsConnection_integrationConnectorsConnectionAdvan ResourceName: "google_integration_connectors_connection.zendeskconnection", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "name", "labels", "terraform_labels"}, + ImportStateVerifyIgnore: []string{"location", "name", "status.0.description", "labels", "terraform_labels"}, }, }, }) @@ -358,7 +358,7 @@ func TestAccIntegrationConnectorsConnection_integrationConnectorsConnectionSaExa 
ResourceName: "google_integration_connectors_connection.zendeskconnection", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "name", "labels", "terraform_labels"}, + ImportStateVerifyIgnore: []string{"location", "name", "status.0.description", "labels", "terraform_labels"}, }, }, }) @@ -618,7 +618,7 @@ func TestAccIntegrationConnectorsConnection_integrationConnectorsConnectionOauth ResourceName: "google_integration_connectors_connection.boxconnection", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "name", "labels", "terraform_labels"}, + ImportStateVerifyIgnore: []string{"location", "name", "status.0.description", "labels", "terraform_labels"}, }, }, }) @@ -705,7 +705,7 @@ func TestAccIntegrationConnectorsConnection_integrationConnectorsConnectionOauth ResourceName: "google_integration_connectors_connection.boxconnection", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "name", "labels", "terraform_labels"}, + ImportStateVerifyIgnore: []string{"location", "name", "status.0.description", "labels", "terraform_labels"}, }, }, }) @@ -791,7 +791,7 @@ func TestAccIntegrationConnectorsConnection_integrationConnectorsConnectionOauth ResourceName: "google_integration_connectors_connection.boxconnection", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "name", "labels", "terraform_labels"}, + ImportStateVerifyIgnore: []string{"location", "name", "status.0.description", "labels", "terraform_labels"}, }, }, }) @@ -908,7 +908,7 @@ func TestAccIntegrationConnectorsConnection_integrationConnectorsConnectionOauth ResourceName: "google_integration_connectors_connection.boxconnection", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "name", "labels", "terraform_labels"}, + ImportStateVerifyIgnore: []string{"location", "name", "status.0.description", "labels", 
"terraform_labels"}, }, }, }) diff --git a/google-beta/services/integrationconnectors/resource_integration_connectors_endpoint_attachment.go b/google-beta/services/integrationconnectors/resource_integration_connectors_endpoint_attachment.go index c7c16071bf..afeaa99ea0 100644 --- a/google-beta/services/integrationconnectors/resource_integration_connectors_endpoint_attachment.go +++ b/google-beta/services/integrationconnectors/resource_integration_connectors_endpoint_attachment.go @@ -20,6 +20,7 @@ package integrationconnectors import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -172,6 +173,7 @@ func resourceIntegrationConnectorsEndpointAttachmentCreate(d *schema.ResourceDat billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -180,6 +182,7 @@ func resourceIntegrationConnectorsEndpointAttachmentCreate(d *schema.ResourceDat UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EndpointAttachment: %s", err) @@ -242,12 +245,14 @@ func resourceIntegrationConnectorsEndpointAttachmentRead(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IntegrationConnectorsEndpointAttachment %q", d.Id())) @@ -320,6 +325,7 @@ func resourceIntegrationConnectorsEndpointAttachmentUpdate(d *schema.ResourceDat } log.Printf("[DEBUG] Updating EndpointAttachment %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -351,6 +357,7 @@ func resourceIntegrationConnectorsEndpointAttachmentUpdate(d *schema.ResourceDat UserAgent: userAgent, Body: 
obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -398,6 +405,8 @@ func resourceIntegrationConnectorsEndpointAttachmentDelete(d *schema.ResourceDat billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EndpointAttachment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -407,6 +416,7 @@ func resourceIntegrationConnectorsEndpointAttachmentDelete(d *schema.ResourceDat UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EndpointAttachment") diff --git a/google-beta/services/integrations/resource_integrations_auth_config.go b/google-beta/services/integrations/resource_integrations_auth_config.go new file mode 100644 index 0000000000..f06bbfa988 --- /dev/null +++ b/google-beta/services/integrations/resource_integrations_auth_config.go @@ -0,0 +1,1942 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. 
+// +// ---------------------------------------------------------------------------- + +package integrations + +import ( + "fmt" + "log" + "net/http" + "reflect" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/verify" +) + +func ResourceIntegrationsAuthConfig() *schema.Resource { + return &schema.Resource{ + Create: resourceIntegrationsAuthConfigCreate, + Read: resourceIntegrationsAuthConfigRead, + Update: resourceIntegrationsAuthConfigUpdate, + Delete: resourceIntegrationsAuthConfigDelete, + + Importer: &schema.ResourceImporter{ + State: resourceIntegrationsAuthConfigImport, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(20 * time.Minute), + Update: schema.DefaultTimeout(20 * time.Minute), + Delete: schema.DefaultTimeout(20 * time.Minute), + }, + + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + + Schema: map[string]*schema.Schema{ + "display_name": { + Type: schema.TypeString, + Required: true, + Description: `The name of the auth config.`, + }, + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Location in which client needs to be provisioned.`, + }, + "client_certificate": { + Type: schema.TypeList, + Optional: true, + Description: `Raw client certificate`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "encrypted_private_key": { + Type: schema.TypeString, + Required: true, + Description: `The ssl certificate encoded in PEM format. 
This string must include the begin header and end footer lines.`, + }, + "ssl_certificate": { + Type: schema.TypeString, + Required: true, + Description: `The ssl certificate encoded in PEM format. This string must include the begin header and end footer lines.`, + }, + "passphrase": { + Type: schema.TypeString, + Optional: true, + Description: `'passphrase' should be left unset if the private key is not encrypted. +Note that 'passphrase' is not the password for the web server, but an extra layer of security to protect the private key.`, + }, + }, + }, + }, + "decrypted_credential": { + Type: schema.TypeList, + Optional: true, + Description: `Raw auth credentials.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "credential_type": { + Type: schema.TypeString, + Required: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `Credential type associated with auth configs.`, + }, + "auth_token": { + Type: schema.TypeList, + Optional: true, + Description: `Auth token credential.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "token": { + Type: schema.TypeString, + Optional: true, + Description: `The token for the auth type.`, + }, + "type": { + Type: schema.TypeString, + Optional: true, + Description: `Authentication type, e.g.
"Basic", "Bearer", etc.`, + }, + }, + }, + ConflictsWith: []string{"decrypted_credential.0.username_and_password", "decrypted_credential.0.oauth2_authorization_code", "decrypted_credential.0.oauth2_client_credentials", "decrypted_credential.0.jwt", "decrypted_credential.0.service_account_credentials", "decrypted_credential.0.oidc_token"}, + }, + "jwt": { + Type: schema.TypeList, + Optional: true, + Description: `JWT credential.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "jwt_header": { + Type: schema.TypeString, + Optional: true, + Description: `Identifies which algorithm is used to generate the signature.`, + }, + "jwt_payload": { + Type: schema.TypeString, + Optional: true, + Description: `Contains a set of claims. The JWT specification defines seven Registered Claim Names which are the standard fields commonly included in tokens. Custom claims are usually also included, depending on the purpose of the token.`, + }, + "secret": { + Type: schema.TypeString, + Optional: true, + Description: `User's pre-shared secret to sign the token.`, + }, + "jwt": { + Type: schema.TypeString, + Computed: true, + Description: `The token calculated by the header, payload and signature.`, + }, + }, + }, + ConflictsWith: []string{"decrypted_credential.0.username_and_password", "decrypted_credential.0.oauth2_authorization_code", "decrypted_credential.0.oauth2_client_credentials", "decrypted_credential.0.auth_token", "decrypted_credential.0.service_account_credentials", "decrypted_credential.0.oidc_token"}, + }, + "oauth2_authorization_code": { + Type: schema.TypeList, + Optional: true, + Description: `OAuth2 authorization code credential.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "auth_endpoint": { + Type: schema.TypeString, + Optional: true, + Description: `The auth url endpoint to send the auth code request to.`, + }, + "client_id": { + Type: schema.TypeString, + Optional: true, + Description: `The 
client's id.`, + }, + "client_secret": { + Type: schema.TypeString, + Optional: true, + Description: `The client's secret.`, + }, + "scope": { + Type: schema.TypeString, + Optional: true, + Description: `A space-delimited list of requested scope permissions.`, + }, + "token_endpoint": { + Type: schema.TypeString, + Optional: true, + Description: `The token url endpoint to send the token request to.`, + }, + }, + }, + ConflictsWith: []string{"decrypted_credential.0.username_and_password", "decrypted_credential.0.oauth2_client_credentials", "decrypted_credential.0.jwt", "decrypted_credential.0.auth_token", "decrypted_credential.0.service_account_credentials", "decrypted_credential.0.oidc_token"}, + }, + "oauth2_client_credentials": { + Type: schema.TypeList, + Optional: true, + Description: `OAuth2 client credentials.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "client_id": { + Type: schema.TypeString, + Optional: true, + Description: `The client's ID.`, + }, + "client_secret": { + Type: schema.TypeString, + Optional: true, + Description: `The client's secret.`, + }, + "request_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"REQUEST_TYPE_UNSPECIFIED", "REQUEST_BODY", "QUERY_PARAMETERS", "ENCODED_HEADER", ""}), + Description: `Represent how to pass parameters to fetch access token Possible values: ["REQUEST_TYPE_UNSPECIFIED", "REQUEST_BODY", "QUERY_PARAMETERS", "ENCODED_HEADER"]`, + }, + "scope": { + Type: schema.TypeString, + Optional: true, + Description: `A space-delimited list of requested scope permissions.`, + }, + "token_endpoint": { + Type: schema.TypeString, + Optional: true, + Description: `The token endpoint is used by the client to obtain an access token by presenting its authorization grant or refresh token.`, + }, + "token_params": { + Type: schema.TypeList, + Optional: true, + Description: `Token parameters for the auth request.`, + MaxItems: 1, + Elem: 
&schema.Resource{ + Schema: map[string]*schema.Schema{ + "entries": { + Type: schema.TypeList, + Optional: true, + Description: `A list of parameter map entries.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": { + Type: schema.TypeList, + Optional: true, + Description: `Key of the map entry.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "literal_value": { + Type: schema.TypeList, + Optional: true, + Description: `Passing a literal value`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "string_value": { + Type: schema.TypeString, + Optional: true, + Description: `String.`, + }, + }, + }, + }, + }, + }, + }, + "value": { + Type: schema.TypeList, + Optional: true, + Description: `Value of the map entry.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "literal_value": { + Type: schema.TypeList, + Optional: true, + Description: `Passing a literal value`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "string_value": { + Type: schema.TypeString, + Optional: true, + Description: `String.`, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + ConflictsWith: []string{"decrypted_credential.0.username_and_password", "decrypted_credential.0.oauth2_authorization_code", "decrypted_credential.0.jwt", "decrypted_credential.0.auth_token", "decrypted_credential.0.service_account_credentials", "decrypted_credential.0.oidc_token"}, + }, + "oidc_token": { + Type: schema.TypeList, + Optional: true, + Description: `Google OIDC ID Token.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "audience": { + Type: schema.TypeString, + Optional: true, + Description: `Audience to be used when generating OIDC token. 
The audience claim identifies the recipients that the JWT is intended for.`, + }, + "service_account_email": { + Type: schema.TypeString, + Optional: true, + Description: `The service account email to be used as the identity for the token.`, + }, + "token": { + Type: schema.TypeString, + Computed: true, + Description: `ID token obtained for the service account.`, + }, + "token_expire_time": { + Type: schema.TypeString, + Computed: true, + Description: `The approximate time until the token retrieved is valid. + +A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, + }, + }, + }, + ConflictsWith: []string{"decrypted_credential.0.username_and_password", "decrypted_credential.0.oauth2_authorization_code", "decrypted_credential.0.oauth2_client_credentials", "decrypted_credential.0.jwt", "decrypted_credential.0.auth_token", "decrypted_credential.0.service_account_credentials"}, + }, + "service_account_credentials": { + Type: schema.TypeList, + Optional: true, + Description: `Service account credential.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "scope": { + Type: schema.TypeString, + Optional: true, + Description: `A space-delimited list of requested scope permissions.`, + }, + "service_account": { + Type: schema.TypeString, + Optional: true, + Description: `Name of the service account that has the permission to make the request.`, + }, + }, + }, + ConflictsWith: []string{"decrypted_credential.0.username_and_password", "decrypted_credential.0.oauth2_authorization_code", "decrypted_credential.0.oauth2_client_credentials", "decrypted_credential.0.jwt", "decrypted_credential.0.auth_token", "decrypted_credential.0.oidc_token"}, + }, + "username_and_password": { + Type: schema.TypeList, + Optional: true, + Description: `Username and password credential.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: 
map[string]*schema.Schema{ + "password": { + Type: schema.TypeString, + Optional: true, + Description: `Password to be used.`, + }, + "username": { + Type: schema.TypeString, + Optional: true, + Description: `Username to be used.`, + }, + }, + }, + ConflictsWith: []string{"decrypted_credential.0.oauth2_authorization_code", "decrypted_credential.0.oauth2_client_credentials", "decrypted_credential.0.jwt", "decrypted_credential.0.auth_token", "decrypted_credential.0.service_account_credentials", "decrypted_credential.0.oidc_token"}, + }, + }, + }, + }, + "description": { + Type: schema.TypeString, + Optional: true, + Description: `A description of the auth config.`, + }, + "expiry_notification_duration": { + Type: schema.TypeList, + Optional: true, + Description: `User can define the time to receive notification after which the auth config becomes invalid. Support up to 30 days. Support granularity in hours. + +A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "override_valid_time": { + Type: schema.TypeString, + Optional: true, + Description: `User provided expiry time to override. For the example of Salesforce, username/password credentials can be valid for 6 months depending on the instance settings. + +A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, + }, + "visibility": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"PRIVATE", "CLIENT_VISIBLE", ""}), + Description: `The visibility of the auth config. 
Possible values: ["PRIVATE", "CLIENT_VISIBLE"]`, + }, + "certificate_id": { + Type: schema.TypeString, + Computed: true, + Description: `Certificate id for client certificate.`, + }, + "create_time": { + Type: schema.TypeString, + Computed: true, + Description: `The timestamp when the auth config is created. + +A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, + }, + "creator_email": { + Type: schema.TypeString, + Computed: true, + Description: `The creator's email address. Generated based on the End User Credentials/LOAS role of the user making the call.`, + }, + "credential_type": { + Type: schema.TypeString, + Computed: true, + Description: `Credential type of the encrypted credential.`, + }, + "encrypted_credential": { + Type: schema.TypeString, + Computed: true, + Description: `Auth credential encrypted by Cloud KMS. Can be decrypted as Credential with proper KMS key. + +A base64-encoded string.`, + }, + "last_modifier_email": { + Type: schema.TypeString, + Computed: true, + Description: `The last modifier's email address. Generated based on the End User Credentials/LOAS role of the user making the call.`, + }, + "name": { + Type: schema.TypeString, + Computed: true, + Description: `Resource name of the auth config.`, + }, + "reason": { + Type: schema.TypeString, + Computed: true, + Description: `The reason / details of the current status.`, + }, + "state": { + Type: schema.TypeString, + Computed: true, + Description: `The status of the auth config.`, + }, + "update_time": { + Type: schema.TypeString, + Computed: true, + Description: `The timestamp when the auth config is modified. + +A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. 
Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, + }, + "valid_time": { + Type: schema.TypeString, + Computed: true, + Description: `The time until the auth config is valid. Empty or max value is considered the auth config won't expire. + +A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + UseJSONNumber: true, + } +} + +func resourceIntegrationsAuthConfigCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + obj := make(map[string]interface{}) + displayNameProp, err := expandIntegrationsAuthConfigDisplayName(d.Get("display_name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { + obj["displayName"] = displayNameProp + } + descriptionProp, err := expandIntegrationsAuthConfigDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + visibilityProp, err := expandIntegrationsAuthConfigVisibility(d.Get("visibility"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("visibility"); !tpgresource.IsEmptyValue(reflect.ValueOf(visibilityProp)) && (ok || !reflect.DeepEqual(v, visibilityProp)) { + obj["visibility"] = visibilityProp + } + expiryNotificationDurationProp, err := 
expandIntegrationsAuthConfigExpiryNotificationDuration(d.Get("expiry_notification_duration"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("expiry_notification_duration"); !tpgresource.IsEmptyValue(reflect.ValueOf(expiryNotificationDurationProp)) && (ok || !reflect.DeepEqual(v, expiryNotificationDurationProp)) { + obj["expiryNotificationDuration"] = expiryNotificationDurationProp + } + overrideValidTimeProp, err := expandIntegrationsAuthConfigOverrideValidTime(d.Get("override_valid_time"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("override_valid_time"); !tpgresource.IsEmptyValue(reflect.ValueOf(overrideValidTimeProp)) && (ok || !reflect.DeepEqual(v, overrideValidTimeProp)) { + obj["overrideValidTime"] = overrideValidTimeProp + } + decryptedCredentialProp, err := expandIntegrationsAuthConfigDecryptedCredential(d.Get("decrypted_credential"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("decrypted_credential"); !tpgresource.IsEmptyValue(reflect.ValueOf(decryptedCredentialProp)) && (ok || !reflect.DeepEqual(v, decryptedCredentialProp)) { + obj["decryptedCredential"] = decryptedCredentialProp + } + client_certificateProp, err := expandIntegrationsAuthConfigClientCertificate(d.Get("client_certificate"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("client_certificate"); !tpgresource.IsEmptyValue(reflect.ValueOf(client_certificateProp)) && (ok || !reflect.DeepEqual(v, client_certificateProp)) { + obj["client_certificate"] = client_certificateProp + } + + lockName, err := tpgresource.ReplaceVars(d, config, "{{name}}") + if err != nil { + return err + } + transport_tpg.MutexStore.Lock(lockName) + defer transport_tpg.MutexStore.Unlock(lockName) + + url, err := tpgresource.ReplaceVars(d, config, "{{IntegrationsBasePath}}projects/{{project}}/locations/{{location}}/authConfigs") + if err != nil { + return err + } + + log.Printf("[DEBUG] Creating 
new AuthConfig: %#v", obj) + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for AuthConfig: %s", err) + } + billingProject = project + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + // Move client certificate to url param from request body + if cc, ok := obj["client_certificate"]; ok { + ccm := cc.(map[string]any) + + params := map[string]string{ + "clientCertificate.sslCertificate": ccm["ssl_certificate"].(string), + "clientCertificate.encryptedPrivateKey": ccm["encrypted_private_key"].(string), + } + if pp, ok := ccm["passphrase"]; ok { + params["clientCertificate.passphrase"] = pp.(string) + } + url, err = transport_tpg.AddQueryParams(url, params) + if err != nil { + return err + } + delete(obj, "client_certificate") + } + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "POST", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, + }) + if err != nil { + return fmt.Errorf("Error creating AuthConfig: %s", err) + } + if err := d.Set("name", flattenIntegrationsAuthConfigName(res["name"], d, config)); err != nil { + return fmt.Errorf(`Error setting computed identity field "name": %s`, err) + } + + // Store the ID now + id, err := tpgresource.ReplaceVars(d, config, "{{name}}") + if err != nil { + return fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + // `name` is autogenerated from the api so needs to be set post-create + name, ok := res["name"] + if !ok { + respBody, ok := res["response"] + if !ok { + return fmt.Errorf("Create response didn't contain critical fields. 
Create may not have succeeded.") + } + + name, ok = respBody.(map[string]interface{})["name"] + if !ok { + return fmt.Errorf("Create response didn't contain critical fields. Create may not have succeeded.") + } + } + if err := d.Set("name", name.(string)); err != nil { + return fmt.Errorf("Error setting name: %s", err) + } + d.SetId(name.(string)) + + log.Printf("[DEBUG] Finished creating AuthConfig %q: %#v", d.Id(), res) + + return resourceIntegrationsAuthConfigRead(d, meta) +} + +func resourceIntegrationsAuthConfigRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + url, err := tpgresource.ReplaceVars(d, config, "{{IntegrationsBasePath}}{{name}}") + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for AuthConfig: %s", err) + } + billingProject = project + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("IntegrationsAuthConfig %q", d.Id())) + } + + if err := d.Set("project", project); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + + if err := d.Set("name", flattenIntegrationsAuthConfigName(res["name"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("display_name", flattenIntegrationsAuthConfigDisplayName(res["displayName"], d, config)); err != nil { + return fmt.Errorf("Error 
reading AuthConfig: %s", err) + } + if err := d.Set("description", flattenIntegrationsAuthConfigDescription(res["description"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("certificate_id", flattenIntegrationsAuthConfigCertificateId(res["certificateId"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("credential_type", flattenIntegrationsAuthConfigCredentialType(res["credentialType"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("creator_email", flattenIntegrationsAuthConfigCreatorEmail(res["creatorEmail"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("create_time", flattenIntegrationsAuthConfigCreateTime(res["createTime"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("last_modifier_email", flattenIntegrationsAuthConfigLastModifierEmail(res["lastModifierEmail"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("update_time", flattenIntegrationsAuthConfigUpdateTime(res["updateTime"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("visibility", flattenIntegrationsAuthConfigVisibility(res["visibility"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("state", flattenIntegrationsAuthConfigState(res["state"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("reason", flattenIntegrationsAuthConfigReason(res["reason"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("expiry_notification_duration", flattenIntegrationsAuthConfigExpiryNotificationDuration(res["expiryNotificationDuration"], d, config)); 
err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("valid_time", flattenIntegrationsAuthConfigValidTime(res["validTime"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("override_valid_time", flattenIntegrationsAuthConfigOverrideValidTime(res["overrideValidTime"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("encrypted_credential", flattenIntegrationsAuthConfigEncryptedCredential(res["encryptedCredential"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + if err := d.Set("decrypted_credential", flattenIntegrationsAuthConfigDecryptedCredential(res["decryptedCredential"], d, config)); err != nil { + return fmt.Errorf("Error reading AuthConfig: %s", err) + } + + return nil +} + +func resourceIntegrationsAuthConfigUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for AuthConfig: %s", err) + } + billingProject = project + + obj := make(map[string]interface{}) + displayNameProp, err := expandIntegrationsAuthConfigDisplayName(d.Get("display_name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { + obj["displayName"] = displayNameProp + } + descriptionProp, err := expandIntegrationsAuthConfigDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = 
descriptionProp + } + visibilityProp, err := expandIntegrationsAuthConfigVisibility(d.Get("visibility"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("visibility"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, visibilityProp)) { + obj["visibility"] = visibilityProp + } + expiryNotificationDurationProp, err := expandIntegrationsAuthConfigExpiryNotificationDuration(d.Get("expiry_notification_duration"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("expiry_notification_duration"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, expiryNotificationDurationProp)) { + obj["expiryNotificationDuration"] = expiryNotificationDurationProp + } + overrideValidTimeProp, err := expandIntegrationsAuthConfigOverrideValidTime(d.Get("override_valid_time"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("override_valid_time"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, overrideValidTimeProp)) { + obj["overrideValidTime"] = overrideValidTimeProp + } + decryptedCredentialProp, err := expandIntegrationsAuthConfigDecryptedCredential(d.Get("decrypted_credential"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("decrypted_credential"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, decryptedCredentialProp)) { + obj["decryptedCredential"] = decryptedCredentialProp + } + client_certificateProp, err := expandIntegrationsAuthConfigClientCertificate(d.Get("client_certificate"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("client_certificate"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, client_certificateProp)) { + obj["client_certificate"] = client_certificateProp + } + + lockName, err := tpgresource.ReplaceVars(d, config, "{{name}}") + if err != nil { + return err + } + 
transport_tpg.MutexStore.Lock(lockName) + defer transport_tpg.MutexStore.Unlock(lockName) + + url, err := tpgresource.ReplaceVars(d, config, "{{IntegrationsBasePath}}{{name}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Updating AuthConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) + params := map[string]string{} + + // Move client certificate to url param from request body + if cc, ok := obj["client_certificate"]; ok { + ccm := cc.(map[string]any) + + params["clientCertificate.sslCertificate"] = ccm["ssl_certificate"].(string) + params["clientCertificate.encryptedPrivateKey"] = ccm["encrypted_private_key"].(string) + if pp, ok := ccm["passphrase"]; ok { + params["clientCertificate.passphrase"] = pp.(string) + } + delete(obj, "client_certificate") + } + + // By default allow all fields to be updated via terraform + params["updateMask"] = "*" + + url, err = transport_tpg.AddQueryParams(url, params) + if err != nil { + return err + } + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "PATCH", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, + }) + + if err != nil { + return fmt.Errorf("Error updating AuthConfig %q: %s", d.Id(), err) + } else { + log.Printf("[DEBUG] Finished updating AuthConfig %q: %#v", d.Id(), res) + } + + return resourceIntegrationsAuthConfigRead(d, meta) +} + +func resourceIntegrationsAuthConfigDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return 
fmt.Errorf("Error fetching project for AuthConfig: %s", err) + } + billingProject = project + + lockName, err := tpgresource.ReplaceVars(d, config, "{{name}}") + if err != nil { + return err + } + transport_tpg.MutexStore.Lock(lockName) + defer transport_tpg.MutexStore.Unlock(lockName) + + url, err := tpgresource.ReplaceVars(d, config, "{{IntegrationsBasePath}}{{name}}") + if err != nil { + return err + } + + var obj map[string]interface{} + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + + log.Printf("[DEBUG] Deleting AuthConfig %q", d.Id()) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "DELETE", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, "AuthConfig") + } + + log.Printf("[DEBUG] Finished deleting AuthConfig %q: %#v", d.Id(), res) + return nil +} + +func resourceIntegrationsAuthConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + + config := meta.(*transport_tpg.Config) + + // current import_formats can't import fields with forward slashes in their value + if err := tpgresource.ParseImportId([]string{"(?P<project>[^ ]+) (?P<name>[^ ]+)", "(?P<name>[^ ]+)"}, d, config); err != nil { + return nil, err + } + + return []*schema.ResourceData{d}, nil +} + +func flattenIntegrationsAuthConfigName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDisplayName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} +
+func flattenIntegrationsAuthConfigCertificateId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigCredentialType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigCreatorEmail(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigCreateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigLastModifierEmail(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigUpdateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigVisibility(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigReason(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigExpiryNotificationDuration(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigValidTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigOverrideValidTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigEncryptedCredential(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredential(v interface{}, d 
*schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["credential_type"] = + flattenIntegrationsAuthConfigDecryptedCredentialCredentialType(original["credentialType"], d, config) + transformed["username_and_password"] = + flattenIntegrationsAuthConfigDecryptedCredentialUsernameAndPassword(original["usernameAndPassword"], d, config) + transformed["oauth2_authorization_code"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCode(original["oauth2AuthorizationCode"], d, config) + transformed["oauth2_client_credentials"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentials(original["oauth2ClientCredentials"], d, config) + transformed["jwt"] = + flattenIntegrationsAuthConfigDecryptedCredentialJwt(original["jwt"], d, config) + transformed["auth_token"] = + flattenIntegrationsAuthConfigDecryptedCredentialAuthToken(original["authToken"], d, config) + transformed["service_account_credentials"] = + flattenIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentials(original["serviceAccountCredentials"], d, config) + transformed["oidc_token"] = + flattenIntegrationsAuthConfigDecryptedCredentialOidcToken(original["oidcToken"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialCredentialType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + return tpgresource.ConvertSelfLinkToV1(v.(string)) +} + +func flattenIntegrationsAuthConfigDecryptedCredentialUsernameAndPassword(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + 
transformed["username"] = + flattenIntegrationsAuthConfigDecryptedCredentialUsernameAndPasswordUsername(original["username"], d, config) + transformed["password"] = + flattenIntegrationsAuthConfigDecryptedCredentialUsernameAndPasswordPassword(original["password"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialUsernameAndPasswordUsername(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialUsernameAndPasswordPassword(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCode(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["client_id"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeClientId(original["clientId"], d, config) + transformed["client_secret"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeClientSecret(original["clientSecret"], d, config) + transformed["scope"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeScope(original["scope"], d, config) + transformed["auth_endpoint"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeAuthEndpoint(original["authEndpoint"], d, config) + transformed["token_endpoint"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeTokenEndpoint(original["tokenEndpoint"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeClientId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func 
flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeClientSecret(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeScope(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeAuthEndpoint(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeTokenEndpoint(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentials(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["client_id"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsClientId(original["clientId"], d, config) + transformed["client_secret"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsClientSecret(original["clientSecret"], d, config) + transformed["token_endpoint"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenEndpoint(original["tokenEndpoint"], d, config) + transformed["scope"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsScope(original["scope"], d, config) + transformed["token_params"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParams(original["tokenParams"], d, config) + transformed["request_type"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsRequestType(original["requestType"], d, config) + return 
[]interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsClientId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsClientSecret(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenEndpoint(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsScope(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParams(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["entries"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntries(original["entries"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntries(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "key": flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKey(original["key"], d, config), + "value": 
flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValue(original["value"], d, config), + }) + } + return transformed +} +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKey(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["literal_value"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKeyLiteralValue(original["literalValue"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKeyLiteralValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["string_value"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKeyLiteralValueStringValue(original["stringValue"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKeyLiteralValueStringValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["literal_value"] = + 
flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValueLiteralValue(original["literalValue"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValueLiteralValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["string_value"] = + flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValueLiteralValueStringValue(original["stringValue"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValueLiteralValueStringValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsRequestType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialJwt(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["jwt_header"] = + flattenIntegrationsAuthConfigDecryptedCredentialJwtJwtHeader(original["jwtHeader"], d, config) + transformed["jwt_payload"] = + flattenIntegrationsAuthConfigDecryptedCredentialJwtJwtPayload(original["jwtPayload"], d, config) + transformed["secret"] = + flattenIntegrationsAuthConfigDecryptedCredentialJwtSecret(original["secret"], d, config) + transformed["jwt"] = + flattenIntegrationsAuthConfigDecryptedCredentialJwtJwt(original["jwt"], d, config) + return 
[]interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialJwtJwtHeader(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialJwtJwtPayload(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialJwtSecret(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialJwtJwt(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialAuthToken(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["type"] = + flattenIntegrationsAuthConfigDecryptedCredentialAuthTokenType(original["type"], d, config) + transformed["token"] = + flattenIntegrationsAuthConfigDecryptedCredentialAuthTokenToken(original["token"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialAuthTokenType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialAuthTokenToken(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentials(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["service_account"] = + 
flattenIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentialsServiceAccount(original["serviceAccount"], d, config) + transformed["scope"] = + flattenIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentialsScope(original["scope"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentialsServiceAccount(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentialsScope(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOidcToken(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["service_account_email"] = + flattenIntegrationsAuthConfigDecryptedCredentialOidcTokenServiceAccountEmail(original["serviceAccountEmail"], d, config) + transformed["audience"] = + flattenIntegrationsAuthConfigDecryptedCredentialOidcTokenAudience(original["audience"], d, config) + transformed["token"] = + flattenIntegrationsAuthConfigDecryptedCredentialOidcTokenToken(original["token"], d, config) + transformed["token_expire_time"] = + flattenIntegrationsAuthConfigDecryptedCredentialOidcTokenTokenExpireTime(original["tokenExpireTime"], d, config) + return []interface{}{transformed} +} +func flattenIntegrationsAuthConfigDecryptedCredentialOidcTokenServiceAccountEmail(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOidcTokenAudience(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func 
flattenIntegrationsAuthConfigDecryptedCredentialOidcTokenToken(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenIntegrationsAuthConfigDecryptedCredentialOidcTokenTokenExpireTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandIntegrationsAuthConfigDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigVisibility(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigExpiryNotificationDuration(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigOverrideValidTime(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredential(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedCredentialType, err := expandIntegrationsAuthConfigDecryptedCredentialCredentialType(original["credential_type"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedCredentialType); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["credentialType"] = transformedCredentialType + } + + transformedUsernameAndPassword, err := 
expandIntegrationsAuthConfigDecryptedCredentialUsernameAndPassword(original["username_and_password"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedUsernameAndPassword); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["usernameAndPassword"] = transformedUsernameAndPassword + } + + transformedOauth2AuthorizationCode, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCode(original["oauth2_authorization_code"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedOauth2AuthorizationCode); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["oauth2AuthorizationCode"] = transformedOauth2AuthorizationCode + } + + transformedOauth2ClientCredentials, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentials(original["oauth2_client_credentials"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedOauth2ClientCredentials); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["oauth2ClientCredentials"] = transformedOauth2ClientCredentials + } + + transformedJwt, err := expandIntegrationsAuthConfigDecryptedCredentialJwt(original["jwt"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedJwt); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["jwt"] = transformedJwt + } + + transformedAuthToken, err := expandIntegrationsAuthConfigDecryptedCredentialAuthToken(original["auth_token"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedAuthToken); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["authToken"] = transformedAuthToken + } + + transformedServiceAccountCredentials, err := expandIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentials(original["service_account_credentials"], d, config) + if err != nil { + return nil, err + } else if val := 
reflect.ValueOf(transformedServiceAccountCredentials); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["serviceAccountCredentials"] = transformedServiceAccountCredentials + } + + transformedOidcToken, err := expandIntegrationsAuthConfigDecryptedCredentialOidcToken(original["oidc_token"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedOidcToken); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["oidcToken"] = transformedOidcToken + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialCredentialType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialUsernameAndPassword(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedUsername, err := expandIntegrationsAuthConfigDecryptedCredentialUsernameAndPasswordUsername(original["username"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedUsername); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["username"] = transformedUsername + } + + transformedPassword, err := expandIntegrationsAuthConfigDecryptedCredentialUsernameAndPasswordPassword(original["password"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedPassword); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["password"] = transformedPassword + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialUsernameAndPasswordUsername(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + 
return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialUsernameAndPasswordPassword(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCode(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedClientId, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeClientId(original["client_id"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedClientId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["clientId"] = transformedClientId + } + + transformedClientSecret, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeClientSecret(original["client_secret"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedClientSecret); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["clientSecret"] = transformedClientSecret + } + + transformedScope, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeScope(original["scope"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedScope); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["scope"] = transformedScope + } + + transformedAuthEndpoint, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeAuthEndpoint(original["auth_endpoint"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedAuthEndpoint); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["authEndpoint"] = transformedAuthEndpoint + } + + 
transformedTokenEndpoint, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeTokenEndpoint(original["token_endpoint"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTokenEndpoint); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["tokenEndpoint"] = transformedTokenEndpoint + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeClientId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeClientSecret(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeScope(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeAuthEndpoint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2AuthorizationCodeTokenEndpoint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentials(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedClientId, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsClientId(original["client_id"], d, config) + if err != nil { + 
return nil, err + } else if val := reflect.ValueOf(transformedClientId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["clientId"] = transformedClientId + } + + transformedClientSecret, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsClientSecret(original["client_secret"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedClientSecret); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["clientSecret"] = transformedClientSecret + } + + transformedTokenEndpoint, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenEndpoint(original["token_endpoint"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTokenEndpoint); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["tokenEndpoint"] = transformedTokenEndpoint + } + + transformedScope, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsScope(original["scope"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedScope); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["scope"] = transformedScope + } + + transformedTokenParams, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParams(original["token_params"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTokenParams); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["tokenParams"] = transformedTokenParams + } + + transformedRequestType, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsRequestType(original["request_type"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedRequestType); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["requestType"] = transformedRequestType + } + + return transformed, nil +} + +func 
expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsClientId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsClientSecret(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenEndpoint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsScope(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParams(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedEntries, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntries(original["entries"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedEntries); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["entries"] = transformedEntries + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntries(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + req := make([]interface{}, 0, len(l)) + for _, raw := range l { + if raw == nil { + continue + } + original := raw.(map[string]interface{}) + transformed := 
make(map[string]interface{}) + + transformedKey, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKey(original["key"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedKey); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["key"] = transformedKey + } + + transformedValue, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValue(original["value"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedValue); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["value"] = transformedValue + } + + req = append(req, transformed) + } + return req, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKey(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedLiteralValue, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKeyLiteralValue(original["literal_value"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedLiteralValue); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["literalValue"] = transformedLiteralValue + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKeyLiteralValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedStringValue, err := 
expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKeyLiteralValueStringValue(original["string_value"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedStringValue); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["stringValue"] = transformedStringValue + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesKeyLiteralValueStringValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedLiteralValue, err := expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValueLiteralValue(original["literal_value"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedLiteralValue); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["literalValue"] = transformedLiteralValue + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValueLiteralValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedStringValue, err := 
expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValueLiteralValueStringValue(original["string_value"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedStringValue); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["stringValue"] = transformedStringValue + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsTokenParamsEntriesValueLiteralValueStringValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOauth2ClientCredentialsRequestType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialJwt(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedJwtHeader, err := expandIntegrationsAuthConfigDecryptedCredentialJwtJwtHeader(original["jwt_header"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedJwtHeader); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["jwtHeader"] = transformedJwtHeader + } + + transformedJwtPayload, err := expandIntegrationsAuthConfigDecryptedCredentialJwtJwtPayload(original["jwt_payload"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedJwtPayload); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["jwtPayload"] = transformedJwtPayload + } + + transformedSecret, err := expandIntegrationsAuthConfigDecryptedCredentialJwtSecret(original["secret"], d, config) + if err != nil { + 
return nil, err + } else if val := reflect.ValueOf(transformedSecret); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["secret"] = transformedSecret + } + + transformedJwt, err := expandIntegrationsAuthConfigDecryptedCredentialJwtJwt(original["jwt"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedJwt); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["jwt"] = transformedJwt + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialJwtJwtHeader(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialJwtJwtPayload(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialJwtSecret(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialJwtJwt(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialAuthToken(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedType, err := expandIntegrationsAuthConfigDecryptedCredentialAuthTokenType(original["type"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedType); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["type"] = transformedType + } + + transformedToken, err := 
expandIntegrationsAuthConfigDecryptedCredentialAuthTokenToken(original["token"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedToken); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["token"] = transformedToken + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialAuthTokenType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialAuthTokenToken(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentials(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedServiceAccount, err := expandIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentialsServiceAccount(original["service_account"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedServiceAccount); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["serviceAccount"] = transformedServiceAccount + } + + transformedScope, err := expandIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentialsScope(original["scope"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedScope); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["scope"] = transformedScope + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentialsServiceAccount(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + 
return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialServiceAccountCredentialsScope(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOidcToken(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedServiceAccountEmail, err := expandIntegrationsAuthConfigDecryptedCredentialOidcTokenServiceAccountEmail(original["service_account_email"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedServiceAccountEmail); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["serviceAccountEmail"] = transformedServiceAccountEmail + } + + transformedAudience, err := expandIntegrationsAuthConfigDecryptedCredentialOidcTokenAudience(original["audience"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedAudience); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["audience"] = transformedAudience + } + + transformedToken, err := expandIntegrationsAuthConfigDecryptedCredentialOidcTokenToken(original["token"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedToken); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["token"] = transformedToken + } + + transformedTokenExpireTime, err := expandIntegrationsAuthConfigDecryptedCredentialOidcTokenTokenExpireTime(original["token_expire_time"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedTokenExpireTime); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["tokenExpireTime"] = transformedTokenExpireTime + } + + return transformed, 
nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOidcTokenServiceAccountEmail(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOidcTokenAudience(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOidcTokenToken(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigDecryptedCredentialOidcTokenTokenExpireTime(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigClientCertificate(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedSslCertificate, err := expandIntegrationsAuthConfigClientCertificateSslCertificate(original["ssl_certificate"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedSslCertificate); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["ssl_certificate"] = transformedSslCertificate + } + + transformedEncryptedPrivateKey, err := expandIntegrationsAuthConfigClientCertificateEncryptedPrivateKey(original["encrypted_private_key"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedEncryptedPrivateKey); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["encrypted_private_key"] = transformedEncryptedPrivateKey + } + + transformedPassphrase, err := 
expandIntegrationsAuthConfigClientCertificatePassphrase(original["passphrase"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedPassphrase); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["passphrase"] = transformedPassphrase + } + + return transformed, nil +} + +func expandIntegrationsAuthConfigClientCertificateSslCertificate(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigClientCertificateEncryptedPrivateKey(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandIntegrationsAuthConfigClientCertificatePassphrase(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} diff --git a/google-beta/services/integrations/resource_integrations_auth_config_generated_test.go b/google-beta/services/integrations/resource_integrations_auth_config_generated_test.go new file mode 100644 index 0000000000..ebd47511a6 --- /dev/null +++ b/google-beta/services/integrations/resource_integrations_auth_config_generated_test.go @@ -0,0 +1,582 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. 
+// +// ---------------------------------------------------------------------------- + +package integrations_test + +import ( + "fmt" + "strings" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +func TestAccIntegrationsAuthConfig_integrationsAuthConfigAdvanceExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccIntegrationsAuthConfig_integrationsAuthConfigAdvanceExample(context), + }, + { + ResourceName: "google_integrations_auth_config.advance_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_integrationsAuthConfigAdvanceExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "asia-east2" + provision_gmek = true +} + +resource "google_integrations_auth_config" "advance_example" { + location = "asia-east2" + display_name = "tf-test-test-authconfig%{random_suffix}" + description = "Test auth config created via terraform" + visibility = "CLIENT_VISIBLE" + expiry_notification_duration = ["3.500s"] + override_valid_time = "2014-10-02T15:01:23Z" + decrypted_credential { + credential_type = "USERNAME_AND_PASSWORD" + username_and_password { + username = "test-username" + 
password = "test-password" + } + } + depends_on = [google_integrations_client.client] +} +`, context) +} + +func TestAccIntegrationsAuthConfig_integrationsAuthConfigUsernameAndPasswordExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccIntegrationsAuthConfig_integrationsAuthConfigUsernameAndPasswordExample(context), + }, + { + ResourceName: "google_integrations_auth_config.username_and_password_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_integrationsAuthConfigUsernameAndPasswordExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "northamerica-northeast2" + provision_gmek = true +} + +resource "google_integrations_auth_config" "username_and_password_example" { + location = "northamerica-northeast2" + display_name = "tf-test-test-authconfig-username-and-password%{random_suffix}" + description = "Test auth config created via terraform" + decrypted_credential { + credential_type = "USERNAME_AND_PASSWORD" + username_and_password { + username = "test-username" + password = "test-password" + } + } + depends_on = [google_integrations_client.client] +} +`, context) +} + +func TestAccIntegrationsAuthConfig_integrationsAuthConfigOauth2AuthorizationCodeExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccIntegrationsAuthConfig_integrationsAuthConfigOauth2AuthorizationCodeExample(context), + }, + { + ResourceName: "google_integrations_auth_config.oauth2_authorization_code_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_integrationsAuthConfigOauth2AuthorizationCodeExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "asia-east1" + provision_gmek = true +} + +resource "google_integrations_auth_config" "oauth2_authorization_code_example" { + location = "asia-east1" + display_name = "tf-test-test-authconfig-oauth2-authorization-code%{random_suffix}" + description = "Test auth config created via terraform" + decrypted_credential { + credential_type = "OAUTH2_AUTHORIZATION_CODE" + oauth2_authorization_code { + client_id = "Kf7utRvgr95oGO5YMmhFOLo8" + client_secret = "D-XXFDDMLrg2deDgczzHTBwC3p16wRK1rdKuuoFdWqO0wliJ" + scope = "photo offline_access" + auth_endpoint = "https://authorization-server.com/authorize" + token_endpoint = "https://authorization-server.com/token" + } + } + depends_on = [google_integrations_client.client] +} +`, context) +} + +func TestAccIntegrationsAuthConfig_integrationsAuthConfigOauth2ClientCredentialsExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config:
testAccIntegrationsAuthConfig_integrationsAuthConfigOauth2ClientCredentialsExample(context), + }, + { + ResourceName: "google_integrations_auth_config.oauth2_client_credentials_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_integrationsAuthConfigOauth2ClientCredentialsExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "southamerica-east1" + provision_gmek = true +} + +resource "google_integrations_auth_config" "oauth2_client_credentials_example" { + location = "southamerica-east1" + display_name = "tf-test-test-authconfig-oauth2-client-credentials%{random_suffix}" + description = "Test auth config created via terraform" + decrypted_credential { + credential_type = "OAUTH2_CLIENT_CREDENTIALS" + oauth2_client_credentials { + client_id = "demo-backend-client" + client_secret = "MJlO3binatD9jk1" + scope = "read" + token_endpoint = "https://login-demo.curity.io/oauth/v2/oauth-token" + request_type = "ENCODED_HEADER" + token_params { + entries { + key { + literal_value { + string_value = "string-key" + } + } + value { + literal_value { + string_value = "string-value" + } + } + } + } + } + } + depends_on = [google_integrations_client.client] +} +`, context) +} + +func TestAccIntegrationsAuthConfig_integrationsAuthConfigJwtExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccIntegrationsAuthConfig_integrationsAuthConfigJwtExample(context), + }, + { + ResourceName: 
"google_integrations_auth_config.jwt_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_integrationsAuthConfigJwtExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "us-west4" + provision_gmek = true +} + +resource "google_integrations_auth_config" "jwt_example" { + location = "us-west4" + display_name = "tf-test-test-authconfig-jwt%{random_suffix}" + description = "Test auth config created via terraform" + decrypted_credential { + credential_type = "JWT" + jwt { + jwt_header = "{\"alg\": \"HS256\", \"typ\": \"JWT\"}" + jwt_payload = "{\"sub\": \"1234567890\", \"name\": \"John Doe\", \"iat\": 1516239022}" + secret = "secret" + } + } + depends_on = [google_integrations_client.client] +} +`, context) +} + +func TestAccIntegrationsAuthConfig_integrationsAuthConfigAuthTokenExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccIntegrationsAuthConfig_integrationsAuthConfigAuthTokenExample(context), + }, + { + ResourceName: "google_integrations_auth_config.auth_token_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_integrationsAuthConfigAuthTokenExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "us-west2" + provision_gmek = true +} + +resource "google_integrations_auth_config" 
"auth_token_example" { + location = "us-west2" + display_name = "tf-test-test-authconfig-auth-token%{random_suffix}" + description = "Test auth config created via terraform" + decrypted_credential { + credential_type = "AUTH_TOKEN" + auth_token { + type = "Basic" + token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9" + } + } + depends_on = [google_integrations_client.client] +} +`, context) +} + +func TestAccIntegrationsAuthConfig_integrationsAuthConfigServiceAccountExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccIntegrationsAuthConfig_integrationsAuthConfigServiceAccountExample(context), + }, + { + ResourceName: "google_integrations_auth_config.service_account_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_integrationsAuthConfigServiceAccountExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "northamerica-northeast1" + provision_gmek = true +} + +resource "google_service_account" "service_account" { + account_id = "sa%{random_suffix}" + display_name = "Service Account" +} + +resource "google_integrations_auth_config" "service_account_example" { + location = "northamerica-northeast1" + display_name = "tf-test-test-authconfig-service-account%{random_suffix}" + description = "Test auth config created via terraform" + decrypted_credential { + credential_type = "SERVICE_ACCOUNT" + service_account_credentials { + service_account = google_service_account.service_account.email + scope = 
"https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/adexchange.buyer https://www.googleapis.com/auth/admob.readonly" + } + } + depends_on = [google_service_account.service_account, google_integrations_client.client] +} +`, context) +} + +func TestAccIntegrationsAuthConfig_integrationsAuthConfigOidcTokenExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccIntegrationsAuthConfig_integrationsAuthConfigOidcTokenExample(context), + }, + { + ResourceName: "google_integrations_auth_config.oidc_token_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_integrationsAuthConfigOidcTokenExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "us-south1" + provision_gmek = true +} + +resource "google_service_account" "service_account" { + account_id = "sa%{random_suffix}" + display_name = "Service Account" +} + +resource "google_integrations_auth_config" "oidc_token_example" { + location = "us-south1" + display_name = "tf-test-test-authconfig-oidc-token%{random_suffix}" + description = "Test auth config created via terraform" + decrypted_credential { + credential_type = "OIDC_TOKEN" + oidc_token { + service_account_email = google_service_account.service_account.email + audience = "https://us-south1-project.cloudfunctions.net/functionA 1234987819200.apps.googleusercontent.com" + } + } + depends_on = [google_service_account.service_account, google_integrations_client.client] +} +`, context) +} 
+ +func TestAccIntegrationsAuthConfig_integrationsAuthConfigClientCertificateOnlyExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccIntegrationsAuthConfig_integrationsAuthConfigClientCertificateOnlyExample(context), + }, + { + ResourceName: "google_integrations_auth_config.client_certificate_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_integrationsAuthConfigClientCertificateOnlyExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "us-west3" + provision_gmek = true +} + +resource "google_integrations_auth_config" "client_certificate_example" { + location = "us-west3" + display_name = "tf-test-test-authconfig-client-certificate%{random_suffix}" + description = "Test auth config created via terraform" + decrypted_credential { + credential_type = "CLIENT_CERTIFICATE_ONLY" + } + client_certificate { + ssl_certificate = < 0 { + log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) + } + + return nil +} diff --git a/google-beta/services/integrations/resource_integrations_auth_config_test.go b/google-beta/services/integrations/resource_integrations_auth_config_test.go new file mode 100644 index 0000000000..2d7dfb73d3 --- /dev/null +++ b/google-beta/services/integrations/resource_integrations_auth_config_test.go @@ -0,0 +1,143 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 +package integrations_test + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" +) + +func TestAccIntegrationsAuthConfig_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckIntegrationsAuthConfigDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccIntegrationsAuthConfig_full(context), + }, + { + ResourceName: "google_integrations_auth_config.update_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + { + Config: testAccIntegrationsAuthConfig_update(context), + }, + { + ResourceName: "google_integrations_auth_config.update_example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_certificate", "location"}, + }, + }, + }) +} + +func testAccIntegrationsAuthConfig_full(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "southamerica-west1" + provision_gmek = true +} + +resource "google_integrations_auth_config" "update_example" { + location = "southamerica-west1" + display_name = "tf-test-test-authconfig%{random_suffix}" + description = "Test auth config created via terraform" + visibility = "CLIENT_VISIBLE" + expiry_notification_duration = ["3.500s"] + override_valid_time = "2014-10-02T15:01:23Z" + decrypted_credential { + credential_type = "USERNAME_AND_PASSWORD" + username_and_password { + username = "test-username" + password = "test-password" + } + } + depends_on = [google_integrations_client.client] +} +`, context) +} + +func 
testAccIntegrationsAuthConfig_update(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_integrations_client" "client" { + location = "southamerica-west1" + provision_gmek = true +} + +resource "google_integrations_auth_config" "update_example" { + location = "southamerica-west1" + display_name = "tf-test-test-authconfig-update%{random_suffix}" + description = "Test auth config updated via terraform" + visibility = "CLIENT_VISIBLE" + expiry_notification_duration = ["4s"] + override_valid_time = "2014-10-10T15:01:23Z" + decrypted_credential { + credential_type = "CLIENT_CERTIFICATE_ONLY" + } + client_certificate { + ssl_certificate = <=ERROR" + include_children = true + intercept_children = %t +} + +resource "google_folder" "intercept_folder" { + display_name = "%s" + parent = "%s" +} +`, sinkName, envvar.GetTestProjectFromEnv(), envvar.GetTestProjectFromEnv(), intercept_children, folderName, folderParent) +} diff --git a/google-beta/services/logging/resource_logging_linked_dataset.go b/google-beta/services/logging/resource_logging_linked_dataset.go index 60d2372218..43cd8ac1a5 100644 --- a/google-beta/services/logging/resource_logging_linked_dataset.go +++ b/google-beta/services/logging/resource_logging_linked_dataset.go @@ -20,6 +20,7 @@ package logging import ( "fmt" "log" + "net/http" "reflect" "time" @@ -155,6 +156,7 @@ func resourceLoggingLinkedDatasetCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -163,6 +165,7 @@ func resourceLoggingLinkedDatasetCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating LinkedDataset: %s", err) @@ -223,12 +226,14 @@ func resourceLoggingLinkedDatasetRead(d *schema.ResourceData, meta interface{}) 
billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("LoggingLinkedDataset %q", d.Id())) @@ -274,6 +279,8 @@ func resourceLoggingLinkedDatasetDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting LinkedDataset %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -283,6 +290,7 @@ func resourceLoggingLinkedDatasetDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "LinkedDataset") diff --git a/google-beta/services/logging/resource_logging_log_view.go b/google-beta/services/logging/resource_logging_log_view.go index 47571c19e7..c886882d58 100644 --- a/google-beta/services/logging/resource_logging_log_view.go +++ b/google-beta/services/logging/resource_logging_log_view.go @@ -20,6 +20,7 @@ package logging import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -147,6 +148,7 @@ func resourceLoggingLogViewCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -155,6 +157,7 @@ func resourceLoggingLogViewCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating LogView: %s", err) @@ -191,12 +194,14 @@ func resourceLoggingLogViewRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := 
make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("LoggingLogView %q", d.Id())) @@ -252,6 +257,7 @@ func resourceLoggingLogViewUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating LogView %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -283,6 +289,7 @@ func resourceLoggingLogViewUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -317,6 +324,8 @@ func resourceLoggingLogViewDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting LogView %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -326,6 +335,7 @@ func resourceLoggingLogViewDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "LogView") diff --git a/google-beta/services/logging/resource_logging_metric.go b/google-beta/services/logging/resource_logging_metric.go index 68efa985ab..c92a75be38 100644 --- a/google-beta/services/logging/resource_logging_metric.go +++ b/google-beta/services/logging/resource_logging_metric.go @@ -20,6 +20,7 @@ package logging import ( "fmt" "log" + "net/http" "reflect" "time" @@ -370,6 +371,7 @@ func resourceLoggingMetricCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -378,6 +380,7 @@ 
func resourceLoggingMetricCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Metric: %s", err) @@ -438,12 +441,14 @@ func resourceLoggingMetricRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("LoggingMetric %q", d.Id())) @@ -568,6 +573,7 @@ func resourceLoggingMetricUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Metric %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -582,6 +588,7 @@ func resourceLoggingMetricUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -627,6 +634,8 @@ func resourceLoggingMetricDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Metric %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -636,6 +645,7 @@ func resourceLoggingMetricDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Metric") diff --git a/google-beta/services/logging/resource_logging_organization_settings.go b/google-beta/services/logging/resource_logging_organization_settings.go index 209d4bad4b..63b453a05e 100644 
--- a/google-beta/services/logging/resource_logging_organization_settings.go +++ b/google-beta/services/logging/resource_logging_organization_settings.go @@ -20,6 +20,7 @@ package logging import ( "fmt" "log" + "net/http" "reflect" "time" @@ -131,6 +132,7 @@ func resourceLoggingOrganizationSettingsCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -139,6 +141,7 @@ func resourceLoggingOrganizationSettingsCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating OrganizationSettings: %s", err) @@ -178,12 +181,14 @@ func resourceLoggingOrganizationSettingsRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("LoggingOrganizationSettings %q", d.Id())) @@ -246,6 +251,7 @@ func resourceLoggingOrganizationSettingsUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating OrganizationSettings %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -260,6 +266,7 @@ func resourceLoggingOrganizationSettingsUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/logging/resource_logging_organization_sink.go b/google-beta/services/logging/resource_logging_organization_sink.go index 5593d1fb06..d9fec3ffb0 100644 --- 
a/google-beta/services/logging/resource_logging_organization_sink.go +++ b/google-beta/services/logging/resource_logging_organization_sink.go @@ -34,10 +34,15 @@ func ResourceLoggingOrganizationSink() *schema.Resource { schm.Schema["include_children"] = &schema.Schema{ Type: schema.TypeBool, Optional: true, - ForceNew: true, Default: false, Description: `Whether or not to include children organizations in the sink export. If true, logs associated with child projects are also exported; otherwise only logs relating to the provided organization are included.`, } + schm.Schema["intercept_children"] = &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether or not to intercept logs from child projects. If true, matching logs will not match with sinks in child resources, except _Required sinks. This sink will be visible to child resources when listing sinks.`, + } return schm } @@ -52,6 +57,7 @@ func resourceLoggingOrganizationSinkCreate(d *schema.ResourceData, meta interfac org := d.Get("org_id").(string) id, sink := expandResourceLoggingSink(d, "organizations", org) sink.IncludeChildren = d.Get("include_children").(bool) + sink.InterceptChildren = d.Get("intercept_children").(bool) // Must use a unique writer, since all destinations are in projects. // The API will reject any requests that don't explicitly set 'uniqueWriterIdentity' to true. 
@@ -84,6 +90,10 @@ func resourceLoggingOrganizationSinkRead(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error setting include_children: %s", err) } + if err := d.Set("intercept_children", sink.InterceptChildren); err != nil { + return fmt.Errorf("Error setting intercept_children: %s", err) + } + return nil } @@ -95,10 +105,6 @@ func resourceLoggingOrganizationSinkUpdate(d *schema.ResourceData, meta interfac } sink, updateMask := expandResourceLoggingSinkForUpdate(d) - // It seems the API might actually accept an update for include_children; this is not in the list of updatable - // properties though and might break in the future. Always include the value to prevent it changing. - sink.IncludeChildren = d.Get("include_children").(bool) - sink.ForceSendFields = append(sink.ForceSendFields, "IncludeChildren") // The API will reject any requests that don't explicitly set 'uniqueWriterIdentity' to true. _, err = config.NewLoggingClient(userAgent).Organizations.Sinks.Patch(d.Id(), sink). 
diff --git a/google-beta/services/logging/resource_logging_organization_sink_test.go b/google-beta/services/logging/resource_logging_organization_sink_test.go index e6a30371f5..ead673bcdc 100644 --- a/google-beta/services/logging/resource_logging_organization_sink_test.go +++ b/google-beta/services/logging/resource_logging_organization_sink_test.go @@ -266,10 +266,52 @@ func testAccCheckLoggingOrganizationSink(sink *logging.LogSink, n string) resour return fmt.Errorf("mismatch on include_children: api has %v but client has %v", sink.IncludeChildren, includeChildren) } + interceptChildren := false + if attributes["intercept_children"] != "" { + interceptChildren, err = strconv.ParseBool(attributes["intercept_children"]) + if err != nil { + return err + } + } + if sink.InterceptChildren != interceptChildren { + return fmt.Errorf("mismatch on intercept_children: api has %v but client has %v", sink.InterceptChildren, interceptChildren) + } + return nil } } +func TestAccLoggingOrganizationSink_updateInterceptChildren(t *testing.T) { + t.Parallel() + + orgId := envvar.GetTestOrgFromEnv(t) + sinkName := "tf-test-sink-" + acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckLoggingOrganizationSinkDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccLoggingOrganizationSink_intercept_updated(sinkName, orgId, true), + }, + { + ResourceName: "google_logging_organization_sink.intercept_update", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccLoggingOrganizationSink_intercept_updated(sinkName, orgId, false), + }, + { + ResourceName: "google_logging_organization_sink.intercept_update", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccLoggingOrganizationSink_basic(sinkName, bucketName, orgId string) string { return fmt.Sprintf(` resource
"google_logging_organization_sink" "basic" { @@ -398,3 +440,15 @@ resource "google_bigquery_dataset" "logging_sink" { description = "Log sink (generated during acc test of terraform-provider-google(-beta))." }`, sinkName, orgId, envvar.GetTestProjectFromEnv(), envvar.GetTestProjectFromEnv(), bqDatasetID) } + +func testAccLoggingOrganizationSink_intercept_updated(sinkName, orgId string, intercept_children bool) string { + return fmt.Sprintf(` +resource "google_logging_organization_sink" "intercept_update" { + name = "%s" + org_id = "%s" + destination = "logging.googleapis.com/projects/%s" + filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" + include_children = true + intercept_children = %t +}`, sinkName, orgId, envvar.GetTestProjectFromEnv(), envvar.GetTestProjectFromEnv(), intercept_children) +} diff --git a/google-beta/services/logging/resource_logging_project_sink_test.go b/google-beta/services/logging/resource_logging_project_sink_test.go index 43562a70cf..4312154bb5 100644 --- a/google-beta/services/logging/resource_logging_project_sink_test.go +++ b/google-beta/services/logging/resource_logging_project_sink_test.go @@ -15,6 +15,9 @@ import ( func TestAccLoggingProjectSink_basic(t *testing.T) { t.Parallel() + orgId := envvar.GetTestOrgFromEnv(t) + billingAccount := envvar.GetTestBillingAccountFromEnv(t) + projectId := "tf-test" + acctest.RandString(t, 10) sinkName := "tf-test-sink-" + acctest.RandString(t, 10) bucketName := "tf-test-sink-bucket-" + acctest.RandString(t, 10) @@ -24,7 +27,7 @@ func TestAccLoggingProjectSink_basic(t *testing.T) { CheckDestroy: testAccCheckLoggingProjectSinkDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccLoggingProjectSink_basic(sinkName, envvar.GetTestProjectFromEnv(), bucketName), + Config: testAccLoggingProjectSink_basic(projectId, orgId, billingAccount, sinkName, bucketName, "false"), }, { ResourceName: "google_logging_project_sink.basic", @@ -38,6 +41,9 @@ 
func TestAccLoggingProjectSink_basic(t *testing.T) { func TestAccLoggingProjectSink_default(t *testing.T) { t.Parallel() + orgId := envvar.GetTestOrgFromEnv(t) + billingAccount := envvar.GetTestBillingAccountFromEnv(t) + projectId := "tf-test" + acctest.RandString(t, 10) sinkName := "_Default" bucketName := "tf-test-sink-bucket-" + acctest.RandString(t, 10) @@ -46,7 +52,8 @@ func TestAccLoggingProjectSink_default(t *testing.T) { ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), Steps: []resource.TestStep{ { - Config: testAccLoggingProjectSink_basic(sinkName, envvar.GetTestProjectFromEnv(), bucketName), + // Default sink has a permadiff if any value is sent for "disabled" other than "true" + Config: testAccLoggingProjectSink_basic(projectId, orgId, billingAccount, sinkName, bucketName, "true"), }, { ResourceName: "google_logging_project_sink.basic", @@ -365,20 +372,34 @@ func testAccCheckLoggingProjectSinkDestroyProducer(t *testing.T) func(s *terrafo } } -func testAccLoggingProjectSink_basic(name, project, bucketName string) string { +func testAccLoggingProjectSink_basic(projectId, orgId, billingAccount, sinkName, bucketName, disabled string) string { return fmt.Sprintf(` +resource "google_project" "project" { + project_id = "%s" + name = "%s" + org_id = "%s" + billing_account = "%s" +} + +resource "google_project_service" "logging_service" { + project = google_project.project.project_id + service = "logging.googleapis.com" +} + resource "google_logging_project_sink" "basic" { name = "%s" - project = "%s" + disabled = %s + project = google_project_service.logging_service.project destination = "storage.googleapis.com/${google_storage_bucket.gcs-bucket.name}" - filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" + filter = "logName=\"projects/${google_project.project.project_id}/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" } resource "google_storage_bucket" "gcs-bucket" { name = "%s" + 
project = google_project.project.project_id location = "US" } -`, name, project, project, bucketName) +`, projectId, projectId, orgId, billingAccount, sinkName, disabled, bucketName) } func testAccLoggingProjectSink_described(name, project, bucketName string) string { diff --git a/google-beta/services/logging/resource_logging_sink.go b/google-beta/services/logging/resource_logging_sink.go index faf23674a6..c6d3b45ba2 100644 --- a/google-beta/services/logging/resource_logging_sink.go +++ b/google-beta/services/logging/resource_logging_sink.go @@ -178,6 +178,12 @@ func expandResourceLoggingSinkForUpdate(d *schema.ResourceData) (sink *logging.L sink.BigqueryOptions = expandLoggingSinkBigqueryOptions(d.Get("bigquery_options")) updateFields = append(updateFields, "bigqueryOptions") } + if d.HasChange("include_children") { + updateFields = append(updateFields, "includeChildren") + } + if d.HasChange("intercept_children") { + updateFields = append(updateFields, "interceptChildren") + } updateMask = strings.Join(updateFields, ",") return } diff --git a/google-beta/services/looker/resource_looker_instance.go b/google-beta/services/looker/resource_looker_instance.go index 7fc36135dd..2055705adc 100644 --- a/google-beta/services/looker/resource_looker_instance.go +++ b/google-beta/services/looker/resource_looker_instance.go @@ -20,6 +20,7 @@ package looker import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -335,8 +336,8 @@ disrupt service.`, ForceNew: true, ValidateFunc: verify.ValidateEnum([]string{"LOOKER_CORE_TRIAL", "LOOKER_CORE_STANDARD", "LOOKER_CORE_STANDARD_ANNUAL", "LOOKER_CORE_ENTERPRISE_ANNUAL", "LOOKER_CORE_EMBED_ANNUAL", ""}), Description: `Platform editions for a Looker instance. Each edition maps to a set of instance features, like its size. 
Must be one of these values: -- LOOKER_CORE_TRIAL: trial instance -- LOOKER_CORE_STANDARD: pay as you go standard instance +- LOOKER_CORE_TRIAL: trial instance (Currently Unavailable) +- LOOKER_CORE_STANDARD: pay as you go standard instance (Currently Unavailable) - LOOKER_CORE_STANDARD_ANNUAL: subscription standard instance - LOOKER_CORE_ENTERPRISE_ANNUAL: subscription enterprise instance - LOOKER_CORE_EMBED_ANNUAL: subscription embed instance Default value: "LOOKER_CORE_TRIAL" Possible values: ["LOOKER_CORE_TRIAL", "LOOKER_CORE_STANDARD", "LOOKER_CORE_STANDARD_ANNUAL", "LOOKER_CORE_ENTERPRISE_ANNUAL", "LOOKER_CORE_EMBED_ANNUAL"]`, @@ -549,6 +550,7 @@ func resourceLookerInstanceCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -557,6 +559,7 @@ func resourceLookerInstanceCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -620,12 +623,14 @@ func resourceLookerInstanceRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -783,6 +788,7 @@ func resourceLookerInstanceUpdate(d *schema.ResourceData, meta interface{}) erro } log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("admin_settings") { @@ -850,6 +856,7 @@ func resourceLookerInstanceUpdate(d *schema.ResourceData, meta 
interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -898,6 +905,8 @@ func resourceLookerInstanceDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -907,6 +916,7 @@ func resourceLookerInstanceDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { diff --git a/google-beta/services/looker/resource_looker_instance_generated_test.go b/google-beta/services/looker/resource_looker_instance_generated_test.go index 24732e195c..8c763dde3b 100644 --- a/google-beta/services/looker/resource_looker_instance_generated_test.go +++ b/google-beta/services/looker/resource_looker_instance_generated_test.go @@ -59,7 +59,7 @@ func testAccLookerInstance_lookerInstanceBasicExample(context map[string]interfa return acctest.Nprintf(` resource "google_looker_instance" "looker-instance" { name = "tf-test-my-instance%{random_suffix}" - platform_edition = "LOOKER_CORE_STANDARD" + platform_edition = "LOOKER_CORE_STANDARD_ANNUAL" region = "us-central1" oauth_config { client_id = "tf-test-my-client-id%{random_suffix}" @@ -98,18 +98,12 @@ func testAccLookerInstance_lookerInstanceFullExample(context map[string]interfac return acctest.Nprintf(` resource "google_looker_instance" "looker-instance" { name = "tf-test-my-instance%{random_suffix}" - platform_edition = "LOOKER_CORE_STANDARD" + platform_edition = "LOOKER_CORE_STANDARD_ANNUAL" region = "us-central1" public_ip_enabled = true admin_settings { allowed_email_domains = ["google.com"] } - // 
User metadata config is only available when platform edition is LOOKER_CORE_STANDARD. - user_metadata { - additional_developer_user_count = 10 - additional_standard_user_count = 10 - additional_viewer_user_count = 10 - } maintenance_window { day_of_week = "THURSDAY" start_time { @@ -149,9 +143,9 @@ func TestAccLookerInstance_lookerInstanceEnterpriseFullTestExample(t *testing.T) t.Parallel() context := map[string]interface{}{ - "address_name": acctest.BootstrapSharedTestGlobalAddress(t, "looker-vpc-network-1", acctest.AddressWithPrefixLength(20)), + "address_name": acctest.BootstrapSharedTestGlobalAddress(t, "looker-vpc-network-2"), "kms_key_name": acctest.BootstrapKMSKeyInLocation(t, "us-central1").CryptoKey.Name, - "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "looker-vpc-network-1", acctest.ServiceNetworkWithPrefixLength(20)), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "looker-vpc-network-2"), "random_suffix": acctest.RandString(t, 10), } @@ -269,12 +263,14 @@ func testAccLookerInstance_lookerInstanceCustomDomainExample(context map[string] return acctest.Nprintf(` resource "google_looker_instance" "looker-instance" { name = "tf-test-my-instance%{random_suffix}" - platform_edition = "LOOKER_CORE_STANDARD" + platform_edition = "LOOKER_CORE_STANDARD_ANNUAL" region = "us-central1" oauth_config { client_id = "tf-test-my-client-id%{random_suffix}" client_secret = "tf-test-my-client-secret%{random_suffix}" } + // After your Looker (Google Cloud core) instance has been created, you can set up, view information about, or delete a custom domain for your instance. + // Therefore, two terraform applies are required: one to create the instance, and a second to set up the custom domain.
custom_domain { domain = "tf-test-my-custom-domain%{random_suffix}.com" } diff --git a/google-beta/services/memcache/resource_memcache_instance.go b/google-beta/services/memcache/resource_memcache_instance.go index 6bbb893758..aaebbd5fef 100644 --- a/google-beta/services/memcache/resource_memcache_instance.go +++ b/google-beta/services/memcache/resource_memcache_instance.go @@ -20,6 +20,7 @@ package memcache import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -465,6 +466,7 @@ func resourceMemcacheInstanceCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -473,6 +475,7 @@ func resourceMemcacheInstanceCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Instance: %s", err) @@ -535,12 +538,14 @@ func resourceMemcacheInstanceRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("MemcacheInstance %q", d.Id())) @@ -649,6 +654,7 @@ func resourceMemcacheInstanceUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -688,6 +694,7 @@ func resourceMemcacheInstanceUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -721,6 +728,8 @@ func resourceMemcacheInstanceUpdate(d *schema.ResourceData, 
meta interface{}) er return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -734,6 +743,7 @@ func resourceMemcacheInstanceUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Instance %q: %s", d.Id(), err) @@ -781,6 +791,8 @@ func resourceMemcacheInstanceDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -790,6 +802,7 @@ func resourceMemcacheInstanceDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Instance") diff --git a/google-beta/services/migrationcenter/resource_migration_center_group.go b/google-beta/services/migrationcenter/resource_migration_center_group.go index 451aa50637..77f7fa1455 100644 --- a/google-beta/services/migrationcenter/resource_migration_center_group.go +++ b/google-beta/services/migrationcenter/resource_migration_center_group.go @@ -20,6 +20,7 @@ package migrationcenter import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -170,6 +171,7 @@ func resourceMigrationCenterGroupCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -178,6 +180,7 @@ func resourceMigrationCenterGroupCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { 
return fmt.Errorf("Error creating Group: %s", err) @@ -244,12 +247,14 @@ func resourceMigrationCenterGroupRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("MigrationCenterGroup %q", d.Id())) @@ -328,6 +333,7 @@ func resourceMigrationCenterGroupUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating Group %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -363,6 +369,7 @@ func resourceMigrationCenterGroupUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -410,6 +417,8 @@ func resourceMigrationCenterGroupDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Group %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -419,6 +428,7 @@ func resourceMigrationCenterGroupDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Group") diff --git a/google-beta/services/migrationcenter/resource_migration_center_preference_set.go b/google-beta/services/migrationcenter/resource_migration_center_preference_set.go index 148e3929df..3e9df265ee 100644 --- a/google-beta/services/migrationcenter/resource_migration_center_preference_set.go +++ b/google-beta/services/migrationcenter/resource_migration_center_preference_set.go @@ -20,6 +20,7 @@ package migrationcenter import ( "fmt" "log" + "net/http" 
"reflect" "strings" "time" @@ -334,6 +335,7 @@ func resourceMigrationCenterPreferenceSetCreate(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -342,6 +344,7 @@ func resourceMigrationCenterPreferenceSetCreate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PreferenceSet: %s", err) @@ -408,12 +411,14 @@ func resourceMigrationCenterPreferenceSetRead(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("MigrationCenterPreferenceSet %q", d.Id())) @@ -486,6 +491,7 @@ func resourceMigrationCenterPreferenceSetUpdate(d *schema.ResourceData, meta int } log.Printf("[DEBUG] Updating PreferenceSet %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -521,6 +527,7 @@ func resourceMigrationCenterPreferenceSetUpdate(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -568,6 +575,8 @@ func resourceMigrationCenterPreferenceSetDelete(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PreferenceSet %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -577,6 +586,7 @@ func resourceMigrationCenterPreferenceSetDelete(d *schema.ResourceData, meta int UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if 
err != nil { return transport_tpg.HandleNotFoundError(err, d, "PreferenceSet") diff --git a/google-beta/services/mlengine/resource_ml_engine_model.go b/google-beta/services/mlengine/resource_ml_engine_model.go index fb5c0a1eee..35a5503463 100644 --- a/google-beta/services/mlengine/resource_ml_engine_model.go +++ b/google-beta/services/mlengine/resource_ml_engine_model.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "time" @@ -220,6 +221,7 @@ func resourceMLEngineModelCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -228,6 +230,7 @@ func resourceMLEngineModelCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Model: %s", err) @@ -270,12 +273,14 @@ func resourceMLEngineModelRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("MLEngineModel %q", d.Id())) @@ -348,6 +353,8 @@ func resourceMLEngineModelDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Model %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -357,6 +364,7 @@ func resourceMLEngineModelDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Model") diff 
--git a/google-beta/services/monitoring/resource_monitoring_alert_policy.go b/google-beta/services/monitoring/resource_monitoring_alert_policy.go index 1c6bc26631..bfbeb4786e 100644 --- a/google-beta/services/monitoring/resource_monitoring_alert_policy.go +++ b/google-beta/services/monitoring/resource_monitoring_alert_policy.go @@ -20,6 +20,7 @@ package monitoring import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -1106,6 +1107,7 @@ func resourceMonitoringAlertPolicyCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1114,6 +1116,7 @@ func resourceMonitoringAlertPolicyCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -1179,12 +1182,14 @@ func resourceMonitoringAlertPolicyRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -1317,6 +1322,7 @@ func resourceMonitoringAlertPolicyUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating AlertPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -1376,6 +1382,7 @@ func resourceMonitoringAlertPolicyUpdate(d *schema.ResourceData, meta 
interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -1425,6 +1432,8 @@ func resourceMonitoringAlertPolicyDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AlertPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1434,6 +1443,7 @@ func resourceMonitoringAlertPolicyDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) diff --git a/google-beta/services/monitoring/resource_monitoring_custom_service.go b/google-beta/services/monitoring/resource_monitoring_custom_service.go index 83525df315..f0af556892 100644 --- a/google-beta/services/monitoring/resource_monitoring_custom_service.go +++ b/google-beta/services/monitoring/resource_monitoring_custom_service.go @@ -20,6 +20,7 @@ package monitoring import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -170,6 +171,7 @@ func resourceMonitoringServiceCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -178,6 +180,7 @@ func resourceMonitoringServiceCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: 
[]transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -224,12 +227,14 @@ func resourceMonitoringServiceRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -305,6 +310,7 @@ func resourceMonitoringServiceUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Service %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -340,6 +346,7 @@ func resourceMonitoringServiceUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) @@ -381,6 +388,8 @@ func resourceMonitoringServiceDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Service %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -390,6 +399,7 @@ func resourceMonitoringServiceDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { diff --git a/google-beta/services/monitoring/resource_monitoring_group.go b/google-beta/services/monitoring/resource_monitoring_group.go index 7c55da093c..de8e8d4406 100644 --- a/google-beta/services/monitoring/resource_monitoring_group.go +++ 
b/google-beta/services/monitoring/resource_monitoring_group.go @@ -20,6 +20,7 @@ package monitoring import ( "fmt" "log" + "net/http" "reflect" "time" @@ -155,6 +156,7 @@ func resourceMonitoringGroupCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -163,6 +165,7 @@ func resourceMonitoringGroupCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -227,12 +230,14 @@ func resourceMonitoringGroupRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -316,6 +321,7 @@ func resourceMonitoringGroupUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Group %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -330,6 +336,7 @@ func resourceMonitoringGroupUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) @@ -376,6 +383,8 @@ func resourceMonitoringGroupDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + 
log.Printf("[DEBUG] Deleting Group %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -385,6 +394,7 @@ func resourceMonitoringGroupDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { diff --git a/google-beta/services/monitoring/resource_monitoring_metric_descriptor.go b/google-beta/services/monitoring/resource_monitoring_metric_descriptor.go index cf3ea060ce..d771f43779 100644 --- a/google-beta/services/monitoring/resource_monitoring_metric_descriptor.go +++ b/google-beta/services/monitoring/resource_monitoring_metric_descriptor.go @@ -20,6 +20,7 @@ package monitoring import ( "fmt" "log" + "net/http" "reflect" "time" @@ -278,6 +279,7 @@ func resourceMonitoringMetricDescriptorCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -286,6 +288,7 @@ func resourceMonitoringMetricDescriptorCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -380,12 +383,14 @@ func resourceMonitoringMetricDescriptorRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -504,6 +509,7 @@ func 
resourceMonitoringMetricDescriptorUpdate(d *schema.ResourceData, meta inter } log.Printf("[DEBUG] Updating MetricDescriptor %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -518,6 +524,7 @@ func resourceMonitoringMetricDescriptorUpdate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) @@ -562,6 +569,8 @@ func resourceMonitoringMetricDescriptorDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting MetricDescriptor %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -571,6 +580,7 @@ func resourceMonitoringMetricDescriptorDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { diff --git a/google-beta/services/monitoring/resource_monitoring_monitored_project.go b/google-beta/services/monitoring/resource_monitoring_monitored_project.go index cfde7a9558..fb189a4ed8 100644 --- a/google-beta/services/monitoring/resource_monitoring_monitored_project.go +++ b/google-beta/services/monitoring/resource_monitoring_monitored_project.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strconv" "strings" @@ -142,6 +143,7 @@ func resourceMonitoringMonitoredProjectCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -150,6 +152,7 @@ func 
resourceMonitoringMonitoredProjectCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringPermissionError}, }) if err != nil { @@ -187,6 +190,7 @@ func resourceMonitoringMonitoredProjectRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) name := d.Get("name").(string) name = tpgresource.GetResourceNameFromSelfLink(name) d.Set("name", name) @@ -203,6 +207,7 @@ func resourceMonitoringMonitoredProjectRead(d *schema.ResourceData, meta interfa Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringPermissionError}, }) if err != nil { @@ -252,6 +257,8 @@ func resourceMonitoringMonitoredProjectDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting MonitoredProject %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -261,6 +268,7 @@ func resourceMonitoringMonitoredProjectDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringPermissionError}, }) if err != nil { diff --git a/google-beta/services/monitoring/resource_monitoring_notification_channel.go b/google-beta/services/monitoring/resource_monitoring_notification_channel.go index 0030924bb7..c5c9f38d4d 100644 --- a/google-beta/services/monitoring/resource_monitoring_notification_channel.go +++ b/google-beta/services/monitoring/resource_monitoring_notification_channel.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -253,6 +254,7 @@ func 
resourceMonitoringNotificationChannelCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -261,6 +263,7 @@ func resourceMonitoringNotificationChannelCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -325,12 +328,14 @@ func resourceMonitoringNotificationChannelRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -458,6 +463,7 @@ func resourceMonitoringNotificationChannelUpdate(d *schema.ResourceData, meta in } log.Printf("[DEBUG] Updating NotificationChannel %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -472,6 +478,7 @@ func resourceMonitoringNotificationChannelUpdate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) @@ -518,6 +525,8 @@ func resourceMonitoringNotificationChannelDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NotificationChannel %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -527,6 +536,7 
@@ func resourceMonitoringNotificationChannelDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { diff --git a/google-beta/services/monitoring/resource_monitoring_service.go b/google-beta/services/monitoring/resource_monitoring_service.go index 5efaa3d6af..69add341d0 100644 --- a/google-beta/services/monitoring/resource_monitoring_service.go +++ b/google-beta/services/monitoring/resource_monitoring_service.go @@ -20,6 +20,7 @@ package monitoring import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -182,6 +183,7 @@ func resourceMonitoringGenericServiceCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -190,6 +192,7 @@ func resourceMonitoringGenericServiceCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -236,12 +239,14 @@ func resourceMonitoringGenericServiceRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -306,6 +311,7 @@ func resourceMonitoringGenericServiceUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating GenericService %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} 
if d.HasChange("display_name") { @@ -337,6 +343,7 @@ func resourceMonitoringGenericServiceUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) @@ -378,6 +385,8 @@ func resourceMonitoringGenericServiceDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting GenericService %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -387,6 +396,7 @@ func resourceMonitoringGenericServiceDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { diff --git a/google-beta/services/monitoring/resource_monitoring_slo.go b/google-beta/services/monitoring/resource_monitoring_slo.go index bfaedacd1e..b226c65c75 100644 --- a/google-beta/services/monitoring/resource_monitoring_slo.go +++ b/google-beta/services/monitoring/resource_monitoring_slo.go @@ -20,6 +20,7 @@ package monitoring import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -832,6 +833,7 @@ func resourceMonitoringSloCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -840,6 +842,7 @@ func resourceMonitoringSloCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Slo: %s", err) @@ -885,12 +888,14 @@ func resourceMonitoringSloRead(d *schema.ResourceData, meta interface{}) error { 
billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("MonitoringSlo %q", d.Id())) @@ -1011,6 +1016,7 @@ func resourceMonitoringSloUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Slo %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -1082,6 +1088,7 @@ func resourceMonitoringSloUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1129,6 +1136,8 @@ func resourceMonitoringSloDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Slo %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1138,6 +1147,7 @@ func resourceMonitoringSloDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Slo") diff --git a/google-beta/services/monitoring/resource_monitoring_uptime_check_config.go b/google-beta/services/monitoring/resource_monitoring_uptime_check_config.go index 01aae2673b..4e2c15bf6d 100644 --- a/google-beta/services/monitoring/resource_monitoring_uptime_check_config.go +++ b/google-beta/services/monitoring/resource_monitoring_uptime_check_config.go @@ -20,6 +20,7 @@ package monitoring import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -150,7 +151,7 @@ func ResourceMonitoringUptimeCheckConfig() *schema.Resource { "auth_info": { Type: schema.TypeList, Optional: true, - Description: `The 
authentication information. Optional when creating an HTTP check; defaults to empty.`, + Description: `The authentication information using username and password. Optional when creating an HTTP check; defaults to empty. Do not use with other authentication fields.`, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -237,6 +238,22 @@ func ResourceMonitoringUptimeCheckConfig() *schema.Resource { Description: `The HTTP request method to use for the check. If set to 'METHOD_UNSPECIFIED' then 'request_method' defaults to 'GET'. Default value: "GET" Possible values: ["METHOD_UNSPECIFIED", "GET", "POST"]`, Default: "GET", }, + "service_agent_authentication": { + Type: schema.TypeList, + Optional: true, + Description: `The authentication information using the Monitoring Service Agent. Optional when creating an HTTPS check; defaults to empty. Do not use with other authentication fields.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"SERVICE_AGENT_AUTHENTICATION_TYPE_UNSPECIFIED", "OIDC_TOKEN", ""}), + Description: `The type of authentication to use. 
Possible values: ["SERVICE_AGENT_AUTHENTICATION_TYPE_UNSPECIFIED", "OIDC_TOKEN"]`, + }, + }, + }, + }, "use_ssl": { Type: schema.TypeBool, Optional: true, @@ -522,6 +539,7 @@ func resourceMonitoringUptimeCheckConfigCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -530,6 +548,7 @@ func resourceMonitoringUptimeCheckConfigCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -594,12 +613,14 @@ func resourceMonitoringUptimeCheckConfigRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) if err != nil { @@ -734,6 +755,7 @@ func resourceMonitoringUptimeCheckConfigUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating UptimeCheckConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -789,6 +811,7 @@ func resourceMonitoringUptimeCheckConfigUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.IsMonitoringConcurrentEditError}, }) @@ -974,6 +997,8 @@ func flattenMonitoringUptimeCheckConfigHttpCheck(v interface{}, d *schema.Resour flattenMonitoringUptimeCheckConfigHttpCheckCustomContentType(original["customContentType"], d, config) 
transformed["auth_info"] = flattenMonitoringUptimeCheckConfigHttpCheckAuthInfo(original["authInfo"], d, config) + transformed["service_agent_authentication"] = + flattenMonitoringUptimeCheckConfigHttpCheckServiceAgentAuthentication(original["serviceAgentAuthentication"], d, config) transformed["port"] = flattenMonitoringUptimeCheckConfigHttpCheckPort(original["port"], d, config) transformed["headers"] = @@ -1029,6 +1054,23 @@ func flattenMonitoringUptimeCheckConfigHttpCheckAuthInfoUsername(v interface{}, return v } +func flattenMonitoringUptimeCheckConfigHttpCheckServiceAgentAuthentication(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["type"] = + flattenMonitoringUptimeCheckConfigHttpCheckServiceAgentAuthenticationType(original["type"], d, config) + return []interface{}{transformed} +} +func flattenMonitoringUptimeCheckConfigHttpCheckServiceAgentAuthenticationType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenMonitoringUptimeCheckConfigHttpCheckPort(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { // Handles the string fixed64 format if strVal, ok := v.(string); ok { @@ -1425,6 +1467,13 @@ func expandMonitoringUptimeCheckConfigHttpCheck(v interface{}, d tpgresource.Ter transformed["authInfo"] = transformedAuthInfo } + transformedServiceAgentAuthentication, err := expandMonitoringUptimeCheckConfigHttpCheckServiceAgentAuthentication(original["service_agent_authentication"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedServiceAgentAuthentication); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["serviceAgentAuthentication"] = transformedServiceAgentAuthentication + } + transformedPort, err := 
expandMonitoringUptimeCheckConfigHttpCheckPort(original["port"], d, config) if err != nil { return nil, err @@ -1537,6 +1586,29 @@ func expandMonitoringUptimeCheckConfigHttpCheckAuthInfoUsername(v interface{}, d return v, nil } +func expandMonitoringUptimeCheckConfigHttpCheckServiceAgentAuthentication(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedType, err := expandMonitoringUptimeCheckConfigHttpCheckServiceAgentAuthenticationType(original["type"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedType); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["type"] = transformedType + } + + return transformed, nil +} + +func expandMonitoringUptimeCheckConfigHttpCheckServiceAgentAuthenticationType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandMonitoringUptimeCheckConfigHttpCheckPort(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google-beta/services/monitoring/resource_monitoring_uptime_check_config_generated_test.go b/google-beta/services/monitoring/resource_monitoring_uptime_check_config_generated_test.go index 62d11fae5b..7cd4a27bd1 100644 --- a/google-beta/services/monitoring/resource_monitoring_uptime_check_config_generated_test.go +++ b/google-beta/services/monitoring/resource_monitoring_uptime_check_config_generated_test.go @@ -206,6 +206,9 @@ resource "google_monitoring_uptime_check_config" "https" { port = "443" use_ssl = true validate_ssl = true + service_agent_authentication { + type = "OIDC_TOKEN" + } } monitored_resource { diff --git 
a/google-beta/services/netapp/resource_netapp_active_directory.go b/google-beta/services/netapp/resource_netapp_active_directory.go index 911673070f..a48e5c846d 100644 --- a/google-beta/services/netapp/resource_netapp_active_directory.go +++ b/google-beta/services/netapp/resource_netapp_active_directory.go @@ -20,6 +20,7 @@ package netapp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -339,6 +340,7 @@ func resourceNetappactiveDirectoryCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -347,6 +349,7 @@ func resourceNetappactiveDirectoryCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating activeDirectory: %s", err) @@ -399,12 +402,14 @@ func resourceNetappactiveDirectoryRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetappactiveDirectory %q", d.Id())) @@ -606,6 +611,7 @@ func resourceNetappactiveDirectoryUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating activeDirectory %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("domain") { @@ -697,6 +703,7 @@ func resourceNetappactiveDirectoryUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -744,6 +751,8 @@ func resourceNetappactiveDirectoryDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := 
make(http.Header) + log.Printf("[DEBUG] Deleting activeDirectory %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -753,6 +762,7 @@ func resourceNetappactiveDirectoryDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "activeDirectory") diff --git a/google-beta/services/netapp/resource_netapp_backup_policy.go b/google-beta/services/netapp/resource_netapp_backup_policy.go index 352937ffb6..f13732e7ce 100644 --- a/google-beta/services/netapp/resource_netapp_backup_policy.go +++ b/google-beta/services/netapp/resource_netapp_backup_policy.go @@ -20,6 +20,7 @@ package netapp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -206,6 +207,7 @@ func resourceNetappbackupPolicyCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -214,6 +216,7 @@ func resourceNetappbackupPolicyCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating backupPolicy: %s", err) @@ -266,12 +269,14 @@ func resourceNetappbackupPolicyRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetappbackupPolicy %q", d.Id())) @@ -377,6 +382,7 @@ func resourceNetappbackupPolicyUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating backupPolicy %q: %#v", d.Id(), obj) 
+ headers := make(http.Header) updateMask := []string{} if d.HasChange("daily_backup_limit") { @@ -424,6 +430,7 @@ func resourceNetappbackupPolicyUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -471,6 +478,8 @@ func resourceNetappbackupPolicyDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting backupPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -480,6 +489,7 @@ func resourceNetappbackupPolicyDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "backupPolicy") diff --git a/google-beta/services/netapp/resource_netapp_backup_vault.go b/google-beta/services/netapp/resource_netapp_backup_vault.go index 16981cefc8..1030184c7e 100644 --- a/google-beta/services/netapp/resource_netapp_backup_vault.go +++ b/google-beta/services/netapp/resource_netapp_backup_vault.go @@ -20,6 +20,7 @@ package netapp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -155,6 +156,7 @@ func resourceNetappbackupVaultCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -163,6 +165,7 @@ func resourceNetappbackupVaultCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating backupVault: %s", err) @@ -215,12 +218,14 @@ func resourceNetappbackupVaultRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetappbackupVault %q", d.Id())) @@ -287,6 +292,7 @@ func resourceNetappbackupVaultUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating backupVault %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -318,6 +324,7 @@ func resourceNetappbackupVaultUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -365,6 +372,8 @@ func resourceNetappbackupVaultDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting backupVault %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -374,6 +383,7 @@ func resourceNetappbackupVaultDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "backupVault") diff --git a/google-beta/services/netapp/resource_netapp_kmsconfig.go b/google-beta/services/netapp/resource_netapp_kmsconfig.go index 54c9db2aaa..e76ac247a3 100644 --- a/google-beta/services/netapp/resource_netapp_kmsconfig.go +++ b/google-beta/services/netapp/resource_netapp_kmsconfig.go @@ -20,6 +20,7 @@ package netapp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -168,6 +169,7 @@ func resourceNetappkmsconfigCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -176,6 +178,7 @@ func 
resourceNetappkmsconfigCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating kmsconfig: %s", err) @@ -246,12 +249,14 @@ func resourceNetappkmsconfigRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Netappkmsconfig %q", d.Id())) @@ -327,6 +332,7 @@ func resourceNetappkmsconfigUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating kmsconfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -362,6 +368,7 @@ func resourceNetappkmsconfigUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -409,6 +416,8 @@ func resourceNetappkmsconfigDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting kmsconfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -418,6 +427,7 @@ func resourceNetappkmsconfigDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "kmsconfig") diff --git a/google-beta/services/netapp/resource_netapp_storage_pool.go b/google-beta/services/netapp/resource_netapp_storage_pool.go index ebad7a9400..8d7130fe08 100644 --- a/google-beta/services/netapp/resource_netapp_storage_pool.go +++ 
b/google-beta/services/netapp/resource_netapp_storage_pool.go @@ -20,6 +20,7 @@ package netapp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -236,6 +237,7 @@ func resourceNetappstoragePoolCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -244,6 +246,7 @@ func resourceNetappstoragePoolCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating storagePool: %s", err) @@ -296,12 +299,14 @@ func resourceNetappstoragePoolRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetappstoragePool %q", d.Id())) @@ -401,6 +406,7 @@ func resourceNetappstoragePoolUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating storagePool %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("capacity_gib") { @@ -440,6 +446,7 @@ func resourceNetappstoragePoolUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -487,6 +494,8 @@ func resourceNetappstoragePoolDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting storagePool %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -496,6 +505,7 @@ func resourceNetappstoragePoolDelete(d *schema.ResourceData, meta 
interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "storagePool") diff --git a/google-beta/services/netapp/resource_netapp_volume.go b/google-beta/services/netapp/resource_netapp_volume.go index 281d339e66..2f0bf8b083 100644 --- a/google-beta/services/netapp/resource_netapp_volume.go +++ b/google-beta/services/netapp/resource_netapp_volume.go @@ -20,6 +20,7 @@ package netapp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -632,6 +633,7 @@ func resourceNetappVolumeCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -640,6 +642,7 @@ func resourceNetappVolumeCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Volume: %s", err) @@ -692,12 +695,14 @@ func resourceNetappVolumeRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetappVolume %q", d.Id())) @@ -887,6 +892,7 @@ func resourceNetappVolumeUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Volume %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("storage_pool") { @@ -950,6 +956,7 @@ func resourceNetappVolumeUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { 
@@ -997,6 +1004,7 @@ func resourceNetappVolumeDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) // Delete volume even when nested snapshots do exist if deletionPolicy := d.Get("deletion_policy"); deletionPolicy == "FORCE" { url = url + "?force=true" @@ -1011,6 +1019,7 @@ func resourceNetappVolumeDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Volume") diff --git a/google-beta/services/netapp/resource_netapp_volume_replication.go b/google-beta/services/netapp/resource_netapp_volume_replication.go index 0dff89fa67..811e743531 100644 --- a/google-beta/services/netapp/resource_netapp_volume_replication.go +++ b/google-beta/services/netapp/resource_netapp_volume_replication.go @@ -20,6 +20,7 @@ package netapp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -394,6 +395,7 @@ func resourceNetappVolumeReplicationCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -402,6 +404,7 @@ func resourceNetappVolumeReplicationCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating VolumeReplication: %s", err) @@ -462,12 +465,14 @@ func resourceNetappVolumeReplicationRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetappVolumeReplication %q", d.Id())) 
@@ -591,6 +596,7 @@ func resourceNetappVolumeReplicationUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating VolumeReplication %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("replication_schedule") { @@ -630,6 +636,7 @@ func resourceNetappVolumeReplicationUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -766,6 +773,7 @@ func resourceNetappVolumeReplicationDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) // A replication can only be deleted if mirrorState==STOPPED // We are about to delete the replication and need to stop the mirror before. // FYI: Stopping a PREPARING mirror currently doesn't work. User have to wait until @@ -809,6 +817,7 @@ func resourceNetappVolumeReplicationDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "VolumeReplication") diff --git a/google-beta/services/netapp/resource_netapp_volume_snapshot.go b/google-beta/services/netapp/resource_netapp_volume_snapshot.go index f57cc3a350..af43463bbc 100644 --- a/google-beta/services/netapp/resource_netapp_volume_snapshot.go +++ b/google-beta/services/netapp/resource_netapp_volume_snapshot.go @@ -20,6 +20,7 @@ package netapp import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -156,6 +157,7 @@ func resourceNetappVolumeSnapshotCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -164,6 +166,7 @@ func resourceNetappVolumeSnapshotCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + 
Headers: headers, }) if err != nil { return fmt.Errorf("Error creating VolumeSnapshot: %s", err) @@ -216,12 +219,14 @@ func resourceNetappVolumeSnapshotRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetappVolumeSnapshot %q", d.Id())) @@ -285,6 +290,7 @@ func resourceNetappVolumeSnapshotUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating VolumeSnapshot %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -316,6 +322,7 @@ func resourceNetappVolumeSnapshotUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -363,6 +370,8 @@ func resourceNetappVolumeSnapshotDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting VolumeSnapshot %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -372,6 +381,7 @@ func resourceNetappVolumeSnapshotDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "VolumeSnapshot") diff --git a/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range.go b/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range.go new file mode 100644 index 0000000000..26e5647cd6 --- /dev/null +++ b/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range.go @@ -0,0 +1,724 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. +// +// ---------------------------------------------------------------------------- + +package networkconnectivity + +import ( + "fmt" + "log" + "net/http" + "reflect" + "strings" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/verify" +) + +func ResourceNetworkConnectivityInternalRange() *schema.Resource { + return &schema.Resource{ + Create: resourceNetworkConnectivityInternalRangeCreate, + Read: resourceNetworkConnectivityInternalRangeRead, + Update: resourceNetworkConnectivityInternalRangeUpdate, + Delete: resourceNetworkConnectivityInternalRangeDelete, + + Importer: &schema.ResourceImporter{ + State: resourceNetworkConnectivityInternalRangeImport, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(30 * time.Minute), + Update: schema.DefaultTimeout(30 * time.Minute), + Delete: schema.DefaultTimeout(30 * time.Minute), + }, + + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the internal range.`, + }, + "network": { + Type:
schema.TypeString, + Required: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `Fully-qualified URL of the network that this route applies to, for example: projects/my-project/global/networks/my-network.`, + }, + "peering": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidateEnum([]string{"FOR_SELF", "FOR_PEER", "NOT_SHARED"}), + Description: `The type of peering set for this internal range. Possible values: ["FOR_SELF", "FOR_PEER", "NOT_SHARED"]`, + }, + "usage": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidateEnum([]string{"FOR_VPC", "EXTERNAL_TO_VPC"}), + Description: `The type of usage set for this InternalRange. Possible values: ["FOR_VPC", "EXTERNAL_TO_VPC"]`, + }, + "description": { + Type: schema.TypeString, + Optional: true, + Description: `An optional description of this resource.`, + }, + "ip_cidr_range": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: `The IP range that this internal range defines.`, + }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: `User-defined labels. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "overlaps": { + Type: schema.TypeList, + Optional: true, + Description: `Optional. Types of resources that are allowed to overlap with the current internal range. Possible values: ["OVERLAP_ROUTE_RANGE", "OVERLAP_EXISTING_SUBNET_RANGE"]`, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidateEnum([]string{"OVERLAP_ROUTE_RANGE", "OVERLAP_EXISTING_SUBNET_RANGE"}), + }, + }, + "prefix_length": { + Type: schema.TypeInt, + Optional: true, + Description: `An alternate to ipCidrRange. 
Can be set when trying to create a reservation that automatically finds a free range of the given size. +If both ipCidrRange and prefixLength are set, there is an error if the range sizes do not match. Can also be used during updates to change the range size.`, + }, + "target_cidr_range": { + Type: schema.TypeList, + Optional: true, + Description: `Optional. Can be set to narrow down or pick a different address space while searching for a free range. +If not set, defaults to the "10.0.0.0/8" address space. This can be used to search in other rfc-1918 address spaces like "172.16.0.0/12" and "192.168.0.0/16" or non-rfc-1918 address spaces used in the VPC.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "users": { + Type: schema.TypeList, + Computed: true, + Description: `Output only. The list of resources that refer to this internal range. +Resources that use the internal range for their range allocation are referred to as users of the range. +Other resources mark themselves as users while doing so by creating a reference to this internal range. Having a user, based on this reference, prevents deletion of the internal range referred to. 
Can be empty.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + UseJSONNumber: true, + } +} + +func resourceNetworkConnectivityInternalRangeCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + obj := make(map[string]interface{}) + descriptionProp, err := expandNetworkConnectivityInternalRangeDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + ipCidrRangeProp, err := expandNetworkConnectivityInternalRangeIpCidrRange(d.Get("ip_cidr_range"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("ip_cidr_range"); !tpgresource.IsEmptyValue(reflect.ValueOf(ipCidrRangeProp)) && (ok || !reflect.DeepEqual(v, ipCidrRangeProp)) { + obj["ipCidrRange"] = ipCidrRangeProp + } + networkProp, err := expandNetworkConnectivityInternalRangeNetwork(d.Get("network"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("network"); !tpgresource.IsEmptyValue(reflect.ValueOf(networkProp)) && (ok || !reflect.DeepEqual(v, networkProp)) { + obj["network"] = networkProp + } + usageProp, err := expandNetworkConnectivityInternalRangeUsage(d.Get("usage"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("usage"); !tpgresource.IsEmptyValue(reflect.ValueOf(usageProp)) && (ok || !reflect.DeepEqual(v, usageProp)) { + obj["usage"] = usageProp + } + peeringProp, err := expandNetworkConnectivityInternalRangePeering(d.Get("peering"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("peering"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(peeringProp)) && (ok || !reflect.DeepEqual(v, peeringProp)) { + obj["peering"] = peeringProp + } + prefixLengthProp, err := expandNetworkConnectivityInternalRangePrefixLength(d.Get("prefix_length"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("prefix_length"); !tpgresource.IsEmptyValue(reflect.ValueOf(prefixLengthProp)) && (ok || !reflect.DeepEqual(v, prefixLengthProp)) { + obj["prefixLength"] = prefixLengthProp + } + targetCidrRangeProp, err := expandNetworkConnectivityInternalRangeTargetCidrRange(d.Get("target_cidr_range"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("target_cidr_range"); !tpgresource.IsEmptyValue(reflect.ValueOf(targetCidrRangeProp)) && (ok || !reflect.DeepEqual(v, targetCidrRangeProp)) { + obj["targetCidrRange"] = targetCidrRangeProp + } + overlapsProp, err := expandNetworkConnectivityInternalRangeOverlaps(d.Get("overlaps"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("overlaps"); !tpgresource.IsEmptyValue(reflect.ValueOf(overlapsProp)) && (ok || !reflect.DeepEqual(v, overlapsProp)) { + obj["overlaps"] = overlapsProp + } + labelsProp, err := expandNetworkConnectivityInternalRangeEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + + url, err := tpgresource.ReplaceVars(d, config, "{{NetworkConnectivityBasePath}}projects/{{project}}/locations/global/internalRanges?internalRangeId={{name}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Creating new InternalRange: %#v", obj) + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for InternalRange: %s", err) + } + billingProject = project + + 
// err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "POST", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, + }) + if err != nil { + return fmt.Errorf("Error creating InternalRange: %s", err) + } + + // Store the ID now + id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/global/internalRanges/{{name}}") + if err != nil { + return fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + err = NetworkConnectivityOperationWaitTime( + config, res, project, "Creating InternalRange", userAgent, + d.Timeout(schema.TimeoutCreate)) + + if err != nil { + // The resource didn't actually create + d.SetId("") + return fmt.Errorf("Error waiting to create InternalRange: %s", err) + } + + log.Printf("[DEBUG] Finished creating InternalRange %q: %#v", d.Id(), res) + + return resourceNetworkConnectivityInternalRangeRead(d, meta) +} + +func resourceNetworkConnectivityInternalRangeRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + url, err := tpgresource.ReplaceVars(d, config, "{{NetworkConnectivityBasePath}}projects/{{project}}/locations/global/internalRanges/{{name}}") + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for InternalRange: %s", err) + } + billingProject = project + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject 
= bp + } + + headers := make(http.Header) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkConnectivityInternalRange %q", d.Id())) + } + + if err := d.Set("project", project); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + + if err := d.Set("labels", flattenNetworkConnectivityInternalRangeLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("description", flattenNetworkConnectivityInternalRangeDescription(res["description"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("ip_cidr_range", flattenNetworkConnectivityInternalRangeIpCidrRange(res["ipCidrRange"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("network", flattenNetworkConnectivityInternalRangeNetwork(res["network"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("usage", flattenNetworkConnectivityInternalRangeUsage(res["usage"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("peering", flattenNetworkConnectivityInternalRangePeering(res["peering"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("prefix_length", flattenNetworkConnectivityInternalRangePrefixLength(res["prefixLength"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("target_cidr_range", flattenNetworkConnectivityInternalRangeTargetCidrRange(res["targetCidrRange"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: 
%s", err) + } + if err := d.Set("users", flattenNetworkConnectivityInternalRangeUsers(res["users"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("overlaps", flattenNetworkConnectivityInternalRangeOverlaps(res["overlaps"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("terraform_labels", flattenNetworkConnectivityInternalRangeTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + if err := d.Set("effective_labels", flattenNetworkConnectivityInternalRangeEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading InternalRange: %s", err) + } + + return nil +} + +func resourceNetworkConnectivityInternalRangeUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for InternalRange: %s", err) + } + billingProject = project + + obj := make(map[string]interface{}) + descriptionProp, err := expandNetworkConnectivityInternalRangeDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + ipCidrRangeProp, err := expandNetworkConnectivityInternalRangeIpCidrRange(d.Get("ip_cidr_range"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("ip_cidr_range"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, ipCidrRangeProp)) { + obj["ipCidrRange"] = ipCidrRangeProp + } + networkProp, err := 
expandNetworkConnectivityInternalRangeNetwork(d.Get("network"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("network"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, networkProp)) { + obj["network"] = networkProp + } + usageProp, err := expandNetworkConnectivityInternalRangeUsage(d.Get("usage"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("usage"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, usageProp)) { + obj["usage"] = usageProp + } + peeringProp, err := expandNetworkConnectivityInternalRangePeering(d.Get("peering"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("peering"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, peeringProp)) { + obj["peering"] = peeringProp + } + prefixLengthProp, err := expandNetworkConnectivityInternalRangePrefixLength(d.Get("prefix_length"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("prefix_length"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, prefixLengthProp)) { + obj["prefixLength"] = prefixLengthProp + } + targetCidrRangeProp, err := expandNetworkConnectivityInternalRangeTargetCidrRange(d.Get("target_cidr_range"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("target_cidr_range"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, targetCidrRangeProp)) { + obj["targetCidrRange"] = targetCidrRangeProp + } + overlapsProp, err := expandNetworkConnectivityInternalRangeOverlaps(d.Get("overlaps"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("overlaps"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, overlapsProp)) { + obj["overlaps"] = overlapsProp + } + labelsProp, err := expandNetworkConnectivityInternalRangeEffectiveLabels(d.Get("effective_labels"), d, 
config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + + url, err := tpgresource.ReplaceVars(d, config, "{{NetworkConnectivityBasePath}}projects/{{project}}/locations/global/internalRanges/{{name}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Updating InternalRange %q: %#v", d.Id(), obj) + headers := make(http.Header) + updateMask := []string{} + + if d.HasChange("description") { + updateMask = append(updateMask, "description") + } + + if d.HasChange("ip_cidr_range") { + updateMask = append(updateMask, "ipCidrRange") + } + + if d.HasChange("network") { + updateMask = append(updateMask, "network") + } + + if d.HasChange("usage") { + updateMask = append(updateMask, "usage") + } + + if d.HasChange("peering") { + updateMask = append(updateMask, "peering") + } + + if d.HasChange("prefix_length") { + updateMask = append(updateMask, "prefixLength") + } + + if d.HasChange("target_cidr_range") { + updateMask = append(updateMask, "targetCidrRange") + } + + if d.HasChange("overlaps") { + updateMask = append(updateMask, "overlaps") + } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } + // updateMask is a URL parameter but not present in the schema, so ReplaceVars + // won't set it + url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) + if err != nil { + return err + } + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + // if updateMask is empty we are not updating anything so skip the post + if len(updateMask) > 0 { + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "PATCH", + Project: billingProject, + RawURL: url, + UserAgent: 
userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, + }) + + if err != nil { + return fmt.Errorf("Error updating InternalRange %q: %s", d.Id(), err) + } else { + log.Printf("[DEBUG] Finished updating InternalRange %q: %#v", d.Id(), res) + } + + err = NetworkConnectivityOperationWaitTime( + config, res, project, "Updating InternalRange", userAgent, + d.Timeout(schema.TimeoutUpdate)) + + if err != nil { + return err + } + } + + return resourceNetworkConnectivityInternalRangeRead(d, meta) +} + +func resourceNetworkConnectivityInternalRangeDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for InternalRange: %s", err) + } + billingProject = project + + url, err := tpgresource.ReplaceVars(d, config, "{{NetworkConnectivityBasePath}}projects/{{project}}/locations/global/internalRanges/{{name}}") + if err != nil { + return err + } + + var obj map[string]interface{} + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + + log.Printf("[DEBUG] Deleting InternalRange %q", d.Id()) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "DELETE", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, "InternalRange") + } + + err = NetworkConnectivityOperationWaitTime( + config, res, project, "Deleting InternalRange", userAgent, + d.Timeout(schema.TimeoutDelete)) + + if err != nil { + return err + } + + 
log.Printf("[DEBUG] Finished deleting InternalRange %q: %#v", d.Id(), res) + return nil +} + +func resourceNetworkConnectivityInternalRangeImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + config := meta.(*transport_tpg.Config) + if err := tpgresource.ParseImportId([]string{ + "^projects/(?P<project>[^/]+)/locations/global/internalRanges/(?P<name>[^/]+)$", + "^(?P<project>[^/]+)/(?P<name>[^/]+)$", + "^(?P<name>[^/]+)$", + }, d, config); err != nil { + return nil, err + } + + // Replace import id for the resource id + id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/global/internalRanges/{{name}}") + if err != nil { + return nil, fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + return []*schema.ResourceData{d}, nil +} + +func flattenNetworkConnectivityInternalRangeLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenNetworkConnectivityInternalRangeDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenNetworkConnectivityInternalRangeIpCidrRange(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenNetworkConnectivityInternalRangeNetwork(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + return tpgresource.ConvertSelfLinkToV1(v.(string)) +} + +func flattenNetworkConnectivityInternalRangeUsage(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenNetworkConnectivityInternalRangePeering(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v
+} + +func flattenNetworkConnectivityInternalRangePrefixLength(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + // Handles the string fixed64 format + if strVal, ok := v.(string); ok { + if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { + return intVal + } + } + + // number values are represented as float64 + if floatVal, ok := v.(float64); ok { + intVal := int(floatVal) + return intVal + } + + return v // let terraform core handle it otherwise +} + +func flattenNetworkConnectivityInternalRangeTargetCidrRange(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenNetworkConnectivityInternalRangeUsers(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenNetworkConnectivityInternalRangeOverlaps(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenNetworkConnectivityInternalRangeTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenNetworkConnectivityInternalRangeEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandNetworkConnectivityInternalRangeDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNetworkConnectivityInternalRangeIpCidrRange(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNetworkConnectivityInternalRangeNetwork(v interface{}, d 
tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNetworkConnectivityInternalRangeUsage(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNetworkConnectivityInternalRangePeering(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNetworkConnectivityInternalRangePrefixLength(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNetworkConnectivityInternalRangeTargetCidrRange(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNetworkConnectivityInternalRangeOverlaps(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandNetworkConnectivityInternalRangeEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range_generated_test.go b/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range_generated_test.go new file mode 100644 index 0000000000..9a15d1bd49 --- /dev/null +++ b/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range_generated_test.go @@ -0,0 +1,266 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. +// +// ---------------------------------------------------------------------------- + +package networkconnectivity_test + +import ( + "fmt" + "strings" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +func TestAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesBasicExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckNetworkConnectivityInternalRangeDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesBasicExample(context), + }, + { + ResourceName: "google_network_connectivity_internal_range.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name", "network", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesBasicExample(context map[string]interface{}) string { + return 
acctest.Nprintf(` +resource "google_network_connectivity_internal_range" "default" { + name = "basic%{random_suffix}" + description = "Test internal range" + network = google_compute_network.default.self_link + usage = "FOR_VPC" + peering = "FOR_SELF" + ip_cidr_range = "10.0.0.0/24" + + labels = { + label-a: "b" + } +} + +resource "google_compute_network" "default" { + name = "tf-test-internal-ranges%{random_suffix}" + auto_create_subnetworks = false +} +`, context) +} + +func TestAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesAutomaticReservationExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckNetworkConnectivityInternalRangeDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesAutomaticReservationExample(context), + }, + { + ResourceName: "google_network_connectivity_internal_range.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name", "network", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesAutomaticReservationExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_network_connectivity_internal_range" "default" { + name = "tf-test-automatic-reservation%{random_suffix}" + network = google_compute_network.default.id + usage = "FOR_VPC" + peering = "FOR_SELF" + prefix_length = 24 + target_cidr_range = [ + "192.16.0.0/16" + ] +} + +resource "google_compute_network" "default" { + name = "tf-test-internal-ranges%{random_suffix}" + auto_create_subnetworks = false +} +`, context) +} + +func 
TestAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesExternalRangesExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckNetworkConnectivityInternalRangeDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesExternalRangesExample(context), + }, + { + ResourceName: "google_network_connectivity_internal_range.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name", "network", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesExternalRangesExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_network_connectivity_internal_range" "default" { + name = "tf-test-external-ranges%{random_suffix}" + network = google_compute_network.default.id + usage = "EXTERNAL_TO_VPC" + peering = "FOR_SELF" + ip_cidr_range = "172.16.0.0/24" + + labels = { + external-reserved-range: "on-premises" + } +} + +resource "google_compute_network" "default" { + name = "tf-test-internal-ranges%{random_suffix}" + auto_create_subnetworks = false +} +`, context) +} + +func TestAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesReserveWithOverlapExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckNetworkConnectivityInternalRangeDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: 
testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesReserveWithOverlapExample(context), + }, + { + ResourceName: "google_network_connectivity_internal_range.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name", "network", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesReserveWithOverlapExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_network_connectivity_internal_range" "default" { + name = "tf-test-overlap-range%{random_suffix}" + description = "Test internal range" + network = google_compute_network.default.id + usage = "FOR_VPC" + peering = "FOR_SELF" + ip_cidr_range = "10.0.0.0/30" + + overlaps = [ + "OVERLAP_EXISTING_SUBNET_RANGE" + ] + + depends_on = [ + google_compute_subnetwork.default + ] +} + +resource "google_compute_network" "default" { + name = "tf-test-internal-ranges%{random_suffix}" + auto_create_subnetworks = false +} + +resource "google_compute_subnetwork" "default" { + name = "overlapping-subnet" + ip_cidr_range = "10.0.0.0/24" + region = "us-central1" + network = google_compute_network.default.id +} +`, context) +} + +func testAccCheckNetworkConnectivityInternalRangeDestroyProducer(t *testing.T) func(s *terraform.State) error { + return func(s *terraform.State) error { + for name, rs := range s.RootModule().Resources { + if rs.Type != "google_network_connectivity_internal_range" { + continue + } + if strings.HasPrefix(name, "data.") { + continue + } + + config := acctest.GoogleProviderConfig(t) + + url, err := tpgresource.ReplaceVarsForTest(config, rs, "{{NetworkConnectivityBasePath}}projects/{{project}}/locations/global/internalRanges/{{name}}") + if err != nil { + return err + } + + billingProject := "" + + if config.BillingProject != "" { + billingProject = config.BillingProject + } + + _, err = 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: billingProject, + RawURL: url, + UserAgent: config.UserAgent, + }) + if err == nil { + return fmt.Errorf("NetworkConnectivityInternalRange still exists at %s", url) + } + } + + return nil + } +} diff --git a/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range_sweeper.go b/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range_sweeper.go new file mode 100644 index 0000000000..41be5ecb6e --- /dev/null +++ b/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range_sweeper.go @@ -0,0 +1,139 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. 
+// +// ---------------------------------------------------------------------------- + +package networkconnectivity + +import ( + "context" + "log" + "strings" + "testing" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/sweeper" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +func init() { + sweeper.AddTestSweepers("NetworkConnectivityInternalRange", testSweepNetworkConnectivityInternalRange) +} + +// At the time of writing, the CI only passes us-central1 as the region +func testSweepNetworkConnectivityInternalRange(region string) error { + resourceName := "NetworkConnectivityInternalRange" + log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) + + config, err := sweeper.SharedConfigForRegion(region) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) + return err + } + + err = config.LoadAndValidate(context.Background()) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) + return err + } + + t := &testing.T{} + billingId := envvar.GetTestBillingAccountFromEnv(t) + + // Setup variables to replace in list template + d := &tpgresource.ResourceDataMock{ + FieldsInSchema: map[string]interface{}{ + "project": config.Project, + "region": region, + "location": region, + "zone": "-", + "billing_account": billingId, + }, + } + + listTemplate := strings.Split("https://networkconnectivity.googleapis.com/v1/projects/{{project}}/locations/global/internalRanges", "?")[0] + listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) + return nil + } + + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + 
Project: config.Project, + RawURL: listUrl, + UserAgent: config.UserAgent, + }) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) + return nil + } + + resourceList, ok := res["internalRanges"] + if !ok { + log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") + return nil + } + + rl := resourceList.([]interface{}) + + log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) + // Keep count of items that aren't sweepable for logging. + nonPrefixCount := 0 + for _, ri := range rl { + obj := ri.(map[string]interface{}) + if obj["name"] == nil { + log.Printf("[INFO][SWEEPER_LOG] %s resource name was nil", resourceName) + return nil + } + + name := tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) + // Skip resources that shouldn't be swept + if !sweeper.IsSweepableTestResource(name) { + nonPrefixCount++ + continue + } + + deleteTemplate := "https://networkconnectivity.googleapis.com/v1/projects/{{project}}/locations/global/internalRanges/{{name}}" + deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) + return nil + } + deleteUrl = deleteUrl + name + + // Don't wait on operations as we may have a lot to delete + _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "DELETE", + Project: config.Project, + RawURL: deleteUrl, + UserAgent: config.UserAgent, + }) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) + } else { + log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) + } + } + + if nonPrefixCount > 0 { + log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) + } + + return nil +} diff --git
a/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range_test.go b/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range_test.go new file mode 100644 index 0000000000..f47b0b39bd --- /dev/null +++ b/google-beta/services/networkconnectivity/resource_network_connectivity_internal_range_test.go @@ -0,0 +1,161 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package networkconnectivity_test + +import ( + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "testing" +) + +func TestAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesBasicExample_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckNetworkConnectivityInternalRangeDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesBasicExample_full(context), + }, + { + ResourceName: "google_network_connectivity_internal_range.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name", "network", "labels", "terraform_labels"}, + }, + { + Config: testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesBasicExample_update(context), + }, + { + ResourceName: "google_network_connectivity_internal_range.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name", "network", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesBasicExample_full(context map[string]interface{}) string { + return acctest.Nprintf(` +resource 
"google_network_connectivity_internal_range" "default" { + name = "basic%{random_suffix}" + description = "Test internal range" + network = google_compute_network.default.self_link + usage = "FOR_VPC" + peering = "FOR_SELF" + target_cidr_range = ["10.0.0.0/8"] + prefix_length = 24 + overlaps = ["OVERLAP_ROUTE_RANGE"] + + labels = { + label-a: "b" + } +} + +resource "google_compute_network" "default" { + name = "tf-test-internal-ranges%{random_suffix}" + auto_create_subnetworks = false +} +`, context) +} + +func testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesBasicExample_update(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_network_connectivity_internal_range" "default" { + name = "updated-internal-range%{random_suffix}" + description = "Update internal range" + network = google_compute_network.default.self_link + usage = "FOR_VPC" + peering = "NOT_SHARED" + target_cidr_range = ["192.168.0.0/16"] + prefix_length = 22 + overlaps = ["OVERLAP_ROUTE_RANGE", "OVERLAP_EXISTING_SUBNET_RANGE"] + + labels = { + label-b: "c" + } +} + +resource "google_compute_network" "default" { + name = "tf-test-internal-ranges%{random_suffix}" + auto_create_subnetworks = false +} +`, context) +} + +func TestAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesExternalRangesExample_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckNetworkConnectivityInternalRangeDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesExternalRangesExample_full(context), + }, + { + ResourceName: "google_network_connectivity_internal_range.default", + ImportState: true, + ImportStateVerify: 
true, + ImportStateVerifyIgnore: []string{"name", "network", "labels", "terraform_labels"}, + }, + { + Config: testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesExternalRangesExample_update(context), + }, + { + ResourceName: "google_network_connectivity_internal_range.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name", "network", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesExternalRangesExample_full(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_network_connectivity_internal_range" "default" { + name = "basic%{random_suffix}" + description = "Test internal range for resources outside the VPC" + network = google_compute_network.default.self_link + usage = "EXTERNAL_TO_VPC" + peering = "FOR_SELF" + ip_cidr_range = "192.16.0.0/16" +} + +resource "google_compute_network" "default" { + name = "tf-test-internal-ranges%{random_suffix}" + auto_create_subnetworks = false +} +`, context) +} + +func testAccNetworkConnectivityInternalRange_networkConnectivityInternalRangesExternalRangesExample_update(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_network_connectivity_internal_range" "default" { + name = "updated-internal-range%{random_suffix}" + description = "Update internal range" + network = google_compute_network.default.self_link + usage = "FOR_VPC" + peering = "FOR_SELF" + ip_cidr_range = "10.0.0.0/24" +} + +resource "google_compute_network" "default" { + name = "tf-test-internal-ranges%{random_suffix}" + auto_create_subnetworks = false +} +`, context) +} diff --git a/google-beta/services/networkconnectivity/resource_network_connectivity_policy_based_route.go b/google-beta/services/networkconnectivity/resource_network_connectivity_policy_based_route.go index ebcc8e58db..d280588838 100644 --- 
a/google-beta/services/networkconnectivity/resource_network_connectivity_policy_based_route.go +++ b/google-beta/services/networkconnectivity/resource_network_connectivity_policy_based_route.go @@ -20,6 +20,7 @@ package networkconnectivity import ( "fmt" "log" + "net/http" "reflect" "time" @@ -329,6 +330,7 @@ func resourceNetworkConnectivityPolicyBasedRouteCreate(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -337,6 +339,7 @@ func resourceNetworkConnectivityPolicyBasedRouteCreate(d *schema.ResourceData, m UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PolicyBasedRoute: %s", err) @@ -389,12 +392,14 @@ func resourceNetworkConnectivityPolicyBasedRouteRead(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkConnectivityPolicyBasedRoute %q", d.Id())) @@ -485,6 +490,8 @@ func resourceNetworkConnectivityPolicyBasedRouteDelete(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PolicyBasedRoute %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -494,6 +501,7 @@ func resourceNetworkConnectivityPolicyBasedRouteDelete(d *schema.ResourceData, m UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "PolicyBasedRoute") diff --git a/google-beta/services/networkconnectivity/resource_network_connectivity_service_connection_policy.go 
b/google-beta/services/networkconnectivity/resource_network_connectivity_service_connection_policy.go index 2e27ce8853..0885d168f2 100644 --- a/google-beta/services/networkconnectivity/resource_network_connectivity_service_connection_policy.go +++ b/google-beta/services/networkconnectivity/resource_network_connectivity_service_connection_policy.go @@ -20,6 +20,7 @@ package networkconnectivity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -327,6 +328,7 @@ func resourceNetworkConnectivityServiceConnectionPolicyCreate(d *schema.Resource billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -335,6 +337,7 @@ func resourceNetworkConnectivityServiceConnectionPolicyCreate(d *schema.Resource UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServiceConnectionPolicy: %s", err) @@ -387,12 +390,14 @@ func resourceNetworkConnectivityServiceConnectionPolicyRead(d *schema.ResourceDa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkConnectivityServiceConnectionPolicy %q", d.Id())) @@ -494,6 +499,7 @@ func resourceNetworkConnectivityServiceConnectionPolicyUpdate(d *schema.Resource } log.Printf("[DEBUG] Updating ServiceConnectionPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -533,6 +539,7 @@ func resourceNetworkConnectivityServiceConnectionPolicyUpdate(d *schema.Resource UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -580,6 +587,8 @@ func 
resourceNetworkConnectivityServiceConnectionPolicyDelete(d *schema.Resource billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ServiceConnectionPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -589,6 +598,7 @@ func resourceNetworkConnectivityServiceConnectionPolicyDelete(d *schema.Resource UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServiceConnectionPolicy") diff --git a/google-beta/services/networkmanagement/resource_network_management_connectivity_test_resource.go b/google-beta/services/networkmanagement/resource_network_management_connectivity_test_resource.go index b571c68c93..e17083ef1f 100644 --- a/google-beta/services/networkmanagement/resource_network_management_connectivity_test_resource.go +++ b/google-beta/services/networkmanagement/resource_network_management_connectivity_test_resource.go @@ -20,6 +20,7 @@ package networkmanagement import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -324,6 +325,7 @@ func resourceNetworkManagementConnectivityTestCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -332,6 +334,7 @@ func resourceNetworkManagementConnectivityTestCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ConnectivityTest: %s", err) @@ -398,12 +401,14 @@ func resourceNetworkManagementConnectivityTestRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: 
headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkManagementConnectivityTest %q", d.Id())) @@ -503,6 +508,7 @@ func resourceNetworkManagementConnectivityTestUpdate(d *schema.ResourceData, met } log.Printf("[DEBUG] Updating ConnectivityTest %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -559,6 +565,7 @@ func resourceNetworkManagementConnectivityTestUpdate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -606,6 +613,8 @@ func resourceNetworkManagementConnectivityTestDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ConnectivityTest %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -615,6 +624,7 @@ func resourceNetworkManagementConnectivityTestDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ConnectivityTest") diff --git a/google-beta/services/networksecurity/resource_network_security_address_group.go b/google-beta/services/networksecurity/resource_network_security_address_group.go index 8b6fffc4ac..256d1fffa2 100644 --- a/google-beta/services/networksecurity/resource_network_security_address_group.go +++ b/google-beta/services/networksecurity/resource_network_security_address_group.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -191,6 +192,7 @@ func resourceNetworkSecurityAddressGroupCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -199,6 +201,7 @@ func 
resourceNetworkSecurityAddressGroupCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AddressGroup: %s", err) @@ -245,12 +248,14 @@ func resourceNetworkSecurityAddressGroupRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityAddressGroup %q", d.Id())) @@ -335,6 +340,7 @@ func resourceNetworkSecurityAddressGroupUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating AddressGroup %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -378,6 +384,7 @@ func resourceNetworkSecurityAddressGroupUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -420,6 +427,8 @@ func resourceNetworkSecurityAddressGroupDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AddressGroup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -429,6 +438,7 @@ func resourceNetworkSecurityAddressGroupDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AddressGroup") diff --git a/google-beta/services/networksecurity/resource_network_security_authorization_policy.go b/google-beta/services/networksecurity/resource_network_security_authorization_policy.go index cef7b7e15d..fce9d20c2a 100644 --- 
a/google-beta/services/networksecurity/resource_network_security_authorization_policy.go +++ b/google-beta/services/networksecurity/resource_network_security_authorization_policy.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -267,6 +268,7 @@ func resourceNetworkSecurityAuthorizationPolicyCreate(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -275,6 +277,7 @@ func resourceNetworkSecurityAuthorizationPolicyCreate(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AuthorizationPolicy: %s", err) @@ -327,12 +330,14 @@ func resourceNetworkSecurityAuthorizationPolicyRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityAuthorizationPolicy %q", d.Id())) @@ -417,6 +422,7 @@ func resourceNetworkSecurityAuthorizationPolicyUpdate(d *schema.ResourceData, me } log.Printf("[DEBUG] Updating AuthorizationPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -456,6 +462,7 @@ func resourceNetworkSecurityAuthorizationPolicyUpdate(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -503,6 +510,8 @@ func resourceNetworkSecurityAuthorizationPolicyDelete(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AuthorizationPolicy %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -512,6 +521,7 @@ func resourceNetworkSecurityAuthorizationPolicyDelete(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AuthorizationPolicy") diff --git a/google-beta/services/networksecurity/resource_network_security_client_tls_policy.go b/google-beta/services/networksecurity/resource_network_security_client_tls_policy.go index 8af5f0c5f8..ed52f8436e 100644 --- a/google-beta/services/networksecurity/resource_network_security_client_tls_policy.go +++ b/google-beta/services/networksecurity/resource_network_security_client_tls_policy.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -261,6 +262,7 @@ func resourceNetworkSecurityClientTlsPolicyCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -269,6 +271,7 @@ func resourceNetworkSecurityClientTlsPolicyCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ClientTlsPolicy: %s", err) @@ -321,12 +324,14 @@ func resourceNetworkSecurityClientTlsPolicyRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityClientTlsPolicy %q", d.Id())) @@ -420,6 +425,7 @@ func resourceNetworkSecurityClientTlsPolicyUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating 
ClientTlsPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -463,6 +469,7 @@ func resourceNetworkSecurityClientTlsPolicyUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -510,6 +517,8 @@ func resourceNetworkSecurityClientTlsPolicyDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ClientTlsPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -519,6 +528,7 @@ func resourceNetworkSecurityClientTlsPolicyDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ClientTlsPolicy") diff --git a/google-beta/services/networksecurity/resource_network_security_firewall_endpoint.go b/google-beta/services/networksecurity/resource_network_security_firewall_endpoint.go index 2939271dcc..9aa4f92a30 100644 --- a/google-beta/services/networksecurity/resource_network_security_firewall_endpoint.go +++ b/google-beta/services/networksecurity/resource_network_security_firewall_endpoint.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -176,6 +177,7 @@ func resourceNetworkSecurityFirewallEndpointCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -184,6 +186,7 @@ func resourceNetworkSecurityFirewallEndpointCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FirewallEndpoint: %s", err) @@ -230,12 +233,14 @@ func 
resourceNetworkSecurityFirewallEndpointRead(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityFirewallEndpoint %q", d.Id())) @@ -305,6 +310,7 @@ func resourceNetworkSecurityFirewallEndpointUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating FirewallEndpoint %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("billing_project_id") { @@ -336,6 +342,7 @@ func resourceNetworkSecurityFirewallEndpointUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -378,6 +385,8 @@ func resourceNetworkSecurityFirewallEndpointDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting FirewallEndpoint %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -387,6 +396,7 @@ func resourceNetworkSecurityFirewallEndpointDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FirewallEndpoint") diff --git a/google-beta/services/networksecurity/resource_network_security_firewall_endpoint_association.go b/google-beta/services/networksecurity/resource_network_security_firewall_endpoint_association.go index e117f17b16..616abab3b0 100644 --- a/google-beta/services/networksecurity/resource_network_security_firewall_endpoint_association.go +++ b/google-beta/services/networksecurity/resource_network_security_firewall_endpoint_association.go @@ -20,6 +20,7 @@ package networksecurity import 
( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -75,6 +76,15 @@ func ResourceNetworkSecurityFirewallEndpointAssociation() *schema.Resource { Required: true, Description: `The URL of the network that is being associated.`, }, + "disabled": { + Type: schema.TypeBool, + Optional: true, + Description: `Whether the association is disabled. True indicates that traffic will not be intercepted. + +~> **Note:** The API will reject the request if this value is set to true when creating the resource, +otherwise on an update the association can be disabled.`, + Default: false, + }, "labels": { Type: schema.TypeMap, Optional: true, @@ -167,6 +177,12 @@ func resourceNetworkSecurityFirewallEndpointAssociationCreate(d *schema.Resource } else if v, ok := d.GetOkExists("tls_inspection_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(tlsInspectionPolicyProp)) && (ok || !reflect.DeepEqual(v, tlsInspectionPolicyProp)) { obj["tlsInspectionPolicy"] = tlsInspectionPolicyProp } + disabledProp, err := expandNetworkSecurityFirewallEndpointAssociationDisabled(d.Get("disabled"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("disabled"); !tpgresource.IsEmptyValue(reflect.ValueOf(disabledProp)) && (ok || !reflect.DeepEqual(v, disabledProp)) { + obj["disabled"] = disabledProp + } labelsProp, err := expandNetworkSecurityFirewallEndpointAssociationEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err @@ -174,7 +190,7 @@ func resourceNetworkSecurityFirewallEndpointAssociationCreate(d *schema.Resource obj["labels"] = labelsProp } - url, err := tpgresource.ReplaceVars(d, config, "{{NetworkSecurityBasePath}}{{parent}}/locations/{{location}}/firewallEndpointAssociations?firewallEndpointId={{name}}") + url, err := tpgresource.ReplaceVars(d, config, "{{NetworkSecurityBasePath}}{{parent}}/locations/{{location}}/firewallEndpointAssociations?firewallEndpointAssociationId={{name}}") if err != nil { return err } @@ -187,6 +203,7 @@ func 
resourceNetworkSecurityFirewallEndpointAssociationCreate(d *schema.Resource billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -195,6 +212,7 @@ func resourceNetworkSecurityFirewallEndpointAssociationCreate(d *schema.Resource UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FirewallEndpointAssociation: %s", err) @@ -241,12 +259,14 @@ func resourceNetworkSecurityFirewallEndpointAssociationRead(d *schema.ResourceDa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityFirewallEndpointAssociation %q", d.Id())) @@ -264,6 +284,9 @@ func resourceNetworkSecurityFirewallEndpointAssociationRead(d *schema.ResourceDa if err := d.Set("labels", flattenNetworkSecurityFirewallEndpointAssociationLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading FirewallEndpointAssociation: %s", err) } + if err := d.Set("disabled", flattenNetworkSecurityFirewallEndpointAssociationDisabled(res["disabled"], d, config)); err != nil { + return fmt.Errorf("Error reading FirewallEndpointAssociation: %s", err) + } if err := d.Set("self_link", flattenNetworkSecurityFirewallEndpointAssociationSelfLink(res["selfLink"], d, config)); err != nil { return fmt.Errorf("Error reading FirewallEndpointAssociation: %s", err) } @@ -318,6 +341,12 @@ func resourceNetworkSecurityFirewallEndpointAssociationUpdate(d *schema.Resource } else if v, ok := d.GetOkExists("tls_inspection_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, tlsInspectionPolicyProp)) { 
 		obj["tlsInspectionPolicy"] = tlsInspectionPolicyProp
 	}
+	disabledProp, err := expandNetworkSecurityFirewallEndpointAssociationDisabled(d.Get("disabled"), d, config)
+	if err != nil {
+		return err
+	} else if v, ok := d.GetOkExists("disabled"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, disabledProp)) {
+		obj["disabled"] = disabledProp
+	}
 	labelsProp, err := expandNetworkSecurityFirewallEndpointAssociationEffectiveLabels(d.Get("effective_labels"), d, config)
 	if err != nil {
 		return err
@@ -331,6 +360,7 @@ func resourceNetworkSecurityFirewallEndpointAssociationUpdate(d *schema.Resource
 	}
 	log.Printf("[DEBUG] Updating FirewallEndpointAssociation %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 	updateMask := []string{}
 
 	if d.HasChange("firewall_endpoint") {
@@ -345,6 +375,10 @@ func resourceNetworkSecurityFirewallEndpointAssociationUpdate(d *schema.Resource
 		updateMask = append(updateMask, "tlsInspectionPolicy")
 	}
 
+	if d.HasChange("disabled") {
+		updateMask = append(updateMask, "disabled")
+	}
+
 	if d.HasChange("effective_labels") {
 		updateMask = append(updateMask, "labels")
 	}
@@ -370,6 +404,7 @@ func resourceNetworkSecurityFirewallEndpointAssociationUpdate(d *schema.Resource
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -412,6 +447,8 @@ func resourceNetworkSecurityFirewallEndpointAssociationDelete(d *schema.Resource
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting FirewallEndpointAssociation %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -421,6 +458,7 @@ func resourceNetworkSecurityFirewallEndpointAssociationDelete(d *schema.Resource
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "FirewallEndpointAssociation")
@@ -483,6 +521,10 @@ func flattenNetworkSecurityFirewallEndpointAssociationLabels(v interface{}, d *s
 	return transformed
 }
 
+func flattenNetworkSecurityFirewallEndpointAssociationDisabled(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
+	return v
+}
+
 func flattenNetworkSecurityFirewallEndpointAssociationSelfLink(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
 	return v
 }
@@ -534,6 +576,10 @@ func expandNetworkSecurityFirewallEndpointAssociationTlsInspectionPolicy(v inter
 	return v, nil
 }
 
+func expandNetworkSecurityFirewallEndpointAssociationDisabled(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
 func expandNetworkSecurityFirewallEndpointAssociationEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) {
 	if v == nil {
 		return map[string]string{}, nil
diff --git a/google-beta/services/networksecurity/resource_network_security_firewall_endpoint_association_test.go b/google-beta/services/networksecurity/resource_network_security_firewall_endpoint_association_test.go
index 0fe26d22a8..006f6a9de3 100644
--- a/google-beta/services/networksecurity/resource_network_security_firewall_endpoint_association_test.go
+++ b/google-beta/services/networksecurity/resource_network_security_firewall_endpoint_association_test.go
@@ -4,6 +4,7 @@ package networksecurity_test
 
 import (
 	"fmt"
+	"strconv"
 	"strings"
 	"testing"
 
@@ -17,11 +18,16 @@ import (
 )
 
 func TestAccNetworkSecurityFirewallEndpointAssociations_basic(t *testing.T) {
-	acctest.SkipIfVcr(t)
 	t.Parallel()
 
-	orgId := envvar.GetTestOrgFromEnv(t)
-	randomSuffix := acctest.RandString(t, 10)
+	context := map[string]interface{}{
+		"orgId":            envvar.GetTestOrgFromEnv(t),
+		"randomSuffix":     acctest.RandString(t, 10),
+		"billingProjectId": envvar.GetTestProjectFromEnv(),
+		"disabled":         strconv.FormatBool(false),
+	}
+
+	testResourceName := "google_network_security_firewall_endpoint_association.foobar"
 
 	acctest.VcrTest(t, resource.TestCase{
 		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
@@ -29,19 +35,19 @@ func TestAccNetworkSecurityFirewallEndpointAssociations_basic(t *testing.T) {
 		CheckDestroy:             testAccCheckNetworkSecurityFirewallEndpointDestroyProducer(t),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccNetworkSecurityFirewallEndpointAssociation_basic(randomSuffix, orgId),
+				Config: testAccNetworkSecurityFirewallEndpointAssociation_basic(context),
 			},
 			{
-				ResourceName:            "google_network_security_firewall_endpoint_association.foobar",
+				ResourceName:            testResourceName,
 				ImportState:             true,
 				ImportStateVerify:       true,
 				ImportStateVerifyIgnore: []string{"labels", "terraform_labels"},
 			},
 			{
-				Config: testAccNetworkSecurityFirewallEndpointAssociation_update(randomSuffix, orgId),
+				Config: testAccNetworkSecurityFirewallEndpointAssociation_update(context),
 			},
 			{
-				ResourceName:            "google_network_security_firewall_endpoint_association.foobar",
+				ResourceName:            testResourceName,
 				ImportState:             true,
 				ImportStateVerify:       true,
 				ImportStateVerifyIgnore: []string{"labels", "terraform_labels"},
@@ -50,66 +56,130 @@ func TestAccNetworkSecurityFirewallEndpointAssociations_basic(t *testing.T) {
 	})
 }
 
-func testAccNetworkSecurityFirewallEndpointAssociation_basic(randomSuffix string, orgId string) string {
-	return fmt.Sprintf(`
+func TestAccNetworkSecurityFirewallEndpointAssociations_disabled(t *testing.T) {
+	t.Parallel()
+
+	context := map[string]interface{}{
+		"orgId":            envvar.GetTestOrgFromEnv(t),
+		"randomSuffix":     acctest.RandString(t, 10),
+		"billingProjectId": envvar.GetTestProjectFromEnv(),
+	}
+
+	testResourceName := "google_network_security_firewall_endpoint_association.foobar"
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t),
+		CheckDestroy:             testAccCheckNetworkSecurityFirewallEndpointDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccNetworkSecurityFirewallEndpointAssociation_basic(context),
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr(testResourceName, "disabled", "false"),
+				),
+			},
+			{
+				ResourceName:            testResourceName,
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"labels", "terraform_labels"},
+			},
+			{
+				Config: testAccNetworkSecurityFirewallEndpointAssociation_update(testContextMapDisabledField(context, true)),
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr(testResourceName, "disabled", "true"),
+				),
+			},
+			{
+				ResourceName:            testResourceName,
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"labels", "terraform_labels"},
+			},
+			{
+				Config: testAccNetworkSecurityFirewallEndpointAssociation_update(testContextMapDisabledField(context, false)),
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr(testResourceName, "disabled", "false"),
+				),
+			},
+			{
+				ResourceName:            testResourceName,
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"labels", "terraform_labels"},
+			},
+		},
+	})
+}
+
+func testContextMapDisabledField(context map[string]interface{}, disabled bool) map[string]interface{} {
+	context["disabled"] = strconv.FormatBool(disabled)
+	return context
+}
+
+func testAccNetworkSecurityFirewallEndpointAssociation_basic(context map[string]interface{}) string {
+	return acctest.Nprintf(`
 resource "google_compute_network" "foobar" {
-  provider                = google-beta
-  name                    = "tf-test-my-vpc%s"
-  auto_create_subnetworks = false
+  provider                = google-beta
+  name                    = "tf-test-my-vpc%{randomSuffix}"
+  auto_create_subnetworks = false
 }
 
 resource "google_network_security_firewall_endpoint" "foobar" {
-  provider = google-beta
-  name     = "tf-test-my-firewall-endpoint%s"
-  parent   = "organizations/%s"
-  location = "us-central1-a"
+  provider           = google-beta
+  name               = "tf-test-my-firewall-endpoint%{randomSuffix}"
+  parent             = "organizations/%{orgId}"
+  location           = "us-central1-a"
+  billing_project_id = "%{billingProjectId}"
 }
 
 # TODO: add tlsInspectionPolicy once resource is ready
 resource "google_network_security_firewall_endpoint_association" "foobar" {
-  provider          = google-beta
-  name              = "tf-test-my-firewall-endpoint%s"
-  parent            = "organizations/%s"
-  location          = "us-central1-a"
-  firewall_endpoint = google_network_security_firewall_endpoint.foobar.id
-  network           = google_compute_network.foobar.id
-
-  labels = {
-    foo = "bar"
-  }
+  provider          = google-beta
+  name              = "tf-test-my-firewall-endpoint-association%{randomSuffix}"
+  parent            = "projects/%{billingProjectId}"
+  location          = "us-central1-a"
+  firewall_endpoint = google_network_security_firewall_endpoint.foobar.id
+  network           = google_compute_network.foobar.id
+
+  labels = {
+    foo = "bar"
+  }
 }
-`, randomSuffix, randomSuffix, orgId, randomSuffix, orgId)
+`, context)
 }
 
-func testAccNetworkSecurityFirewallEndpointAssociation_update(randomSuffix string, orgId string) string {
-	return fmt.Sprintf(`
+func testAccNetworkSecurityFirewallEndpointAssociation_update(context map[string]interface{}) string {
+	return acctest.Nprintf(`
 resource "google_compute_network" "foobar" {
-  provider                = google-beta
-  name                    = "tf-test-my-vpc%s"
-  auto_create_subnetworks = false
+  provider                = google-beta
+  name                    = "tf-test-my-vpc%{randomSuffix}"
+  auto_create_subnetworks = false
 }
 
 resource "google_network_security_firewall_endpoint" "foobar" {
-  provider = google-beta
-  name     = "tf-test-my-firewall-endpoint%s"
-  parent   = "organizations/%s"
-  location = "us-central1-a"
+  provider           = google-beta
+  name               = "tf-test-my-firewall-endpoint%{randomSuffix}"
+  parent             = "organizations/%{orgId}"
+  location           = "us-central1-a"
+  billing_project_id = "%{billingProjectId}"
 }
 
 # TODO: add tlsInspectionPolicy once resource is ready
 resource "google_network_security_firewall_endpoint_association" "foobar" {
-  provider          = google-beta
-  name              = "tf-test-my-firewall-endpoint%s"
-  parent            = "organizations/%s"
-  location          = "us-central1-a"
-  firewall_endpoint = google_network_security_firewall_endpoint.foobar.id
-  network           = google_compute_network.foobar.id
-
-  labels = {
-    foo = "bar-updated"
-  }
+  provider          = google-beta
+  name              = "tf-test-my-firewall-endpoint-association%{randomSuffix}"
+  parent            = "projects/%{billingProjectId}"
+  location          = "us-central1-a"
+  firewall_endpoint = google_network_security_firewall_endpoint.foobar.id
+  network           = google_compute_network.foobar.id
+  disabled          = "%{disabled}"
+
+  labels = {
+    foo = "bar-updated"
+  }
 }
-`, randomSuffix, randomSuffix, orgId, randomSuffix, orgId)
+`, context)
 }
 
 func testAccCheckNetworkSecurityFirewallEndpointAssociationDestroyProducer(t *testing.T) func(s *terraform.State) error {
diff --git a/google-beta/services/networksecurity/resource_network_security_gateway_security_policy.go b/google-beta/services/networksecurity/resource_network_security_gateway_security_policy.go
index 0d8f919324..6fbb16f99f 100644
--- a/google-beta/services/networksecurity/resource_network_security_gateway_security_policy.go
+++ b/google-beta/services/networksecurity/resource_network_security_gateway_security_policy.go
@@ -20,6 +20,7 @@ package networksecurity
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -146,6 +147,7 @@ func resourceNetworkSecurityGatewaySecurityPolicyCreate(d *schema.ResourceData,
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -154,6 +156,7 @@ func resourceNetworkSecurityGatewaySecurityPolicyCreate(d *schema.ResourceData,
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating GatewaySecurityPolicy: %s", err)
@@ -206,12 +209,14 @@ func resourceNetworkSecurityGatewaySecurityPolicyRead(d *schema.ResourceData, me
 		billingProject = bp
 	}
 
+
headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityGatewaySecurityPolicy %q", d.Id())) @@ -272,6 +277,7 @@ func resourceNetworkSecurityGatewaySecurityPolicyUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating GatewaySecurityPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -303,6 +309,7 @@ func resourceNetworkSecurityGatewaySecurityPolicyUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -350,6 +357,8 @@ func resourceNetworkSecurityGatewaySecurityPolicyDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting GatewaySecurityPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -359,6 +368,7 @@ func resourceNetworkSecurityGatewaySecurityPolicyDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "GatewaySecurityPolicy") diff --git a/google-beta/services/networksecurity/resource_network_security_gateway_security_policy_rule.go b/google-beta/services/networksecurity/resource_network_security_gateway_security_policy_rule.go index 84b0a2ca40..0d6449a201 100644 --- a/google-beta/services/networksecurity/resource_network_security_gateway_security_policy_rule.go +++ b/google-beta/services/networksecurity/resource_network_security_gateway_security_policy_rule.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -210,6 +211,7 @@ func 
resourceNetworkSecurityGatewaySecurityPolicyRuleCreate(d *schema.ResourceDa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -218,6 +220,7 @@ func resourceNetworkSecurityGatewaySecurityPolicyRuleCreate(d *schema.ResourceDa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating GatewaySecurityPolicyRule: %s", err) @@ -270,12 +273,14 @@ func resourceNetworkSecurityGatewaySecurityPolicyRuleRead(d *schema.ResourceData billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityGatewaySecurityPolicyRule %q", d.Id())) @@ -384,6 +389,7 @@ func resourceNetworkSecurityGatewaySecurityPolicyRuleUpdate(d *schema.ResourceDa } log.Printf("[DEBUG] Updating GatewaySecurityPolicyRule %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("enabled") { @@ -435,6 +441,7 @@ func resourceNetworkSecurityGatewaySecurityPolicyRuleUpdate(d *schema.ResourceDa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -482,6 +489,8 @@ func resourceNetworkSecurityGatewaySecurityPolicyRuleDelete(d *schema.ResourceDa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting GatewaySecurityPolicyRule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -491,6 +500,7 @@ func resourceNetworkSecurityGatewaySecurityPolicyRuleDelete(d *schema.ResourceDa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil 
{ return transport_tpg.HandleNotFoundError(err, d, "GatewaySecurityPolicyRule") diff --git a/google-beta/services/networksecurity/resource_network_security_security_profile.go b/google-beta/services/networksecurity/resource_network_security_security_profile.go index f9ce32cd3a..0a1ec50d6b 100644 --- a/google-beta/services/networksecurity/resource_network_security_security_profile.go +++ b/google-beta/services/networksecurity/resource_network_security_security_profile.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -242,6 +243,7 @@ func resourceNetworkSecuritySecurityProfileCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -250,6 +252,7 @@ func resourceNetworkSecuritySecurityProfileCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating SecurityProfile: %s", err) @@ -296,12 +299,14 @@ func resourceNetworkSecuritySecurityProfileRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecuritySecurityProfile %q", d.Id())) @@ -377,6 +382,7 @@ func resourceNetworkSecuritySecurityProfileUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating SecurityProfile %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -412,6 +418,7 @@ func resourceNetworkSecuritySecurityProfileUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -454,6 +461,8 @@ func resourceNetworkSecuritySecurityProfileDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting SecurityProfile %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -463,6 +472,7 @@ func resourceNetworkSecuritySecurityProfileDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "SecurityProfile") diff --git a/google-beta/services/networksecurity/resource_network_security_security_profile_group.go b/google-beta/services/networksecurity/resource_network_security_security_profile_group.go index c6d31ad673..69217ab2b3 100644 --- a/google-beta/services/networksecurity/resource_network_security_security_profile_group.go +++ b/google-beta/services/networksecurity/resource_network_security_security_profile_group.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -170,6 +171,7 @@ func resourceNetworkSecuritySecurityProfileGroupCreate(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -178,6 +180,7 @@ func resourceNetworkSecuritySecurityProfileGroupCreate(d *schema.ResourceData, m UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating SecurityProfileGroup: %s", err) @@ -224,12 +227,14 @@ func resourceNetworkSecuritySecurityProfileGroupRead(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: 
billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecuritySecurityProfileGroup %q", d.Id())) @@ -299,6 +304,7 @@ func resourceNetworkSecuritySecurityProfileGroupUpdate(d *schema.ResourceData, m } log.Printf("[DEBUG] Updating SecurityProfileGroup %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -334,6 +340,7 @@ func resourceNetworkSecuritySecurityProfileGroupUpdate(d *schema.ResourceData, m UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -376,6 +383,8 @@ func resourceNetworkSecuritySecurityProfileGroupDelete(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting SecurityProfileGroup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -385,6 +394,7 @@ func resourceNetworkSecuritySecurityProfileGroupDelete(d *schema.ResourceData, m UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "SecurityProfileGroup") diff --git a/google-beta/services/networksecurity/resource_network_security_server_tls_policy.go b/google-beta/services/networksecurity/resource_network_security_server_tls_policy.go index a8bd8bd906..e4f9fa9c19 100644 --- a/google-beta/services/networksecurity/resource_network_security_server_tls_policy.go +++ b/google-beta/services/networksecurity/resource_network_security_server_tls_policy.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -292,6 +293,7 @@ func resourceNetworkSecurityServerTlsPolicyCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -300,6 +302,7 @@ func resourceNetworkSecurityServerTlsPolicyCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServerTlsPolicy: %s", err) @@ -352,12 +355,14 @@ func resourceNetworkSecurityServerTlsPolicyRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityServerTlsPolicy %q", d.Id())) @@ -451,6 +456,7 @@ func resourceNetworkSecurityServerTlsPolicyUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating ServerTlsPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -494,6 +500,7 @@ func resourceNetworkSecurityServerTlsPolicyUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -541,6 +548,8 @@ func resourceNetworkSecurityServerTlsPolicyDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ServerTlsPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -550,6 +559,7 @@ func resourceNetworkSecurityServerTlsPolicyDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServerTlsPolicy") diff --git a/google-beta/services/networksecurity/resource_network_security_tls_inspection_policy.go 
b/google-beta/services/networksecurity/resource_network_security_tls_inspection_policy.go index 6e6f25e47f..2fe953a072 100644 --- a/google-beta/services/networksecurity/resource_network_security_tls_inspection_policy.go +++ b/google-beta/services/networksecurity/resource_network_security_tls_inspection_policy.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -145,6 +146,7 @@ func resourceNetworkSecurityTlsInspectionPolicyCreate(d *schema.ResourceData, me billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -153,6 +155,7 @@ func resourceNetworkSecurityTlsInspectionPolicyCreate(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TlsInspectionPolicy: %s", err) @@ -205,12 +208,14 @@ func resourceNetworkSecurityTlsInspectionPolicyRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityTlsInspectionPolicy %q", d.Id())) @@ -280,6 +285,7 @@ func resourceNetworkSecurityTlsInspectionPolicyUpdate(d *schema.ResourceData, me } log.Printf("[DEBUG] Updating TlsInspectionPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -315,6 +321,7 @@ func resourceNetworkSecurityTlsInspectionPolicyUpdate(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -362,6 +369,8 @@ func resourceNetworkSecurityTlsInspectionPolicyDelete(d *schema.ResourceData, 
me billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TlsInspectionPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -371,6 +380,7 @@ func resourceNetworkSecurityTlsInspectionPolicyDelete(d *schema.ResourceData, me UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TlsInspectionPolicy") diff --git a/google-beta/services/networksecurity/resource_network_security_url_lists.go b/google-beta/services/networksecurity/resource_network_security_url_lists.go index 4d1b6dc011..bea99f5bfa 100644 --- a/google-beta/services/networksecurity/resource_network_security_url_lists.go +++ b/google-beta/services/networksecurity/resource_network_security_url_lists.go @@ -20,6 +20,7 @@ package networksecurity import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -142,6 +143,7 @@ func resourceNetworkSecurityUrlListsCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -150,6 +152,7 @@ func resourceNetworkSecurityUrlListsCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating UrlLists: %s", err) @@ -202,12 +205,14 @@ func resourceNetworkSecurityUrlListsRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkSecurityUrlLists %q", d.Id())) @@ -268,6 +273,7 @@ func 
resourceNetworkSecurityUrlListsUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating UrlLists %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -299,6 +305,7 @@ func resourceNetworkSecurityUrlListsUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -346,6 +353,8 @@ func resourceNetworkSecurityUrlListsDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting UrlLists %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -355,6 +364,7 @@ func resourceNetworkSecurityUrlListsDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "UrlLists") diff --git a/google-beta/services/networkservices/resource_network_services_edge_cache_keyset.go b/google-beta/services/networkservices/resource_network_services_edge_cache_keyset.go index b4f5204785..56e805389f 100644 --- a/google-beta/services/networkservices/resource_network_services_edge_cache_keyset.go +++ b/google-beta/services/networkservices/resource_network_services_edge_cache_keyset.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -215,6 +216,7 @@ func resourceNetworkServicesEdgeCacheKeysetCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -223,6 +225,7 @@ func resourceNetworkServicesEdgeCacheKeysetCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorAbortPredicates: 
[]transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -276,12 +279,14 @@ func resourceNetworkServicesEdgeCacheKeysetRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -361,6 +366,7 @@ func resourceNetworkServicesEdgeCacheKeysetUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating EdgeCacheKeyset %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -400,6 +406,7 @@ func resourceNetworkServicesEdgeCacheKeysetUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -448,6 +455,8 @@ func resourceNetworkServicesEdgeCacheKeysetDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EdgeCacheKeyset %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -457,6 +466,7 @@ func resourceNetworkServicesEdgeCacheKeysetDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { diff --git a/google-beta/services/networkservices/resource_network_services_edge_cache_origin.go b/google-beta/services/networkservices/resource_network_services_edge_cache_origin.go index a6ed4d01a7..190f796817 100644 --- 
a/google-beta/services/networkservices/resource_network_services_edge_cache_origin.go +++ b/google-beta/services/networkservices/resource_network_services_edge_cache_origin.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -464,6 +465,7 @@ func resourceNetworkServicesEdgeCacheOriginCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -472,6 +474,7 @@ func resourceNetworkServicesEdgeCacheOriginCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EdgeCacheOrigin: %s", err) @@ -524,12 +527,14 @@ func resourceNetworkServicesEdgeCacheOriginRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesEdgeCacheOrigin %q", d.Id())) @@ -680,6 +685,7 @@ func resourceNetworkServicesEdgeCacheOriginUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating EdgeCacheOrigin %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -751,6 +757,7 @@ func resourceNetworkServicesEdgeCacheOriginUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -798,6 +805,8 @@ func resourceNetworkServicesEdgeCacheOriginDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EdgeCacheOrigin %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -807,6 +816,7 @@ func resourceNetworkServicesEdgeCacheOriginDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EdgeCacheOrigin") diff --git a/google-beta/services/networkservices/resource_network_services_edge_cache_service.go b/google-beta/services/networkservices/resource_network_services_edge_cache_service.go index 656ef2aebf..d2fe62a371 100644 --- a/google-beta/services/networkservices/resource_network_services_edge_cache_service.go +++ b/google-beta/services/networkservices/resource_network_services_edge_cache_service.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -1111,6 +1112,7 @@ func resourceNetworkServicesEdgeCacheServiceCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1119,6 +1121,7 @@ func resourceNetworkServicesEdgeCacheServiceCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EdgeCacheService: %s", err) @@ -1171,12 +1174,14 @@ func resourceNetworkServicesEdgeCacheServiceRead(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesEdgeCacheService %q", d.Id())) @@ -1315,6 +1320,7 @@ func resourceNetworkServicesEdgeCacheServiceUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] 
Updating EdgeCacheService %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -1378,6 +1384,7 @@ func resourceNetworkServicesEdgeCacheServiceUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1425,6 +1432,8 @@ func resourceNetworkServicesEdgeCacheServiceDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EdgeCacheService %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1434,6 +1443,7 @@ func resourceNetworkServicesEdgeCacheServiceDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EdgeCacheService") diff --git a/google-beta/services/networkservices/resource_network_services_endpoint_policy.go b/google-beta/services/networkservices/resource_network_services_endpoint_policy.go index 879a6aae4a..2122d02f08 100644 --- a/google-beta/services/networkservices/resource_network_services_endpoint_policy.go +++ b/google-beta/services/networkservices/resource_network_services_endpoint_policy.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -269,6 +270,7 @@ func resourceNetworkServicesEndpointPolicyCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -277,6 +279,7 @@ func resourceNetworkServicesEndpointPolicyCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EndpointPolicy: %s", err) @@ -329,12 +332,14 @@ func 
resourceNetworkServicesEndpointPolicyRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesEndpointPolicy %q", d.Id())) @@ -455,6 +460,7 @@ func resourceNetworkServicesEndpointPolicyUpdate(d *schema.ResourceData, meta in } log.Printf("[DEBUG] Updating EndpointPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -510,6 +516,7 @@ func resourceNetworkServicesEndpointPolicyUpdate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -557,6 +564,8 @@ func resourceNetworkServicesEndpointPolicyDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EndpointPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -566,6 +575,7 @@ func resourceNetworkServicesEndpointPolicyDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EndpointPolicy") diff --git a/google-beta/services/networkservices/resource_network_services_gateway.go b/google-beta/services/networkservices/resource_network_services_gateway.go index 7ac24bbbad..6c509e2d7e 100644 --- a/google-beta/services/networkservices/resource_network_services_gateway.go +++ b/google-beta/services/networkservices/resource_network_services_gateway.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -415,6 +416,7 @@ func 
resourceNetworkServicesGatewayCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -423,6 +425,7 @@ func resourceNetworkServicesGatewayCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Gateway: %s", err) @@ -475,12 +478,14 @@ func resourceNetworkServicesGatewayRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesGateway %q", d.Id())) @@ -589,6 +594,7 @@ func resourceNetworkServicesGatewayUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating Gateway %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -624,6 +630,7 @@ func resourceNetworkServicesGatewayUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -671,6 +678,8 @@ func resourceNetworkServicesGatewayDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Gateway %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -680,6 +689,7 @@ func resourceNetworkServicesGatewayDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Gateway") diff 
--git a/google-beta/services/networkservices/resource_network_services_grpc_route.go b/google-beta/services/networkservices/resource_network_services_grpc_route.go index f9ced184dd..ee1eadd05e 100644 --- a/google-beta/services/networkservices/resource_network_services_grpc_route.go +++ b/google-beta/services/networkservices/resource_network_services_grpc_route.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -380,6 +381,7 @@ func resourceNetworkServicesGrpcRouteCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -388,6 +390,7 @@ func resourceNetworkServicesGrpcRouteCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating GrpcRoute: %s", err) @@ -440,12 +443,14 @@ func resourceNetworkServicesGrpcRouteRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesGrpcRoute %q", d.Id())) @@ -551,6 +556,7 @@ func resourceNetworkServicesGrpcRouteUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating GrpcRoute %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -598,6 +604,7 @@ func resourceNetworkServicesGrpcRouteUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -645,6 +652,8 @@ func resourceNetworkServicesGrpcRouteDelete(d 
*schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting GrpcRoute %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -654,6 +663,7 @@ func resourceNetworkServicesGrpcRouteDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "GrpcRoute") diff --git a/google-beta/services/networkservices/resource_network_services_http_route.go b/google-beta/services/networkservices/resource_network_services_http_route.go index 7852a4f793..bc1a81c81f 100644 --- a/google-beta/services/networkservices/resource_network_services_http_route.go +++ b/google-beta/services/networkservices/resource_network_services_http_route.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -687,6 +688,7 @@ func resourceNetworkServicesHttpRouteCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -695,6 +697,7 @@ func resourceNetworkServicesHttpRouteCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating HttpRoute: %s", err) @@ -747,12 +750,14 @@ func resourceNetworkServicesHttpRouteRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesHttpRoute %q", d.Id())) @@ -858,6 +863,7 @@ func 
resourceNetworkServicesHttpRouteUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating HttpRoute %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -905,6 +911,7 @@ func resourceNetworkServicesHttpRouteUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -952,6 +959,8 @@ func resourceNetworkServicesHttpRouteDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting HttpRoute %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -961,6 +970,7 @@ func resourceNetworkServicesHttpRouteDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "HttpRoute") diff --git a/google-beta/services/networkservices/resource_network_services_mesh.go b/google-beta/services/networkservices/resource_network_services_mesh.go index 21c9539b34..083ed57ad6 100644 --- a/google-beta/services/networkservices/resource_network_services_mesh.go +++ b/google-beta/services/networkservices/resource_network_services_mesh.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -167,6 +168,7 @@ func resourceNetworkServicesMeshCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -175,6 +177,7 @@ func resourceNetworkServicesMeshCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Mesh: %s", err) @@ -227,12 
+230,14 @@ func resourceNetworkServicesMeshRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesMesh %q", d.Id())) @@ -311,6 +316,7 @@ func resourceNetworkServicesMeshUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Mesh %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -346,6 +352,7 @@ func resourceNetworkServicesMeshUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -393,6 +400,8 @@ func resourceNetworkServicesMeshDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Mesh %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -402,6 +411,7 @@ func resourceNetworkServicesMeshDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Mesh") diff --git a/google-beta/services/networkservices/resource_network_services_service_binding.go b/google-beta/services/networkservices/resource_network_services_service_binding.go index e2e6f6567b..0b7f52b983 100644 --- a/google-beta/services/networkservices/resource_network_services_service_binding.go +++ b/google-beta/services/networkservices/resource_network_services_service_binding.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "time" @@ -172,6 +173,7 @@ func 
resourceNetworkServicesServiceBindingCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -180,6 +182,7 @@ func resourceNetworkServicesServiceBindingCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ServiceBinding: %s", err) @@ -232,12 +235,14 @@ func resourceNetworkServicesServiceBindingRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesServiceBinding %q", d.Id())) @@ -304,6 +309,8 @@ func resourceNetworkServicesServiceBindingDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ServiceBinding %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -313,6 +320,7 @@ func resourceNetworkServicesServiceBindingDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ServiceBinding") diff --git a/google-beta/services/networkservices/resource_network_services_tcp_route.go b/google-beta/services/networkservices/resource_network_services_tcp_route.go index 7c441aa53c..a87d4a6a74 100644 --- a/google-beta/services/networkservices/resource_network_services_tcp_route.go +++ b/google-beta/services/networkservices/resource_network_services_tcp_route.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" 
"reflect" "strings" "time" @@ -258,6 +259,7 @@ func resourceNetworkServicesTcpRouteCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -266,6 +268,7 @@ func resourceNetworkServicesTcpRouteCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TcpRoute: %s", err) @@ -318,12 +321,14 @@ func resourceNetworkServicesTcpRouteRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesTcpRoute %q", d.Id())) @@ -420,6 +425,7 @@ func resourceNetworkServicesTcpRouteUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating TcpRoute %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -463,6 +469,7 @@ func resourceNetworkServicesTcpRouteUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -510,6 +517,8 @@ func resourceNetworkServicesTcpRouteDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TcpRoute %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -519,6 +528,7 @@ func resourceNetworkServicesTcpRouteDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, "TcpRoute") diff --git a/google-beta/services/networkservices/resource_network_services_tls_route.go b/google-beta/services/networkservices/resource_network_services_tls_route.go index 1d7f1c1fa8..9d933867db 100644 --- a/google-beta/services/networkservices/resource_network_services_tls_route.go +++ b/google-beta/services/networkservices/resource_network_services_tls_route.go @@ -20,6 +20,7 @@ package networkservices import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -225,6 +226,7 @@ func resourceNetworkServicesTlsRouteCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -233,6 +235,7 @@ func resourceNetworkServicesTlsRouteCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TlsRoute: %s", err) @@ -285,12 +288,14 @@ func resourceNetworkServicesTlsRouteRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NetworkServicesTlsRoute %q", d.Id())) @@ -372,6 +377,7 @@ func resourceNetworkServicesTlsRouteUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating TlsRoute %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -411,6 +417,7 @@ func resourceNetworkServicesTlsRouteUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -458,6 +465,8 @@ 
func resourceNetworkServicesTlsRouteDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TlsRoute %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -467,6 +476,7 @@ func resourceNetworkServicesTlsRouteDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TlsRoute") diff --git a/google-beta/services/notebooks/resource_notebooks_environment.go b/google-beta/services/notebooks/resource_notebooks_environment.go index 698985e77b..8381d9b859 100644 --- a/google-beta/services/notebooks/resource_notebooks_environment.go +++ b/google-beta/services/notebooks/resource_notebooks_environment.go @@ -20,6 +20,7 @@ package notebooks import ( "fmt" "log" + "net/http" "reflect" "time" @@ -204,6 +205,7 @@ func resourceNotebooksEnvironmentCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -212,6 +214,7 @@ func resourceNotebooksEnvironmentCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Environment: %s", err) @@ -274,12 +277,14 @@ func resourceNotebooksEnvironmentRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NotebooksEnvironment %q", d.Id())) @@ -364,6 +369,7 @@ func 
resourceNotebooksEnvironmentUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating Environment %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -378,6 +384,7 @@ func resourceNotebooksEnvironmentUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -424,6 +431,8 @@ func resourceNotebooksEnvironmentDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Environment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -433,6 +442,7 @@ func resourceNotebooksEnvironmentDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Environment") diff --git a/google-beta/services/notebooks/resource_notebooks_instance.go b/google-beta/services/notebooks/resource_notebooks_instance.go index 1539db95d0..f2cd3c378f 100644 --- a/google-beta/services/notebooks/resource_notebooks_instance.go +++ b/google-beta/services/notebooks/resource_notebooks_instance.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "sort" "strings" @@ -769,6 +770,7 @@ func resourceNotebooksInstanceCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -777,6 +779,7 @@ func resourceNotebooksInstanceCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error 
creating Instance: %s", err) @@ -853,12 +856,14 @@ func resourceNotebooksInstanceRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NotebooksInstance %q", d.Id())) @@ -987,6 +992,8 @@ func resourceNotebooksInstanceUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -1000,6 +1007,7 @@ func resourceNotebooksInstanceUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Instance %q: %s", d.Id(), err) @@ -1034,6 +1042,8 @@ func resourceNotebooksInstanceUpdate(d *schema.ResourceData, meta interface{}) e return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -1047,6 +1057,7 @@ func resourceNotebooksInstanceUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Instance %q: %s", d.Id(), err) @@ -1115,6 +1126,8 @@ func resourceNotebooksInstanceDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1124,6 +1137,7 @@ func 
resourceNotebooksInstanceDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Instance") diff --git a/google-beta/services/notebooks/resource_notebooks_location.go b/google-beta/services/notebooks/resource_notebooks_location.go index fbe9aa008e..404d27432b 100644 --- a/google-beta/services/notebooks/resource_notebooks_location.go +++ b/google-beta/services/notebooks/resource_notebooks_location.go @@ -20,6 +20,7 @@ package notebooks import ( "fmt" "log" + "net/http" "reflect" "time" @@ -106,6 +107,7 @@ func resourceNotebooksLocationCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -114,6 +116,7 @@ func resourceNotebooksLocationCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Location: %s", err) @@ -180,12 +183,14 @@ func resourceNotebooksLocationRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NotebooksLocation %q", d.Id())) @@ -234,6 +239,7 @@ func resourceNotebooksLocationUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Location %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -248,6 +254,7 @@ func 
resourceNotebooksLocationUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -294,6 +301,8 @@ func resourceNotebooksLocationDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Location %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -303,6 +312,7 @@ func resourceNotebooksLocationDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Location") diff --git a/google-beta/services/notebooks/resource_notebooks_runtime.go b/google-beta/services/notebooks/resource_notebooks_runtime.go index 83353b77ac..ad37d7156f 100644 --- a/google-beta/services/notebooks/resource_notebooks_runtime.go +++ b/google-beta/services/notebooks/resource_notebooks_runtime.go @@ -20,6 +20,7 @@ package notebooks import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -715,6 +716,7 @@ func resourceNotebooksRuntimeCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -723,6 +725,7 @@ func resourceNotebooksRuntimeCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Runtime: %s", err) @@ -785,12 +788,14 @@ func resourceNotebooksRuntimeRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: 
userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("NotebooksRuntime %q", d.Id())) @@ -878,6 +883,7 @@ func resourceNotebooksRuntimeUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Runtime %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("virtual_machine") { @@ -986,6 +992,7 @@ func resourceNotebooksRuntimeUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1033,6 +1040,8 @@ func resourceNotebooksRuntimeDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Runtime %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1042,6 +1051,7 @@ func resourceNotebooksRuntimeDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Runtime") diff --git a/google-beta/services/orgpolicy/resource_org_policy_custom_constraint.go b/google-beta/services/orgpolicy/resource_org_policy_custom_constraint.go index 1813920ecd..0f0c9c8b61 100644 --- a/google-beta/services/orgpolicy/resource_org_policy_custom_constraint.go +++ b/google-beta/services/orgpolicy/resource_org_policy_custom_constraint.go @@ -20,6 +20,7 @@ package orgpolicy import ( "fmt" "log" + "net/http" "reflect" "time" @@ -174,6 +175,7 @@ func resourceOrgPolicyCustomConstraintCreate(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -182,6 +184,7 @@ func resourceOrgPolicyCustomConstraintCreate(d *schema.ResourceData, meta interf UserAgent: 
userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating CustomConstraint: %s", err) @@ -218,12 +221,14 @@ func resourceOrgPolicyCustomConstraintRead(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("OrgPolicyCustomConstraint %q", d.Id())) @@ -309,6 +314,7 @@ func resourceOrgPolicyCustomConstraintUpdate(d *schema.ResourceData, meta interf } log.Printf("[DEBUG] Updating CustomConstraint %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -323,6 +329,7 @@ func resourceOrgPolicyCustomConstraintUpdate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -355,6 +362,8 @@ func resourceOrgPolicyCustomConstraintDelete(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting CustomConstraint %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -364,6 +373,7 @@ func resourceOrgPolicyCustomConstraintDelete(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "CustomConstraint") diff --git a/google-beta/services/osconfig/resource_os_config_guest_policies.go b/google-beta/services/osconfig/resource_os_config_guest_policies.go index 215364bfb6..0c28a2d77c 100644 --- 
a/google-beta/services/osconfig/resource_os_config_guest_policies.go +++ b/google-beta/services/osconfig/resource_os_config_guest_policies.go @@ -20,6 +20,7 @@ package osconfig import ( "fmt" "log" + "net/http" "reflect" "time" @@ -949,6 +950,7 @@ func resourceOSConfigGuestPoliciesCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -957,6 +959,7 @@ func resourceOSConfigGuestPoliciesCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating GuestPolicies: %s", err) @@ -1017,12 +1020,14 @@ func resourceOSConfigGuestPoliciesRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("OSConfigGuestPolicies %q", d.Id())) @@ -1122,6 +1127,7 @@ func resourceOSConfigGuestPoliciesUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating GuestPolicies %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -1136,6 +1142,7 @@ func resourceOSConfigGuestPoliciesUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1174,6 +1181,8 @@ func resourceOSConfigGuestPoliciesDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting GuestPolicies %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1183,6 +1192,7 @@ func resourceOSConfigGuestPoliciesDelete(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "GuestPolicies") diff --git a/google-beta/services/osconfig/resource_os_config_patch_deployment.go b/google-beta/services/osconfig/resource_os_config_patch_deployment.go index 89e1c0fc8c..aee7a47b9c 100644 --- a/google-beta/services/osconfig/resource_os_config_patch_deployment.go +++ b/google-beta/services/osconfig/resource_os_config_patch_deployment.go @@ -20,6 +20,7 @@ package osconfig import ( "fmt" "log" + "net/http" "reflect" "time" @@ -1037,6 +1038,7 @@ func resourceOSConfigPatchDeploymentCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -1045,6 +1047,7 @@ func resourceOSConfigPatchDeploymentCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PatchDeployment: %s", err) @@ -1108,12 +1111,14 @@ func resourceOSConfigPatchDeploymentRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("OSConfigPatchDeployment %q", d.Id())) @@ -1199,6 +1204,8 @@ func resourceOSConfigPatchDeploymentDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PatchDeployment %q", d.Id()) 
res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1208,6 +1215,7 @@ func resourceOSConfigPatchDeploymentDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "PatchDeployment") diff --git a/google-beta/services/oslogin/resource_os_login_ssh_public_key.go b/google-beta/services/oslogin/resource_os_login_ssh_public_key.go index 2b15892f72..0b99467a3a 100644 --- a/google-beta/services/oslogin/resource_os_login_ssh_public_key.go +++ b/google-beta/services/oslogin/resource_os_login_ssh_public_key.go @@ -20,6 +20,7 @@ package oslogin import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -115,6 +116,7 @@ func resourceOSLoginSSHPublicKeyCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) // Don't use `GetProject()` because we only want to set the project in the URL // if the user set it explicitly on the resource. 
if p, ok := d.GetOk("project"); ok { @@ -131,6 +133,7 @@ func resourceOSLoginSSHPublicKeyCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating SSHPublicKey: %s", err) @@ -190,12 +193,14 @@ func resourceOSLoginSSHPublicKeyRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("OSLoginSSHPublicKey %q", d.Id())) @@ -237,6 +242,7 @@ func resourceOSLoginSSHPublicKeyUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating SSHPublicKey %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("expiration_time_usec") { @@ -264,6 +270,7 @@ func resourceOSLoginSSHPublicKeyUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -298,6 +305,8 @@ func resourceOSLoginSSHPublicKeyDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting SSHPublicKey %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -307,6 +316,7 @@ func resourceOSLoginSSHPublicKeyDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "SSHPublicKey") diff --git a/google-beta/services/parallelstore/parallelstore_operation.go b/google-beta/services/parallelstore/parallelstore_operation.go new file mode 100644 index 
0000000000..16347a0679 --- /dev/null +++ b/google-beta/services/parallelstore/parallelstore_operation.go @@ -0,0 +1,92 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. +// +// ---------------------------------------------------------------------------- + +package parallelstore + +import ( + "encoding/json" + "errors" + "fmt" + "time" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +type ParallelstoreOperationWaiter struct { + Config *transport_tpg.Config + UserAgent string + Project string + tpgresource.CommonOperationWaiter +} + +func (w *ParallelstoreOperationWaiter) QueryOp() (interface{}, error) { + if w == nil { + return nil, fmt.Errorf("Cannot query operation, it's unset or nil.") + } + // Returns the proper get. 
+ url := fmt.Sprintf("%s%s", w.Config.ParallelstoreBasePath, w.CommonOperationWaiter.Op.Name) + + return transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: w.Config, + Method: "GET", + Project: w.Project, + RawURL: url, + UserAgent: w.UserAgent, + }) +} + +func createParallelstoreWaiter(config *transport_tpg.Config, op map[string]interface{}, project, activity, userAgent string) (*ParallelstoreOperationWaiter, error) { + w := &ParallelstoreOperationWaiter{ + Config: config, + UserAgent: userAgent, + Project: project, + } + if err := w.CommonOperationWaiter.SetOp(op); err != nil { + return nil, err + } + return w, nil +} + +// nolint: deadcode,unused +func ParallelstoreOperationWaitTimeWithResponse(config *transport_tpg.Config, op map[string]interface{}, response *map[string]interface{}, project, activity, userAgent string, timeout time.Duration) error { + w, err := createParallelstoreWaiter(config, op, project, activity, userAgent) + if err != nil { + return err + } + if err := tpgresource.OperationWait(w, activity, timeout, config.PollInterval); err != nil { + return err + } + rawResponse := []byte(w.CommonOperationWaiter.Op.Response) + if len(rawResponse) == 0 { + return errors.New("`resource` not set in operation response") + } + return json.Unmarshal(rawResponse, response) +} + +func ParallelstoreOperationWaitTime(config *transport_tpg.Config, op map[string]interface{}, project, activity, userAgent string, timeout time.Duration) error { + if val, ok := op["name"]; !ok || val == "" { + // This was a synchronous call - there is no operation to wait for. + return nil + } + w, err := createParallelstoreWaiter(config, op, project, activity, userAgent) + if err != nil { + // If w is nil, the op was synchronous. 
+ return err + } + return tpgresource.OperationWait(w, activity, timeout, config.PollInterval) +} diff --git a/google-beta/services/parallelstore/resource_parallelstore_instance.go b/google-beta/services/parallelstore/resource_parallelstore_instance.go new file mode 100644 index 0000000000..0d1aa9a085 --- /dev/null +++ b/google-beta/services/parallelstore/resource_parallelstore_instance.go @@ -0,0 +1,670 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. +// +// ---------------------------------------------------------------------------- + +package parallelstore + +import ( + "fmt" + "log" + "net/http" + "reflect" + "strings" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +func ResourceParallelstoreInstance() *schema.Resource { + return &schema.Resource{ + Create: resourceParallelstoreInstanceCreate, + Read: resourceParallelstoreInstanceRead, + Update: resourceParallelstoreInstanceUpdate, + Delete: resourceParallelstoreInstanceDelete, + + Importer: &schema.ResourceImporter{ + State: resourceParallelstoreInstanceImport, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(20 * time.Minute), + Update: schema.DefaultTimeout(20 * time.Minute), + Delete: schema.DefaultTimeout(20 * time.Minute), + }, + + CustomizeDiff: 
customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + + Schema: map[string]*schema.Schema{ + "capacity_gib": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Immutable. Storage capacity of Parallelstore instance in Gibibytes (GiB).`, + }, + "instance_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The logical name of the Parallelstore instance in the user project with the following restrictions: + +* Must contain only lowercase letters, numbers, and hyphens. +* Must start with a letter. +* Must be between 1-63 characters. +* Must end with a number or a letter. +* Must be unique within the customer project/ location`, + }, + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Part of 'parent'. See documentation of 'projectsId'.`, + }, + "description": { + Type: schema.TypeString, + Optional: true, + Description: `The description of the instance. 2048 characters or less.`, + }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: `Cloud Labels are a flexible and lightweight mechanism for organizing cloud +resources into groups that reflect a customer's organizational needs and +deployment strategies. Cloud Labels can be used to filter collections of +resources. They can be used to control how resource metrics are aggregated. +And they can be used as arguments to policy management rules (e.g. route, +firewall, load balancing, etc.). + + * Label keys must be between 1 and 63 characters long and must conform to + the following regular expression: 'a-z{0,62}'. + * Label values must be between 0 and 63 characters long and must conform + to the regular expression '[a-z0-9_-]{0,63}'. + * No more than 64 labels can be associated with a given resource. + +See https://goo.gl/xmQnxf for more information on and examples of labels. 
+ +If you plan to use labels in your own code, please note that additional +characters may be allowed in the future. Therefore, you are advised to use +an internal label representation, such as JSON, which doesn't rely upon +specific characters being disallowed. For example, representing labels +as the string: name + "_" + value would prove problematic if we were to +allow "_" in a future release. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "network": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Immutable. The name of the Google Compute Engine +[VPC network](https://cloud.google.com/vpc/docs/vpc) to which the +instance is connected.`, + }, + "reserved_ip_range": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Immutable. Contains the id of the allocated IP address range associated with the +private service access connection for example, "test-default" associated +with IP range 10.0.0.0/29. If no range id is provided all ranges will be +considered.`, + }, + "access_points": { + Type: schema.TypeList, + Computed: true, + Description: `List of access_points. 
+Contains a list of IPv4 addresses used for client side configuration.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "create_time": { + Type: schema.TypeString, + Computed: true, + Description: `The time when the instance was created.`, + }, + "daos_version": { + Type: schema.TypeString, + Computed: true, + Description: `The version of DAOS software running in the instance`, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_reserved_ip_range": { + Type: schema.TypeString, + Computed: true, + Description: `Immutable. Contains the id of the allocated IP address range associated with the +private service access connection for example, "test-default" associated +with IP range 10.0.0.0/29. This field is populated by the service and +and contains the value currently used by the service.`, + }, + "name": { + Type: schema.TypeString, + Computed: true, + Description: `The resource name of the instance, in the format +'projects/{project}/locations/{location}/instances/{instance_id}'`, + }, + "state": { + Type: schema.TypeString, + Computed: true, + Description: `The instance state. 
+ Possible values: + STATE_UNSPECIFIED +CREATING +ACTIVE +DELETING +FAILED`, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "update_time": { + Type: schema.TypeString, + Computed: true, + Description: `The time when the instance was updated.`, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + UseJSONNumber: true, + } +} + +func resourceParallelstoreInstanceCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + obj := make(map[string]interface{}) + descriptionProp, err := expandParallelstoreInstanceDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + capacityGibProp, err := expandParallelstoreInstanceCapacityGib(d.Get("capacity_gib"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("capacity_gib"); !tpgresource.IsEmptyValue(reflect.ValueOf(capacityGibProp)) && (ok || !reflect.DeepEqual(v, capacityGibProp)) { + obj["capacityGib"] = capacityGibProp + } + networkProp, err := expandParallelstoreInstanceNetwork(d.Get("network"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("network"); !tpgresource.IsEmptyValue(reflect.ValueOf(networkProp)) && (ok || !reflect.DeepEqual(v, networkProp)) { + obj["network"] = networkProp + } + reservedIpRangeProp, err := expandParallelstoreInstanceReservedIpRange(d.Get("reserved_ip_range"), d, config) + if err != nil { + return err 
+ } else if v, ok := d.GetOkExists("reserved_ip_range"); !tpgresource.IsEmptyValue(reflect.ValueOf(reservedIpRangeProp)) && (ok || !reflect.DeepEqual(v, reservedIpRangeProp)) { + obj["reservedIpRange"] = reservedIpRangeProp + } + labelsProp, err := expandParallelstoreInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + + url, err := tpgresource.ReplaceVars(d, config, "{{ParallelstoreBasePath}}projects/{{project}}/locations/{{location}}/instances?instanceId={{instance_id}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Creating new Instance: %#v", obj) + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for Instance: %s", err) + } + billingProject = project + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "POST", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, + }) + if err != nil { + return fmt.Errorf("Error creating Instance: %s", err) + } + + // Store the ID now + id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/instances/{{instance_id}}") + if err != nil { + return fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + // Use the resource in the operation response to populate + // identity fields and d.Id() before read + var opRes map[string]interface{} + err = ParallelstoreOperationWaitTimeWithResponse( + config, res, &opRes, project, 
"Creating Instance", userAgent, + d.Timeout(schema.TimeoutCreate)) + if err != nil { + // The resource didn't actually create + d.SetId("") + + return fmt.Errorf("Error waiting to create Instance: %s", err) + } + + if err := d.Set("name", flattenParallelstoreInstanceName(opRes["name"], d, config)); err != nil { + return err + } + + // This may have caused the ID to update - update it if so. + id, err = tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/instances/{{instance_id}}") + if err != nil { + return fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + log.Printf("[DEBUG] Finished creating Instance %q: %#v", d.Id(), res) + + return resourceParallelstoreInstanceRead(d, meta) +} + +func resourceParallelstoreInstanceRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + url, err := tpgresource.ReplaceVars(d, config, "{{ParallelstoreBasePath}}projects/{{project}}/locations/{{location}}/instances/{{instance_id}}") + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for Instance: %s", err) + } + billingProject = project + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ParallelstoreInstance %q", d.Id())) + } + + if err := d.Set("project", project); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + + 
if err := d.Set("name", flattenParallelstoreInstanceName(res["name"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("description", flattenParallelstoreInstanceDescription(res["description"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("state", flattenParallelstoreInstanceState(res["state"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("create_time", flattenParallelstoreInstanceCreateTime(res["createTime"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("update_time", flattenParallelstoreInstanceUpdateTime(res["updateTime"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("labels", flattenParallelstoreInstanceLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("capacity_gib", flattenParallelstoreInstanceCapacityGib(res["capacityGib"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("daos_version", flattenParallelstoreInstanceDaosVersion(res["daosVersion"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("access_points", flattenParallelstoreInstanceAccessPoints(res["accessPoints"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("network", flattenParallelstoreInstanceNetwork(res["network"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("reserved_ip_range", flattenParallelstoreInstanceReservedIpRange(res["reservedIpRange"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("effective_reserved_ip_range", 
flattenParallelstoreInstanceEffectiveReservedIpRange(res["effectiveReservedIpRange"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("terraform_labels", flattenParallelstoreInstanceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("effective_labels", flattenParallelstoreInstanceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + + return nil +} + +func resourceParallelstoreInstanceUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for Instance: %s", err) + } + billingProject = project + + obj := make(map[string]interface{}) + descriptionProp, err := expandParallelstoreInstanceDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + labelsProp, err := expandParallelstoreInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + + url, err := tpgresource.ReplaceVars(d, config, "{{ParallelstoreBasePath}}projects/{{project}}/locations/{{location}}/instances/{{instance_id}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) + headers := make(http.Header) + updateMask := []string{} + + if 
d.HasChange("description") { + updateMask = append(updateMask, "description") + } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } + // updateMask is a URL parameter but not present in the schema, so ReplaceVars + // won't set it + url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) + if err != nil { + return err + } + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + // if updateMask is empty we are not updating anything so skip the post + if len(updateMask) > 0 { + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "PATCH", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, + }) + + if err != nil { + return fmt.Errorf("Error updating Instance %q: %s", d.Id(), err) + } else { + log.Printf("[DEBUG] Finished updating Instance %q: %#v", d.Id(), res) + } + + err = ParallelstoreOperationWaitTime( + config, res, project, "Updating Instance", userAgent, + d.Timeout(schema.TimeoutUpdate)) + + if err != nil { + return err + } + } + + return resourceParallelstoreInstanceRead(d, meta) +} + +func resourceParallelstoreInstanceDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + billingProject := "" + + project, err := tpgresource.GetProject(d, config) + if err != nil { + return fmt.Errorf("Error fetching project for Instance: %s", err) + } + billingProject = project + + url, err := tpgresource.ReplaceVars(d, config, "{{ParallelstoreBasePath}}projects/{{project}}/locations/{{location}}/instances/{{instance_id}}") + if err != nil { + return err + } + + var obj 
map[string]interface{} + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + headers := make(http.Header) + + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "DELETE", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, + }) + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, "Instance") + } + + err = ParallelstoreOperationWaitTime( + config, res, project, "Deleting Instance", userAgent, + d.Timeout(schema.TimeoutDelete)) + + if err != nil { + return err + } + + log.Printf("[DEBUG] Finished deleting Instance %q: %#v", d.Id(), res) + return nil +} + +func resourceParallelstoreInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + config := meta.(*transport_tpg.Config) + if err := tpgresource.ParseImportId([]string{ + "^projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)/instances/(?P<instance_id>[^/]+)$", + "^(?P<project>[^/]+)/(?P<location>[^/]+)/(?P<instance_id>[^/]+)$", + "^(?P<location>[^/]+)/(?P<instance_id>[^/]+)$", + }, d, config); err != nil { + return nil, err + } + + // Replace import id for the resource id + id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/instances/{{instance_id}}") + if err != nil { + return nil, fmt.Errorf("Error constructing id: %s", err) + } + d.SetId(id) + + return []*schema.ResourceData{d}, nil +} + +func flattenParallelstoreInstanceName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v
+} + +func flattenParallelstoreInstanceCreateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceUpdateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenParallelstoreInstanceCapacityGib(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceDaosVersion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceAccessPoints(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceNetwork(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceReservedIpRange(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceEffectiveReservedIpRange(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenParallelstoreInstanceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func 
flattenParallelstoreInstanceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandParallelstoreInstanceDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandParallelstoreInstanceCapacityGib(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandParallelstoreInstanceNetwork(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandParallelstoreInstanceReservedIpRange(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandParallelstoreInstanceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google-beta/services/backupdr/resource_backup_dr_management_server_generated_test.go b/google-beta/services/parallelstore/resource_parallelstore_instance_generated_test.go similarity index 51% rename from google-beta/services/backupdr/resource_backup_dr_management_server_generated_test.go rename to google-beta/services/parallelstore/resource_parallelstore_instance_generated_test.go index a143bae2fa..7aab688410 100644 --- a/google-beta/services/backupdr/resource_backup_dr_management_server_generated_test.go +++ b/google-beta/services/parallelstore/resource_parallelstore_instance_generated_test.go @@ -15,7 +15,7 @@ // // ---------------------------------------------------------------------------- -package backupdr_test +package parallelstore_test import ( "fmt" @@ -26,62 +26,84 @@ import ( 
"github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" ) -func TestAccBackupDRManagementServer_backupDrManagementServerTestExample(t *testing.T) { +func TestAccParallelstoreInstance_parallelstoreInstanceBasicExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "vpc-network-1"), "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), - CheckDestroy: testAccCheckBackupDRManagementServerDestroyProducer(t), + CheckDestroy: testAccCheckParallelstoreInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccBackupDRManagementServer_backupDrManagementServerTestExample(context), + Config: testAccParallelstoreInstance_parallelstoreInstanceBasicExample(context), }, { - ResourceName: "google_backup_dr_management_server.ms-console", + ResourceName: "google_parallelstore_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "name"}, + ImportStateVerifyIgnore: []string{"location", "instance_id", "labels", "terraform_labels"}, }, }, }) } -func testAccBackupDRManagementServer_backupDrManagementServerTestExample(context map[string]interface{}) string { +func testAccParallelstoreInstance_parallelstoreInstanceBasicExample(context map[string]interface{}) string { return acctest.Nprintf(` -data "google_compute_network" "default" { +resource "google_parallelstore_instance" "instance" { + instance_id = 
"instance%{random_suffix}" + location = "us-central1-a" + description = "test instance" + capacity_gib = 12000 + network = google_compute_network.network.name + + labels = { + test = "value" + } provider = google-beta - name = "%{network_name}" + depends_on = [google_service_networking_connection.default] } -resource "google_backup_dr_management_server" "ms-console" { +resource "google_compute_network" "network" { + name = "network%{random_suffix}" + auto_create_subnetworks = true + mtu = 8896 + provider = google-beta +} + + + +# Create an IP address +resource "google_compute_global_address" "private_ip_alloc" { + name = "address%{random_suffix}" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 24 + network = google_compute_network.network.id + provider = google-beta +} + +# Create a private connection +resource "google_service_networking_connection" "default" { + network = google_compute_network.network.id + service = "servicenetworking.googleapis.com" + reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] provider = google-beta - location = "us-central1" - name = "tf-test-ms-console%{random_suffix}" - type = "BACKUP_RESTORE" - networks { - network = data.google_compute_network.default.id - peering_mode = "PRIVATE_SERVICE_ACCESS" - } } `, context) } -func testAccCheckBackupDRManagementServerDestroyProducer(t *testing.T) func(s *terraform.State) error { +func testAccCheckParallelstoreInstanceDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s *terraform.State) error { for name, rs := range s.RootModule().Resources { - if rs.Type != "google_backup_dr_management_server" { + if rs.Type != "google_parallelstore_instance" { continue } if strings.HasPrefix(name, "data.") { @@ -90,7 +112,7 @@ func testAccCheckBackupDRManagementServerDestroyProducer(t *testing.T) func(s *t config := acctest.GoogleProviderConfig(t) - url, err := tpgresource.ReplaceVarsForTest(config, rs, 
"{{BackupDRBasePath}}projects/{{project}}/locations/{{location}}/managementServers/{{name}}") + url, err := tpgresource.ReplaceVarsForTest(config, rs, "{{ParallelstoreBasePath}}projects/{{project}}/locations/{{location}}/instances/{{instance_id}}") if err != nil { return err } @@ -109,7 +131,7 @@ func testAccCheckBackupDRManagementServerDestroyProducer(t *testing.T) func(s *t UserAgent: config.UserAgent, }) if err == nil { - return fmt.Errorf("BackupDRManagementServer still exists at %s", url) + return fmt.Errorf("ParallelstoreInstance still exists at %s", url) } } diff --git a/google-beta/services/parallelstore/resource_parallelstore_instance_sweeper.go b/google-beta/services/parallelstore/resource_parallelstore_instance_sweeper.go new file mode 100644 index 0000000000..540a8f5d10 --- /dev/null +++ b/google-beta/services/parallelstore/resource_parallelstore_instance_sweeper.go @@ -0,0 +1,143 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// ---------------------------------------------------------------------------- +// +// *** AUTO GENERATED CODE *** Type: MMv1 *** +// +// ---------------------------------------------------------------------------- +// +// This file is automatically generated by Magic Modules and manual +// changes will be clobbered when the file is regenerated. +// +// Please read more about how to change this file in +// .github/CONTRIBUTING.md. 
+// +// ---------------------------------------------------------------------------- + +package parallelstore + +import ( + "context" + "log" + "strings" + "testing" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/sweeper" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +func init() { + sweeper.AddTestSweepers("ParallelstoreInstance", testSweepParallelstoreInstance) +} + +// At the time of writing, the CI only passes us-central1 as the region +func testSweepParallelstoreInstance(region string) error { + resourceName := "ParallelstoreInstance" + log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) + + config, err := sweeper.SharedConfigForRegion(region) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) + return err + } + + err = config.LoadAndValidate(context.Background()) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) + return err + } + + t := &testing.T{} + billingId := envvar.GetTestBillingAccountFromEnv(t) + + // Setup variables to replace in list template + d := &tpgresource.ResourceDataMock{ + FieldsInSchema: map[string]interface{}{ + "project": config.Project, + "region": region, + "location": region, + "zone": "-", + "billing_account": billingId, + }, + } + + listTemplate := strings.Split("https://parallelstore.googleapis.com/v1beta/projects/{{project}}/locations/{{location}}/instances", "?")[0] + listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) + return nil + } + + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "GET", + Project: config.Project, + RawURL: listUrl, + UserAgent: 
config.UserAgent, + }) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) + return nil + } + + resourceList, ok := res["instances"] + if !ok { + log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") + return nil + } + + rl := resourceList.([]interface{}) + + log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) + // Keep count of items that aren't sweepable for logging. + nonPrefixCount := 0 + for _, ri := range rl { + obj := ri.(map[string]interface{}) + var name string + // Id detected in the delete URL, attempt to use id. + if obj["id"] != nil { + name = tpgresource.GetResourceNameFromSelfLink(obj["id"].(string)) + } else if obj["name"] != nil { + name = tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) + } else { + log.Printf("[INFO][SWEEPER_LOG] %s resource name and id were nil", resourceName) + return nil + } + // Skip resources that shouldn't be swept + if !sweeper.IsSweepableTestResource(name) { + nonPrefixCount++ + continue + } + + deleteTemplate := "https://parallelstore.googleapis.com/v1beta/projects/{{project}}/locations/{{location}}/instances/{{instance_id}}" + deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) + return nil + } + deleteUrl = deleteUrl + name + + // Don't wait on operations as we may have a lot to delete + _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "DELETE", + Project: config.Project, + RawURL: deleteUrl, + UserAgent: config.UserAgent, + }) + if err != nil { + log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) + } else { + log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) + } + } + + if nonPrefixCount > 0 { + log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.",
nonPrefixCount) + } + + return nil +} diff --git a/google-beta/services/parallelstore/resource_parallelstore_instance_test.go b/google-beta/services/parallelstore/resource_parallelstore_instance_test.go new file mode 100644 index 0000000000..4bc4704da8 --- /dev/null +++ b/google-beta/services/parallelstore/resource_parallelstore_instance_test.go @@ -0,0 +1,135 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package parallelstore_test + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" +) + +func TestAccParallelstoreInstance_parallelstoreInstanceBasicExample_update(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t), + CheckDestroy: testAccCheckParallelstoreInstanceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccParallelstoreInstance_parallelstoreInstanceBasicExample_basic(context), + }, + { + ResourceName: "google_parallelstore_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "instance_id", "labels", "terraform_labels"}, + }, + { + Config: testAccParallelstoreInstance_parallelstoreInstanceBasicExample_update(context), + }, + { + ResourceName: "google_parallelstore_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"location", "instance_id", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccParallelstoreInstance_parallelstoreInstanceBasicExample_basic(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_parallelstore_instance" "instance" { + instance_id = "instance%{random_suffix}" + location = "us-central1-a" + 
description = "test instance" + capacity_gib = 12000 + network = google_compute_network.network.name + reserved_ip_range = google_compute_global_address.private_ip_alloc.name + labels = { + test = "value" + } + provider = google-beta + depends_on = [google_service_networking_connection.default] +} + +resource "google_compute_network" "network" { + name = "network%{random_suffix}" + auto_create_subnetworks = true + mtu = 8896 + provider = google-beta +} + + + +# Create an IP address +resource "google_compute_global_address" "private_ip_alloc" { + name = "address%{random_suffix}" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 24 + network = google_compute_network.network.id + provider = google-beta +} + +# Create a private connection +resource "google_service_networking_connection" "default" { + network = google_compute_network.network.id + service = "servicenetworking.googleapis.com" + reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] + provider = google-beta +} +`, context) +} + +func testAccParallelstoreInstance_parallelstoreInstanceBasicExample_update(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_parallelstore_instance" "instance" { + instance_id = "instance%{random_suffix}" + location = "us-central1-a" + description = "test instance updated" + capacity_gib = 12000 + network = google_compute_network.network.name + + labels = { + test = "value23" + } + provider = google-beta + depends_on = [google_service_networking_connection.default] +} + +resource "google_compute_network" "network" { + name = "network%{random_suffix}" + auto_create_subnetworks = true + mtu = 8896 + provider = google-beta +} + + + +# Create an IP address +resource "google_compute_global_address" "private_ip_alloc" { + name = "address%{random_suffix}" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 24 + network = google_compute_network.network.id + provider = google-beta +} + +# 
Create a private connection +resource "google_service_networking_connection" "default" { + network = google_compute_network.network.id + service = "servicenetworking.googleapis.com" + reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] + provider = google-beta +} +`, context) +} diff --git a/google-beta/services/privateca/privateca_ca_utils.go b/google-beta/services/privateca/privateca_ca_utils.go index f5305fe623..c897728983 100644 --- a/google-beta/services/privateca/privateca_ca_utils.go +++ b/google-beta/services/privateca/privateca_ca_utils.go @@ -230,12 +230,14 @@ func activateSubCAWithFirstPartyIssuer(config *transport_tpg.Config, d *schema.R return fmt.Errorf("Error creating Certificate: %s", err) } signedCACert := res["pemCertificate"] + signerCertChain := res["pemCertificateChain"] // 4. activate sub CA with the signed CA cert. activateObj := make(map[string]interface{}) activateObj["pemCaCertificate"] = signedCACert activateObj["subordinateConfig"] = make(map[string]interface{}) - activateObj["subordinateConfig"].(map[string]interface{})["certificateAuthority"] = issuer + activateObj["subordinateConfig"].(map[string]interface{})["pemIssuerChain"] = make(map[string]interface{}) + activateObj["subordinateConfig"].(map[string]interface{})["pemIssuerChain"].(map[string]interface{})["pemCertificates"] = signerCertChain activateUrl, err := tpgresource.ReplaceVars(d, config, "{{PrivatecaBasePath}}projects/{{project}}/locations/{{location}}/caPools/{{pool}}/certificateAuthorities/{{certificate_authority_id}}:activate") if err != nil { diff --git a/google-beta/services/privateca/resource_privateca_ca_pool.go b/google-beta/services/privateca/resource_privateca_ca_pool.go index 03fabf5d85..d31e2377bf 100644 --- a/google-beta/services/privateca/resource_privateca_ca_pool.go +++ b/google-beta/services/privateca/resource_privateca_ca_pool.go @@ -20,6 +20,7 @@ package privateca import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ 
-673,6 +674,7 @@ func resourcePrivatecaCaPoolCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -681,6 +683,7 @@ func resourcePrivatecaCaPoolCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating CaPool: %s", err) @@ -743,12 +746,14 @@ func resourcePrivatecaCaPoolRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("PrivatecaCaPool %q", d.Id())) @@ -821,6 +826,7 @@ func resourcePrivatecaCaPoolUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating CaPool %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("issuance_policy") { @@ -856,6 +862,7 @@ func resourcePrivatecaCaPoolUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -903,6 +910,8 @@ func resourcePrivatecaCaPoolDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting CaPool %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -912,6 +921,7 @@ func resourcePrivatecaCaPoolDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
"CaPool") diff --git a/google-beta/services/privateca/resource_privateca_certificate.go b/google-beta/services/privateca/resource_privateca_certificate.go index 5a4514328d..59da7a3f22 100644 --- a/google-beta/services/privateca/resource_privateca_certificate.go +++ b/google-beta/services/privateca/resource_privateca_certificate.go @@ -20,6 +20,7 @@ package privateca import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -624,6 +625,23 @@ leading period (like '.example.com')`, }, }, }, + "subject_key_id": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `When specified, this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CA service, which was not generated using method (1) described in RFC 5280 section 4.2.1.2.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The value of the KeyId in lowercase hexadecimal.`, + }, + }, + }, + }, }, }, ExactlyOneOf: []string{"pem_csr", "config"}, @@ -1340,6 +1358,7 @@ func resourcePrivatecaCertificateCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) // Only include linked certificate authority if the user specified it if p, ok := d.GetOk("certificate_authority"); ok { url, err = transport_tpg.AddQueryParams(url, map[string]string{"issuingCertificateAuthorityId": p.(string)}) @@ -1355,6 +1374,7 @@ func resourcePrivatecaCertificateCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Certificate: %s", err) @@ -1397,12 +1417,14 @@ func resourcePrivatecaCertificateRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err :=
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("PrivatecaCertificate %q", d.Id())) @@ -1487,6 +1509,7 @@ func resourcePrivatecaCertificateUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating Certificate %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("effective_labels") { @@ -1514,6 +1537,7 @@ func resourcePrivatecaCertificateUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1554,6 +1578,8 @@ func resourcePrivatecaCertificateDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Certificate %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1563,6 +1589,7 @@ func resourcePrivatecaCertificateDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Certificate") @@ -2310,6 +2337,8 @@ func flattenPrivatecaCertificateConfig(v interface{}, d *schema.ResourceData, co flattenPrivatecaCertificateConfigX509Config(original["x509Config"], d, config) transformed["subject_config"] = flattenPrivatecaCertificateConfigSubjectConfig(original["subjectConfig"], d, config) + transformed["subject_key_id"] = + flattenPrivatecaCertificateConfigSubjectKeyId(original["subjectKeyId"], d, config) transformed["public_key"] = flattenPrivatecaCertificateConfigPublicKey(original["publicKey"], d, config) return []interface{}{transformed} @@ -2444,6 +2473,23 @@ func 
flattenPrivatecaCertificateConfigSubjectConfigSubjectAltNameIpAddresses(v i return v } +func flattenPrivatecaCertificateConfigSubjectKeyId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["key_id"] = + flattenPrivatecaCertificateConfigSubjectKeyIdKeyId(original["keyId"], d, config) + return []interface{}{transformed} +} +func flattenPrivatecaCertificateConfigSubjectKeyIdKeyId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenPrivatecaCertificateConfigPublicKey(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil @@ -2521,6 +2567,13 @@ func expandPrivatecaCertificateConfig(v interface{}, d tpgresource.TerraformReso transformed["subjectConfig"] = transformedSubjectConfig } + transformedSubjectKeyId, err := expandPrivatecaCertificateConfigSubjectKeyId(original["subject_key_id"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedSubjectKeyId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["subjectKeyId"] = transformedSubjectKeyId + } + transformedPublicKey, err := expandPrivatecaCertificateConfigPublicKey(original["public_key"], d, config) if err != nil { return nil, err @@ -2766,6 +2819,29 @@ func expandPrivatecaCertificateConfigSubjectConfigSubjectAltNameIpAddresses(v in return v, nil } +func expandPrivatecaCertificateConfigSubjectKeyId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedKeyId, err := 
expandPrivatecaCertificateConfigSubjectKeyIdKeyId(original["key_id"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedKeyId); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["keyId"] = transformedKeyId + } + + return transformed, nil +} + +func expandPrivatecaCertificateConfigSubjectKeyIdKeyId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandPrivatecaCertificateConfigPublicKey(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { diff --git a/google-beta/services/privateca/resource_privateca_certificate_authority.go b/google-beta/services/privateca/resource_privateca_certificate_authority.go index 8b478b9564..a4ce08c986 100644 --- a/google-beta/services/privateca/resource_privateca_certificate_authority.go +++ b/google-beta/services/privateca/resource_privateca_certificate_authority.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -590,6 +591,23 @@ leading period (like '.example.com')`, }, }, }, + "subject_key_id": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `When specified, this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CA service, which was not generated using method (1) described in RFC 5280 section 4.2.1.2.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The value of the KeyId in lowercase hexadecimal.`, + }, + }, + }, + }, }, }, }, @@ -709,6 +727,7 @@ and usability purposes only.
The resource name is in the format }, "pem_issuer_chain": { Type: schema.TypeList, + Computed: true, Optional: true, Description: `Contains the PEM certificate chain for the issuers of this CertificateAuthority, but not pem certificate for this CA itself.`, @@ -909,6 +928,7 @@ func resourcePrivatecaCertificateAuthorityCreate(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) // Drop `subordinateConfig` as it can not be set during CA creation. // It can be used to activate CA during post_create or pre_update. delete(obj, "subordinateConfig") @@ -920,6 +940,7 @@ func resourcePrivatecaCertificateAuthorityCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating CertificateAuthority: %s", err) @@ -1018,12 +1039,14 @@ func resourcePrivatecaCertificateAuthorityRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("PrivatecaCertificateAuthority %q", d.Id())) @@ -1135,6 +1158,7 @@ func resourcePrivatecaCertificateAuthorityUpdate(d *schema.ResourceData, meta in } log.Printf("[DEBUG] Updating CertificateAuthority %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("subordinate_config") { @@ -1209,6 +1233,7 @@ func resourcePrivatecaCertificateAuthorityUpdate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1256,6 +1281,7 @@ func resourcePrivatecaCertificateAuthorityDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) if 
d.Get("deletion_protection").(bool) { return fmt.Errorf("cannot destroy CertificateAuthority without setting deletion_protection=false and running `terraform apply`") } @@ -1297,6 +1323,7 @@ func resourcePrivatecaCertificateAuthorityDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "CertificateAuthority") @@ -1359,12 +1386,30 @@ func flattenPrivatecaCertificateAuthorityConfig(v interface{}, d *schema.Resourc return nil } transformed := make(map[string]interface{}) + transformed["subject_key_id"] = + flattenPrivatecaCertificateAuthorityConfigSubjectKeyId(original["subjectKeyId"], d, config) transformed["x509_config"] = flattenPrivatecaCertificateAuthorityConfigX509Config(original["x509Config"], d, config) transformed["subject_config"] = flattenPrivatecaCertificateAuthorityConfigSubjectConfig(original["subjectConfig"], d, config) return []interface{}{transformed} } +func flattenPrivatecaCertificateAuthorityConfigSubjectKeyId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["key_id"] = + flattenPrivatecaCertificateAuthorityConfigSubjectKeyIdKeyId(original["keyId"], d, config) + return []interface{}{transformed} +} +func flattenPrivatecaCertificateAuthorityConfigSubjectKeyIdKeyId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} func flattenPrivatecaCertificateAuthorityConfigX509Config(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { @@ -1538,7 +1583,7 @@ func flattenPrivatecaCertificateAuthoritySubordinateConfig(v interface{}, d *sch return []interface{}{transformed} } func 
flattenPrivatecaCertificateAuthoritySubordinateConfigCertificateAuthority(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
-	return v
+	return d.Get("subordinate_config.0.certificate_authority")
 }
 
 func flattenPrivatecaCertificateAuthoritySubordinateConfigPemIssuerChain(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
@@ -1648,6 +1693,13 @@ func expandPrivatecaCertificateAuthorityConfig(v interface{}, d tpgresource.Terr
 	original := raw.(map[string]interface{})
 	transformed := make(map[string]interface{})
 
+	transformedSubjectKeyId, err := expandPrivatecaCertificateAuthorityConfigSubjectKeyId(original["subject_key_id"], d, config)
+	if err != nil {
+		return nil, err
+	} else if val := reflect.ValueOf(transformedSubjectKeyId); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+		transformed["subjectKeyId"] = transformedSubjectKeyId
+	}
+
 	transformedX509Config, err := expandPrivatecaCertificateAuthorityConfigX509Config(original["x509_config"], d, config)
 	if err != nil {
 		return nil, err
@@ -1665,6 +1717,29 @@ func expandPrivatecaCertificateAuthorityConfig(v interface{}, d tpgresource.Terr
 	return transformed, nil
 }
 
+func expandPrivatecaCertificateAuthorityConfigSubjectKeyId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	l := v.([]interface{})
+	if len(l) == 0 || l[0] == nil {
+		return nil, nil
+	}
+	raw := l[0]
+	original := raw.(map[string]interface{})
+	transformed := make(map[string]interface{})
+
+	transformedKeyId, err := expandPrivatecaCertificateAuthorityConfigSubjectKeyIdKeyId(original["key_id"], d, config)
+	if err != nil {
+		return nil, err
+	} else if val := reflect.ValueOf(transformedKeyId); val.IsValid() && !tpgresource.IsEmptyValue(val) {
+		transformed["keyId"] = transformedKeyId
+	}
+
+	return transformed, nil
+}
+
+func expandPrivatecaCertificateAuthorityConfigSubjectKeyIdKeyId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
 func expandPrivatecaCertificateAuthorityConfigX509Config(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
 	if v == nil {
 		return v, nil
diff --git a/google-beta/services/privateca/resource_privateca_certificate_authority_generated_test.go b/google-beta/services/privateca/resource_privateca_certificate_authority_generated_test.go
index 574c7e227d..739ce3a5b2 100644
--- a/google-beta/services/privateca/resource_privateca_certificate_authority_generated_test.go
+++ b/google-beta/services/privateca/resource_privateca_certificate_authority_generated_test.go
@@ -134,7 +134,7 @@ func TestAccPrivatecaCertificateAuthority_privatecaCertificateAuthoritySubordina
 				ResourceName:            "google_privateca_certificate_authority.default",
 				ImportState:             true,
 				ImportStateVerify:       true,
-				ImportStateVerifyIgnore: []string{"pem_ca_certificate", "ignore_active_certificates_on_deletion", "skip_grace_period", "location", "certificate_authority_id", "pool", "deletion_protection", "labels", "terraform_labels"},
+				ImportStateVerifyIgnore: []string{"pem_ca_certificate", "ignore_active_certificates_on_deletion", "skip_grace_period", "location", "certificate_authority_id", "pool", "deletion_protection", "subordinate_config.0.certificate_authority", "labels", "terraform_labels"},
 			},
 		},
 	})
@@ -239,6 +239,92 @@ resource "google_privateca_certificate_authority" "default" {
 `, context)
 }
 
+func TestAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityCustomSkiExample(t *testing.T) {
+	acctest.SkipIfVcr(t)
+	t.Parallel()
+
+	context := map[string]interface{}{
+		"kms_key_name":        acctest.BootstrapKMSKeyWithPurposeInLocation(t, "ASYMMETRIC_SIGN", "us-central1").CryptoKey.Name,
+		"pool_name":           acctest.BootstrapSharedCaPoolInLocation(t, "us-central1"),
+		"pool_location":       "us-central1",
+		"deletion_protection": false,
+		"random_suffix":       acctest.RandString(t, 10),
+	}
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckPrivatecaCertificateAuthorityDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityCustomSkiExample(context),
+			},
+			{
+				ResourceName:            "google_privateca_certificate_authority.default",
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"pem_ca_certificate", "ignore_active_certificates_on_deletion", "skip_grace_period", "location", "certificate_authority_id", "pool", "deletion_protection", "labels", "terraform_labels"},
+			},
+		},
+	})
+}
+
+func testAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityCustomSkiExample(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_privateca_certificate_authority" "default" {
+  // This example assumes this pool already exists.
+  // Pools cannot be deleted in normal test circumstances, so we depend on static pools
+  pool                     = "%{pool_name}"
+  certificate_authority_id = "tf-test-my-certificate-authority%{random_suffix}"
+  location                 = "%{pool_location}"
+  deletion_protection      = "%{deletion_protection}"
+  config {
+    subject_config {
+      subject {
+        organization = "HashiCorp"
+        common_name  = "my-certificate-authority"
+      }
+      subject_alt_name {
+        dns_names = ["hashicorp.com"]
+      }
+    }
+    subject_key_id {
+      key_id = "4cf3372289b1d411b999dbb9ebcd44744b6b2fca"
+    }
+    x509_config {
+      ca_options {
+        is_ca                  = true
+        max_issuer_path_length = 10
+      }
+      key_usage {
+        base_key_usage {
+          digital_signature  = true
+          content_commitment = true
+          key_encipherment   = false
+          data_encipherment  = true
+          key_agreement      = true
+          cert_sign          = true
+          crl_sign           = true
+          decipher_only      = true
+        }
+        extended_key_usage {
+          server_auth      = true
+          client_auth      = false
+          email_protection = true
+          code_signing     = true
+          time_stamping    = true
+        }
+      }
+    }
+  }
+  lifetime = "86400s"
+  key_spec {
+    cloud_kms_key_version = "%{kms_key_name}/cryptoKeyVersions/1"
+  }
+}
+`, context)
+}
+
 func testAccCheckPrivatecaCertificateAuthorityDestroyProducer(t *testing.T) func(s *terraform.State) error {
 	return func(s *terraform.State) error {
 		for name, rs := range s.RootModule().Resources {
diff --git a/google-beta/services/privateca/resource_privateca_certificate_authority_test.go b/google-beta/services/privateca/resource_privateca_certificate_authority_test.go
index 60f48ddb06..53a689427c 100644
--- a/google-beta/services/privateca/resource_privateca_certificate_authority_test.go
+++ b/google-beta/services/privateca/resource_privateca_certificate_authority_test.go
@@ -122,6 +122,33 @@ func TestAccPrivatecaCertificateAuthority_rootCaManageDesiredState(t *testing.T)
 	})
 }
 
+func TestAccPrivatecaCertificateAuthority_subordinateCaActivatedByFirstPartyIssuerOnCreation(t *testing.T) {
+	t.Parallel()
+	acctest.SkipIfVcr(t)
+
+	random_suffix := acctest.RandString(t, 10)
+	context := map[string]interface{}{
+		"root_location": "us-central1",
+		"sub_location":  "australia-southeast1",
+		"random_suffix": random_suffix,
+	}
+
+	resourceName := "google_privateca_certificate_authority.sub-1"
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckPrivatecaCertificateAuthorityDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccPrivatecaCertificateAuthority_privatecaCertificateAuthoritySubordinateWithFirstPartyIssuer(context),
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr(resourceName, "state", "ENABLED"),
+				),
+			},
+		},
+	})
+}
+
 func testAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityBasicRoot(context map[string]interface{}) string {
 	return acctest.Nprintf(`
 resource "google_privateca_certificate_authority" "default" {
@@ -287,3 +314,139 @@ resource "google_privateca_certificate_authority" "default" {
 }
 `, context)
 }
+
+// testAccPrivatecaCertificateAuthority_privatecaCertificateAuthoritySubordinateWithFirstPartyIssuer provides a config
+// which contains
+// * A CaPool for root CA
+// * A root CA
+// * A CaPool for sub CA
+// * A subordinate CA which should be activated by the above root CA
+func testAccPrivatecaCertificateAuthority_privatecaCertificateAuthoritySubordinateWithFirstPartyIssuer(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_privateca_ca_pool" "root-pool" {
+  name     = "root-pool-%{random_suffix}"
+  location = "%{root_location}"
+  tier     = "ENTERPRISE"
+  publishing_options {
+    publish_ca_cert = true
+    publish_crl     = true
+  }
+}
+
+resource "google_privateca_certificate_authority" "root-1" {
+  pool                     = google_privateca_ca_pool.root-pool.name
+  certificate_authority_id = "tf-test-my-certificate-authority-root-%{random_suffix}"
+  location                 = "%{root_location}"
+  config {
+    subject_config {
+      subject {
+        organization = "HashiCorp"
+        common_name  = "my-certificate-authority"
+      }
+      subject_alt_name {
+        dns_names = ["hashicorp.com"]
+      }
+    }
+    x509_config {
+      ca_options {
+        is_ca                  = true
+        max_issuer_path_length = 10
+      }
+      key_usage {
+        base_key_usage {
+          digital_signature  = true
+          content_commitment = true
+          key_encipherment   = false
+          data_encipherment  = true
+          key_agreement      = true
+          cert_sign          = true
+          crl_sign           = true
+          decipher_only      = true
+        }
+        extended_key_usage {
+          server_auth      = true
+          client_auth      = false
+          email_protection = true
+          code_signing     = true
+          time_stamping    = true
+        }
+      }
+    }
+  }
+  lifetime = "86400s"
+  key_spec {
+    algorithm = "RSA_PKCS1_4096_SHA256"
+  }
+
+  // Disable CA deletion related safe checks for easier cleanup.
+  deletion_protection                    = false
+  skip_grace_period                      = true
+  ignore_active_certificates_on_deletion = true
+}
+
+resource "google_privateca_ca_pool" "sub-pool" {
+  name     = "sub-pool-%{random_suffix}"
+  location = "%{sub_location}"
+  tier     = "ENTERPRISE"
+  publishing_options {
+    publish_ca_cert = true
+    publish_crl     = true
+  }
+}
+
+resource "google_privateca_certificate_authority" "sub-1" {
+  pool                     = google_privateca_ca_pool.sub-pool.name
+  certificate_authority_id = "tf-test-my-certificate-authority-sub-%{random_suffix}"
+  location                 = "%{sub_location}"
+  subordinate_config {
+    certificate_authority = google_privateca_certificate_authority.root-1.name
+  }
+  config {
+    subject_config {
+      subject {
+        organization = "HashiCorp"
+        common_name  = "my-certificate-authority"
+      }
+      subject_alt_name {
+        dns_names = ["hashicorp.com"]
+      }
+    }
+    x509_config {
+      ca_options {
+        is_ca                  = true
+        max_issuer_path_length = 10
+      }
+      key_usage {
+        base_key_usage {
+          digital_signature  = true
+          content_commitment = true
+          key_encipherment   = false
+          data_encipherment  = true
+          key_agreement      = true
+          cert_sign          = true
+          crl_sign           = true
+          decipher_only      = true
+        }
+        extended_key_usage {
+          server_auth      = true
+          client_auth      = false
+          email_protection = true
+          code_signing     = true
+          time_stamping    = true
+        }
+      }
+    }
+  }
+  lifetime = "86400s"
+  key_spec {
+    algorithm = "RSA_PKCS1_4096_SHA256"
+  }
+  type = "SUBORDINATE"
+
+  // Disable CA deletion related safe checks for easier cleanup.
+  deletion_protection                    = false
+  skip_grace_period                      = true
+  ignore_active_certificates_on_deletion = true
+}
+`, context)
+}
diff --git a/google-beta/services/privateca/resource_privateca_certificate_generated_test.go b/google-beta/services/privateca/resource_privateca_certificate_generated_test.go
index 02ba7a51a4..24b39addac 100644
--- a/google-beta/services/privateca/resource_privateca_certificate_generated_test.go
+++ b/google-beta/services/privateca/resource_privateca_certificate_generated_test.go
@@ -531,6 +531,128 @@ resource "google_privateca_certificate" "default" {
 `, context)
 }
 
+func TestAccPrivatecaCertificate_privatecaCertificateCustomSkiExample(t *testing.T) {
+	t.Parallel()
+
+	context := map[string]interface{}{
+		"project":       envvar.GetTestProjectFromEnv(),
+		"random_suffix": acctest.RandString(t, 10),
+	}
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
+		CheckDestroy:             testAccCheckPrivatecaCertificateDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccPrivatecaCertificate_privatecaCertificateCustomSkiExample(context),
+			},
+			{
+				ResourceName:            "google_privateca_certificate.default",
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority", "labels", "terraform_labels"},
+			},
+		},
+	})
+}
+
+func testAccPrivatecaCertificate_privatecaCertificateCustomSkiExample(context map[string]interface{}) string {
+	return acctest.Nprintf(`
+resource "google_privateca_ca_pool" "default" {
+  location = "us-central1"
+  name     = "tf-test-my-pool%{random_suffix}"
+  tier     = "ENTERPRISE"
+}
+
+resource "google_privateca_certificate_authority" "default" {
+  location                 = "us-central1"
+  pool                     = google_privateca_ca_pool.default.name
+  certificate_authority_id = "my-authority"
+  config {
+    subject_config {
+      subject {
+        organization = "HashiCorp"
+        common_name  = "my-certificate-authority"
+      }
+      subject_alt_name {
+        dns_names = ["hashicorp.com"]
+      }
+    }
+    x509_config {
+      ca_options {
+        is_ca = true
+      }
+      key_usage {
+        base_key_usage {
+          digital_signature = true
+          cert_sign         = true
+          crl_sign          = true
+        }
+        extended_key_usage {
+          server_auth = true
+        }
+      }
+    }
+  }
+  lifetime = "86400s"
+  key_spec {
+    algorithm = "RSA_PKCS1_4096_SHA256"
+  }
+
+  // Disable CA deletion related safe checks for easier cleanup.
+  deletion_protection                    = false
+  skip_grace_period                      = true
+  ignore_active_certificates_on_deletion = true
+}
+
+
+resource "google_privateca_certificate" "default" {
+  location = "us-central1"
+  pool     = google_privateca_ca_pool.default.name
+  name     = "tf-test-my-certificate%{random_suffix}"
+  lifetime = "860s"
+  config {
+    subject_config {
+      subject {
+        common_name         = "san1.example.com"
+        country_code        = "us"
+        organization        = "google"
+        organizational_unit = "enterprise"
+        locality            = "mountain view"
+        province            = "california"
+        street_address      = "1600 amphitheatre parkway"
+        postal_code         = "94109"
+      }
+    }
+    subject_key_id {
+      key_id = "4cf3372289b1d411b999dbb9ebcd44744b6b2fca"
+    }
+    x509_config {
+      ca_options {
+        is_ca = false
+      }
+      key_usage {
+        base_key_usage {
+          crl_sign = true
+        }
+        extended_key_usage {
+          server_auth = true
+        }
+      }
+    }
+    public_key {
+      format = "PEM"
+      key    = filebase64("test-fixtures/rsa_public.pem")
+    }
+  }
+  // Certificates require an authority to exist in the pool, though they don't
+  // need to be explicitly connected to it
+  depends_on = [google_privateca_certificate_authority.default]
+}
+`, context)
+}
+
 func testAccCheckPrivatecaCertificateDestroyProducer(t *testing.T) func(s *terraform.State) error {
 	return func(s *terraform.State) error {
 		for name, rs := range s.RootModule().Resources {
diff --git a/google-beta/services/publicca/resource_public_ca_external_account_key.go b/google-beta/services/publicca/resource_public_ca_external_account_key.go
index 860589a4d9..236d76e7b3 100644
--- a/google-beta/services/publicca/resource_public_ca_external_account_key.go
+++ b/google-beta/services/publicca/resource_public_ca_external_account_key.go
@@ -20,6 +20,7 @@ package publicca
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"time"
 
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
@@ -109,6 +110,7 @@ func resourcePublicCAExternalAccountKeyCreate(d *schema.ResourceData, meta inter
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -117,6 +119,7 @@ func resourcePublicCAExternalAccountKeyCreate(d *schema.ResourceData, meta inter
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating ExternalAccountKey: %s", err)
diff --git a/google-beta/services/pubsub/resource_pubsub_schema.go b/google-beta/services/pubsub/resource_pubsub_schema.go
index d5ae6fe735..4b2678a79a 100644
--- a/google-beta/services/pubsub/resource_pubsub_schema.go
+++ b/google-beta/services/pubsub/resource_pubsub_schema.go
@@ -20,6 +20,7 @@ package pubsub
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"time"
 
@@ -135,6 +136,7 @@ func resourcePubsubSchemaCreate(d *schema.ResourceData, meta interface{}) error
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -143,6 +145,7 @@ func resourcePubsubSchemaCreate(d *schema.ResourceData, meta interface{}) error
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Schema: %s", err)
@@ -227,12 +230,14 @@ func resourcePubsubSchemaRead(d *schema.ResourceData, meta interface{}) error {
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("PubsubSchema %q", d.Id()))
@@ -301,6 +306,7 @@ func resourcePubsubSchemaUpdate(d *schema.ResourceData, meta interface{}) error
 	}
 
 	log.Printf("[DEBUG] Updating Schema %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	// err == nil indicates that the billing_project value was found
 	if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
@@ -315,6 +321,7 @@ func resourcePubsubSchemaUpdate(d *schema.ResourceData, meta interface{}) error
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -353,6 +360,8 @@ func resourcePubsubSchemaDelete(d *schema.ResourceData, meta interface{}) error
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Schema %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -362,6 +371,7 @@ func resourcePubsubSchemaDelete(d *schema.ResourceData, meta interface{}) error
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Schema")
diff --git a/google-beta/services/pubsub/resource_pubsub_subscription.go b/google-beta/services/pubsub/resource_pubsub_subscription.go
index ea4bc8069f..af9aac7b86 100644
--- a/google-beta/services/pubsub/resource_pubsub_subscription.go
+++ b/google-beta/services/pubsub/resource_pubsub_subscription.go
@@ -20,6 +20,7 @@ package pubsub
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"regexp"
 	"strings"
@@ -621,6 +622,7 @@ func resourcePubsubSubscriptionCreate(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "PUT",
@@ -629,6 +631,7 @@ func resourcePubsubSubscriptionCreate(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Subscription: %s", err)
@@ -718,12 +721,14 @@ func resourcePubsubSubscriptionRead(d *schema.ResourceData, meta interface{}) er
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("PubsubSubscription %q", d.Id()))
@@ -882,6 +887,7 @@ func resourcePubsubSubscriptionUpdate(d *schema.ResourceData, meta interface{})
 	}
 
 	log.Printf("[DEBUG] Updating Subscription %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	updateMask := []string{}
 
 	if d.HasChange("bigquery_config") {
@@ -949,6 +955,7 @@ func resourcePubsubSubscriptionUpdate(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -989,6 +996,8 @@ func resourcePubsubSubscriptionDelete(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Subscription %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -998,6 +1007,7 @@ func resourcePubsubSubscriptionDelete(d *schema.ResourceData, meta interface{})
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Subscription")
diff --git a/google-beta/services/pubsub/resource_pubsub_topic.go b/google-beta/services/pubsub/resource_pubsub_topic.go
index 6a90eb2e30..7b4ca56c04 100644
--- a/google-beta/services/pubsub/resource_pubsub_topic.go
+++ b/google-beta/services/pubsub/resource_pubsub_topic.go
@@ -20,6 +20,7 @@ package pubsub
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -291,6 +292,7 @@ func resourcePubsubTopicCreate(d *schema.ResourceData, meta interface{}) error {
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "PUT",
@@ -299,6 +301,7 @@ func resourcePubsubTopicCreate(d *schema.ResourceData, meta interface{}) error {
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 		ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.PubsubTopicProjectNotReady},
 	})
 	if err != nil {
@@ -390,12 +393,14 @@ func resourcePubsubTopicRead(d *schema.ResourceData, meta interface{}) error {
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 		ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.PubsubTopicProjectNotReady},
 	})
 	if err != nil {
@@ -501,6 +506,7 @@ func resourcePubsubTopicUpdate(d *schema.ResourceData, meta interface{}) error {
 	}
 
 	log.Printf("[DEBUG] Updating Topic %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	updateMask := []string{}
 
 	if d.HasChange("kms_key_name") {
@@ -548,6 +554,7 @@ func resourcePubsubTopicUpdate(d *schema.ResourceData, meta interface{}) error {
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 		ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.PubsubTopicProjectNotReady},
 	})
 
@@ -589,6 +596,8 @@ func resourcePubsubTopicDelete(d *schema.ResourceData, meta interface{}) error {
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Topic %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -598,6 +607,7 @@ func resourcePubsubTopicDelete(d *schema.ResourceData, meta interface{}) error {
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 		ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.PubsubTopicProjectNotReady},
 	})
 	if err != nil {
diff --git a/google-beta/services/pubsublite/resource_pubsub_lite_reservation.go b/google-beta/services/pubsublite/resource_pubsub_lite_reservation.go
index a836930177..2ff9e70637 100644
--- a/google-beta/services/pubsublite/resource_pubsub_lite_reservation.go
+++ b/google-beta/services/pubsublite/resource_pubsub_lite_reservation.go
@@ -20,6 +20,7 @@ package pubsublite
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -117,6 +118,7 @@ func resourcePubsubLiteReservationCreate(d *schema.ResourceData, meta interface{
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -125,6 +127,7 @@ func resourcePubsubLiteReservationCreate(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Reservation: %s", err)
@@ -167,12 +170,14 @@ func resourcePubsubLiteReservationRead(d *schema.ResourceData, meta interface{})
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("PubsubLiteReservation %q", d.Id()))
@@ -218,6 +223,7 @@ func resourcePubsubLiteReservationUpdate(d *schema.ResourceData, meta interface{
 	}
 
 	log.Printf("[DEBUG] Updating Reservation %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	updateMask := []string{}
 
 	if d.HasChange("throughput_capacity") {
@@ -245,6 +251,7 @@ func resourcePubsubLiteReservationUpdate(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -285,6 +292,8 @@ func resourcePubsubLiteReservationDelete(d *schema.ResourceData, meta interface{
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Reservation %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -294,6 +303,7 @@ func resourcePubsubLiteReservationDelete(d *schema.ResourceData, meta interface{
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Reservation")
diff --git a/google-beta/services/pubsublite/resource_pubsub_lite_subscription.go b/google-beta/services/pubsublite/resource_pubsub_lite_subscription.go
index 34f73154f4..871d1f93ea 100644
--- a/google-beta/services/pubsublite/resource_pubsub_lite_subscription.go
+++ b/google-beta/services/pubsublite/resource_pubsub_lite_subscription.go
@@ -20,6 +20,7 @@ package pubsublite
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"regexp"
 	"strings"
@@ -151,6 +152,7 @@ func resourcePubsubLiteSubscriptionCreate(d *schema.ResourceData, meta interface
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -159,6 +161,7 @@ func resourcePubsubLiteSubscriptionCreate(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Subscription: %s", err)
@@ -201,12 +204,14 @@ func resourcePubsubLiteSubscriptionRead(d *schema.ResourceData, meta interface{}
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("PubsubLiteSubscription %q", d.Id()))
@@ -260,6 +265,7 @@ func resourcePubsubLiteSubscriptionUpdate(d *schema.ResourceData, meta interface
 	}
 
 	log.Printf("[DEBUG] Updating Subscription %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	updateMask := []string{}
 
 	if d.HasChange("delivery_config") {
@@ -287,6 +293,7 @@ func resourcePubsubLiteSubscriptionUpdate(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -327,6 +334,8 @@ func resourcePubsubLiteSubscriptionDelete(d *schema.ResourceData, meta interface
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Subscription %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -336,6 +345,7 @@ func resourcePubsubLiteSubscriptionDelete(d *schema.ResourceData, meta interface
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Subscription")
diff --git a/google-beta/services/pubsublite/resource_pubsub_lite_topic.go b/google-beta/services/pubsublite/resource_pubsub_lite_topic.go
index 67c099112f..2353e69a12 100644
--- a/google-beta/services/pubsublite/resource_pubsub_lite_topic.go
+++ b/google-beta/services/pubsublite/resource_pubsub_lite_topic.go
@@ -20,6 +20,7 @@ package pubsublite
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -208,6 +209,7 @@ func resourcePubsubLiteTopicCreate(d *schema.ResourceData, meta interface{}) err
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -216,6 +218,7 @@ func resourcePubsubLiteTopicCreate(d *schema.ResourceData, meta interface{}) err
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Topic: %s", err)
@@ -258,12 +261,14 @@ func resourcePubsubLiteTopicRead(d *schema.ResourceData, meta interface{}) error
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("PubsubLiteTopic %q", d.Id()))
@@ -332,6 +337,7 @@ func resourcePubsubLiteTopicUpdate(d *schema.ResourceData, meta interface{}) err
 	}
 
 	log.Printf("[DEBUG] Updating Topic %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	updateMask := []string{}
 
 	if d.HasChange("partition_config") {
@@ -367,6 +373,7 @@ func resourcePubsubLiteTopicUpdate(d *schema.ResourceData, meta interface{}) err
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -407,6 +414,8 @@ func resourcePubsubLiteTopicDelete(d *schema.ResourceData, meta interface{}) err
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Topic %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -416,6 +425,7 @@ func resourcePubsubLiteTopicDelete(d *schema.ResourceData, meta interface{}) err
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Topic")
diff --git a/google-beta/services/redis/resource_redis_cluster.go b/google-beta/services/redis/resource_redis_cluster.go
index fc42e9dac9..2feca8317b 100644
--- a/google-beta/services/redis/resource_redis_cluster.go
+++ b/google-beta/services/redis/resource_redis_cluster.go
@@ -20,6 +20,7 @@ package redis
 import (
 	"fmt"
 	"log"
+	"net/http"
 	"reflect"
 	"strings"
 	"time"
@@ -94,6 +95,15 @@ projects/{network_project_id_or_number}/global/networks/{network_id}.`,
 				Description: `Optional. The authorization mode of the Redis cluster. If not provided, auth feature is disabled for the cluster. Default value: "AUTH_MODE_DISABLED" Possible values: ["AUTH_MODE_UNSPECIFIED", "AUTH_MODE_IAM_AUTH", "AUTH_MODE_DISABLED"]`,
 				Default:     "AUTH_MODE_DISABLED",
 			},
+			"node_type": {
+				Type:         schema.TypeString,
+				Computed:     true,
+				Optional:     true,
+				ForceNew:     true,
+				ValidateFunc: verify.ValidateEnum([]string{"REDIS_SHARED_CORE_NANO", "REDIS_HIGHMEM_MEDIUM", "REDIS_HIGHMEM_XLARGE", "REDIS_STANDARD_SMALL", ""}),
+				Description: `The nodeType for the Redis cluster.
+If not provided, REDIS_HIGHMEM_MEDIUM will be used as default Possible values: ["REDIS_SHARED_CORE_NANO", "REDIS_HIGHMEM_MEDIUM", "REDIS_HIGHMEM_XLARGE", "REDIS_STANDARD_SMALL"]`,
+			},
 			"region": {
 				Type:     schema.TypeString,
 				Computed: true,
@@ -161,6 +171,11 @@ projects/{network_project_id}/global/networks/{network_id}.`,
 					},
 				},
 			},
+			"precise_size_gb": {
+				Type:        schema.TypeFloat,
+				Computed:    true,
+				Description: `Output only. Redis memory precise size in GB for the entire cluster.`,
+			},
 			"psc_connections": {
 				Type:     schema.TypeList,
 				Computed: true,
@@ -270,6 +285,12 @@ func resourceRedisClusterCreate(d *schema.ResourceData, meta interface{}) error
 	} else if v, ok := d.GetOkExists("transit_encryption_mode"); !tpgresource.IsEmptyValue(reflect.ValueOf(transitEncryptionModeProp)) && (ok || !reflect.DeepEqual(v, transitEncryptionModeProp)) {
 		obj["transitEncryptionMode"] = transitEncryptionModeProp
 	}
+	nodeTypeProp, err := expandRedisClusterNodeType(d.Get("node_type"), d, config)
+	if err != nil {
+		return err
+	} else if v, ok := d.GetOkExists("node_type"); !tpgresource.IsEmptyValue(reflect.ValueOf(nodeTypeProp)) && (ok || !reflect.DeepEqual(v, nodeTypeProp)) {
+		obj["nodeType"] = nodeTypeProp
+	}
 	pscConfigsProp, err := expandRedisClusterPscConfigs(d.Get("psc_configs"), d, config)
 	if err != nil {
 		return err
@@ -308,6 +329,7 @@ func resourceRedisClusterCreate(d *schema.ResourceData, meta interface{}) error
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "POST",
@@ -316,6 +338,7 @@ func resourceRedisClusterCreate(d *schema.ResourceData, meta interface{}) error
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutCreate),
+		Headers:   headers,
 	})
 	if err != nil {
 		return fmt.Errorf("Error creating Cluster: %s", err)
@@ -368,12 +391,14 @@ func resourceRedisClusterRead(d *schema.ResourceData, meta interface{}) error {
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
 		Method:    "GET",
 		Project:   billingProject,
 		RawURL:    url,
 		UserAgent: userAgent,
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("RedisCluster %q", d.Id()))
@@ -406,6 +431,9 @@ func resourceRedisClusterRead(d *schema.ResourceData, meta interface{}) error {
 	if err := d.Set("transit_encryption_mode", flattenRedisClusterTransitEncryptionMode(res["transitEncryptionMode"], d, config)); err != nil {
 		return fmt.Errorf("Error reading Cluster: %s", err)
 	}
+	if err := d.Set("node_type", flattenRedisClusterNodeType(res["nodeType"], d, config)); err != nil {
+		return fmt.Errorf("Error reading Cluster: %s", err)
+	}
 	if err := d.Set("discovery_endpoints", flattenRedisClusterDiscoveryEndpoints(res["discoveryEndpoints"], d, config)); err != nil {
 		return fmt.Errorf("Error reading Cluster: %s", err)
 	}
@@ -421,6 +449,9 @@ func resourceRedisClusterRead(d *schema.ResourceData, meta interface{}) error {
 	if err := d.Set("size_gb", flattenRedisClusterSizeGb(res["sizeGb"], d, config)); err != nil {
 		return fmt.Errorf("Error reading Cluster: %s", err)
 	}
+	if err := d.Set("precise_size_gb", flattenRedisClusterPreciseSizeGb(res["preciseSizeGb"], d, config)); err != nil {
+		return fmt.Errorf("Error reading Cluster: %s", err)
+	}
 	if err := d.Set("shard_count", flattenRedisClusterShardCount(res["shardCount"], d, config)); err != nil {
 		return fmt.Errorf("Error reading Cluster: %s", err)
 	}
@@ -469,6 +500,7 @@ func resourceRedisClusterUpdate(d *schema.ResourceData, meta interface{}) error
 	}
 
 	log.Printf("[DEBUG] Updating Cluster %q: %#v", d.Id(), obj)
+	headers := make(http.Header)
 
 	updateMask := []string{}
 
 	if d.HasChange("psc_configs") {
@@ -504,6 +536,7 @@ func resourceRedisClusterUpdate(d *schema.ResourceData, meta interface{}) error
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutUpdate),
+		Headers:   headers,
 	})
 
 	if err != nil {
@@ -551,6 +584,8 @@ func resourceRedisClusterDelete(d *schema.ResourceData, meta interface{}) error
 		billingProject = bp
 	}
 
+	headers := make(http.Header)
+
 	log.Printf("[DEBUG] Deleting Cluster %q", d.Id())
 	res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
 		Config:    config,
@@ -560,6 +595,7 @@ func resourceRedisClusterDelete(d *schema.ResourceData, meta interface{}) error
 		UserAgent: userAgent,
 		Body:      obj,
 		Timeout:   d.Timeout(schema.TimeoutDelete),
+		Headers:   headers,
 	})
 	if err != nil {
 		return transport_tpg.HandleNotFoundError(err, d, "Cluster")
@@ -618,6 +654,10 @@ func flattenRedisClusterTransitEncryptionMode(v interface{}, d *schema.ResourceD
 	return v
 }
 
+func flattenRedisClusterNodeType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
+	return v
+}
+
 func flattenRedisClusterDiscoveryEndpoints(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
 	if v == nil {
 		return v
@@ -814,6 +854,10 @@ func flattenRedisClusterSizeGb(v interface{}, d *schema.ResourceData, config *tr
 	return v // let terraform core handle it otherwise
 }
 
+func flattenRedisClusterPreciseSizeGb(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
+	return v
+}
+
 func flattenRedisClusterShardCount(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
 	// Handles the string fixed64 format
 	if strVal, ok := v.(string); ok {
@@ -839,6 +883,10 @@ func expandRedisClusterTransitEncryptionMode(v interface{}, d tpgresource.Terraf
 	return v, nil
 }
 
+func expandRedisClusterNodeType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
+	return v, nil
+}
+
 func expandRedisClusterPscConfigs(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) {
 	l := v.([]interface{})
 	req := make([]interface{}, 0, len(l))
diff --git a/google-beta/services/redis/resource_redis_cluster_generated_test.go b/google-beta/services/redis/resource_redis_cluster_generated_test.go
index 83588d78ad..7d603d0979 100644
--- a/google-beta/services/redis/resource_redis_cluster_generated_test.go
+++ b/google-beta/services/redis/resource_redis_cluster_generated_test.go
@@ -66,6 +66,7 @@ resource "google_redis_cluster" "cluster-ha" {
   }
   region        = "us-central1"
   replica_count = 1
+  node_type     = "REDIS_SHARED_CORE_NANO"
   transit_encryption_mode = "TRANSIT_ENCRYPTION_MODE_DISABLED"
   authorization_mode      = "AUTH_MODE_DISABLED"
   depends_on = [
diff --git a/google-beta/services/redis/resource_redis_cluster_test.go b/google-beta/services/redis/resource_redis_cluster_test.go
index 20166c3e5d..5dbb974711 100644
--- a/google-beta/services/redis/resource_redis_cluster_test.go
+++ b/google-beta/services/redis/resource_redis_cluster_test.go
@@ -10,6 +10,34 @@ import (
 	"github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest"
 )
 
+func TestAccRedisCluster_createClusterWithNodeType(t *testing.T) {
+	t.Parallel()
+
+	name := fmt.Sprintf("tf-test-%d", acctest.RandInt(t))
+
+	acctest.VcrTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.AccTestPreCheck(t) },
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderBetaFactories(t),
+		CheckDestroy:             testAccCheckRedisClusterDestroyProducer(t),
+		Steps: []resource.TestStep{
+			{
+				// create cluster with replica count 1
+				Config: createOrUpdateRedisCluster(name /* replicaCount = */, 1 /* shardCount = */, 3, true /*nodeType = */, "REDIS_STANDARD_SMALL"),
+			},
+			{
+				ResourceName:            "google_redis_cluster.test",
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"psc_configs"},
+			},
+			{
+				// clean up the resource
+				Config: createOrUpdateRedisCluster(name /* replicaCount = */, 0 /* shardCount = */, 3, false /*nodeType = */, "REDIS_STANDARD_SMALL"),
+			},
+		},
+	})
+}
+
 // Validate that replica count is updated for the cluster
 func TestAccRedisCluster_updateReplicaCount(t *testing.T) {
 	t.Parallel()
@@ -23,7 +51,7 @@ func TestAccRedisCluster_updateReplicaCount(t *testing.T) {
 		Steps: []resource.TestStep{
 			{
 				// create cluster with replica count 1
-				Config: createOrUpdateRedisCluster(name /* replicaCount = */, 1 /* shardCount = */, 3, true),
+				Config: createOrUpdateRedisCluster(name /* replicaCount = */, 1 /* shardCount = */, 3, true /* nodeType = */, ""),
 			},
 			{
 				ResourceName: "google_redis_cluster.test",
@@ -33,7 +61,7 @@ func
TestAccRedisCluster_updateReplicaCount(t *testing.T) { }, { // update replica count to 2 - Config: createOrUpdateRedisCluster(name /* replicaCount = */, 2 /* shardCount = */, 3, true), + Config: createOrUpdateRedisCluster(name /* replicaCount = */, 2 /* shardCount = */, 3, true /*nodeType = */, ""), }, { ResourceName: "google_redis_cluster.test", @@ -43,11 +71,11 @@ func TestAccRedisCluster_updateReplicaCount(t *testing.T) { }, { // clean up the resource - Config: createOrUpdateRedisCluster(name /* replicaCount = */, 2 /* shardCount = */, 3, false), + Config: createOrUpdateRedisCluster(name /* replicaCount = */, 2 /* shardCount = */, 3, false /*nodeType = */, ""), }, { // update replica count to 0 - Config: createOrUpdateRedisCluster(name /* replicaCount = */, 0 /* shardCount = */, 3, true), + Config: createOrUpdateRedisCluster(name /* replicaCount = */, 0 /* shardCount = */, 3, true /*nodeType = */, ""), }, { ResourceName: "google_redis_cluster.test", @@ -57,7 +85,7 @@ func TestAccRedisCluster_updateReplicaCount(t *testing.T) { }, { // clean up the resource - Config: createOrUpdateRedisCluster(name /* replicaCount = */, 0 /* shardCount = */, 3, false), + Config: createOrUpdateRedisCluster(name /* replicaCount = */, 0 /* shardCount = */, 3, false /*nodeType = */, ""), }, }, }) @@ -76,7 +104,7 @@ func TestAccRedisCluster_updateShardCount(t *testing.T) { Steps: []resource.TestStep{ { // create cluster with shard count 3 - Config: createOrUpdateRedisCluster(name /* replicaCount = */, 1 /* shardCount = */, 3, true), + Config: createOrUpdateRedisCluster(name /* replicaCount = */, 1 /* shardCount = */, 3, true /*nodeType = */, ""), }, { ResourceName: "google_redis_cluster.test", @@ -86,7 +114,7 @@ func TestAccRedisCluster_updateShardCount(t *testing.T) { }, { // update shard count to 5 - Config: createOrUpdateRedisCluster(name /* replicaCount = */, 1 /* shardCount = */, 5, true), + Config: createOrUpdateRedisCluster(name /* replicaCount = */, 1 /* shardCount = */, 5, 
true /*nodeType = */, ""), }, { ResourceName: "google_redis_cluster.test", @@ -96,13 +124,13 @@ func TestAccRedisCluster_updateShardCount(t *testing.T) { }, { // clean up the resource - Config: createOrUpdateRedisCluster(name /* replicaCount = */, 1 /* shardCount = */, 5, false), + Config: createOrUpdateRedisCluster(name /* replicaCount = */, 1 /* shardCount = */, 5, false /* nodeType = */, ""), }, }, }) } -func createOrUpdateRedisCluster(name string, replicaCount int, shardCount int, preventDestroy bool) string { +func createOrUpdateRedisCluster(name string, replicaCount int, shardCount int, preventDestroy bool, nodeType string) string { lifecycleBlock := "" if preventDestroy { lifecycleBlock = ` @@ -116,6 +144,7 @@ resource "google_redis_cluster" "test" { name = "%s" replica_count = %d shard_count = %d + node_type = "%s" region = "us-central1" psc_configs { network = google_compute_network.producer_net.id @@ -151,5 +180,5 @@ resource "google_compute_network" "producer_net" { name = "%s" auto_create_subnetworks = false } -`, name, replicaCount, shardCount, lifecycleBlock, name, name, name) +`, name, replicaCount, shardCount, nodeType, lifecycleBlock, name, name, name) } diff --git a/google-beta/services/redis/resource_redis_instance.go b/google-beta/services/redis/resource_redis_instance.go index 63faa7ad3e..f8039ccc7f 100644 --- a/google-beta/services/redis/resource_redis_instance.go +++ b/google-beta/services/redis/resource_redis_instance.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "regexp" "strconv" @@ -730,6 +731,7 @@ func resourceRedisInstanceCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -738,6 +740,7 @@ func resourceRedisInstanceCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + 
Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Instance: %s", err) @@ -790,12 +793,14 @@ func resourceRedisInstanceRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("RedisInstance %q", d.Id())) @@ -1010,6 +1015,7 @@ func resourceRedisInstanceUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("auth_enabled") { @@ -1073,6 +1079,7 @@ func resourceRedisInstanceUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1106,6 +1113,8 @@ func resourceRedisInstanceUpdate(d *schema.ResourceData, meta interface{}) error return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -1119,6 +1128,7 @@ func resourceRedisInstanceUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Instance %q: %s", d.Id(), err) @@ -1166,6 +1176,8 @@ func resourceRedisInstanceDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1175,6 +1187,7 @@ func resourceRedisInstanceDelete(d *schema.ResourceData, meta interface{}) error 
UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Instance") diff --git a/google-beta/services/resourcemanager/data_source_google_active_folder.go b/google-beta/services/resourcemanager/data_source_google_active_folder.go index 1d759b91ae..bf7d4fadab 100644 --- a/google-beta/services/resourcemanager/data_source_google_active_folder.go +++ b/google-beta/services/resourcemanager/data_source_google_active_folder.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/verify" resourceManagerV3 "google.golang.org/api/cloudresourcemanager/v3" ) @@ -28,6 +29,13 @@ func DataSourceGoogleActiveFolder() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "api_method": { + Type: schema.TypeString, + Optional: true, + Description: "Provides the REST method through which to find the folder. 
LIST is recommended as it is strongly consistent.", + Default: "LIST", + ValidateFunc: verify.ValidateEnum([]string{"LIST", "SEARCH"}), + }, }, } } @@ -42,24 +50,43 @@ func dataSourceGoogleActiveFolderRead(d *schema.ResourceData, meta interface{}) var folderMatch *resourceManagerV3.Folder parent := d.Get("parent").(string) displayName := d.Get("display_name").(string) - token := "" + apiMethod := d.Get("api_method").(string) + + if apiMethod == "LIST" { + token := "" + + for paginate := true; paginate; { + resp, err := config.NewResourceManagerV3Client(userAgent).Folders.List().Parent(parent).PageSize(300).PageToken(token).Do() + if err != nil { + return fmt.Errorf("error reading folder list: %s", err) + } - for paginate := true; paginate; { - resp, err := config.NewResourceManagerV3Client(userAgent).Folders.List().Parent(parent).PageSize(300).PageToken(token).Do() + for _, folder := range resp.Folders { + if folder.DisplayName == displayName && folder.State == "ACTIVE" { + if folderMatch != nil { + return fmt.Errorf("more than one matching folder found") + } + folderMatch = folder + } + } + token = resp.NextPageToken + paginate = token != "" + } + } else { + queryString := fmt.Sprintf("lifecycleState=ACTIVE AND parent=%s AND displayName=\"%s\"", parent, displayName) + searchRequest := config.NewResourceManagerV3Client(userAgent).Folders.Search() + searchRequest.Query(queryString) + searchResponse, err := searchRequest.Do() if err != nil { - return fmt.Errorf("error reading folder list: %s", err) + return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Folder Not Found : %s", displayName)) } - for _, folder := range resp.Folders { - if folder.DisplayName == displayName && folder.State == "ACTIVE" { - if folderMatch != nil { - return fmt.Errorf("more than one matching folder found") - } + for _, folder := range searchResponse.Folders { + if folder.DisplayName == displayName { folderMatch = folder + break } } - token = resp.NextPageToken - paginate = token != 
"" } if folderMatch == nil { diff --git a/google-beta/services/resourcemanager/data_source_google_active_folder_test.go b/google-beta/services/resourcemanager/data_source_google_active_folder_test.go index 8313074617..d16376f84b 100644 --- a/google-beta/services/resourcemanager/data_source_google_active_folder_test.go +++ b/google-beta/services/resourcemanager/data_source_google_active_folder_test.go @@ -32,6 +32,29 @@ func TestAccDataSourceGoogleActiveFolder_default(t *testing.T) { }) } +func TestAccDataSourceGoogleActiveFolder_Search(t *testing.T) { + org := envvar.GetTestOrgFromEnv(t) + + parent := fmt.Sprintf("organizations/%s", org) + displayName := "tf-test-" + acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + ExternalProviders: map[string]resource.ExternalProvider{ + "time": {}, + }, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceGoogleActiveFolderConfig_Search(parent, displayName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceGoogleActiveFolderCheck("data.google_active_folder.my_folder", "google_folder.foobar"), + ), + }, + }, + }) +} + func TestAccDataSourceGoogleActiveFolder_space(t *testing.T) { org := envvar.GetTestOrgFromEnv(t) @@ -115,3 +138,26 @@ data "google_active_folder" "my_folder" { } `, parent, displayName) } + +func testAccDataSourceGoogleActiveFolderConfig_Search(parent string, displayName string) string { + return fmt.Sprintf(` +resource "google_folder" "foobar" { + parent = "%s" + display_name = "%s" +} + +# Wait after folder creation to limit eventual consistency errors. 
+resource "time_sleep" "wait_120_seconds" { + depends_on = [google_folder.foobar] + create_duration = "120s" +} + + +data "google_active_folder" "my_folder" { + depends_on = [time_sleep.wait_120_seconds] + parent = google_folder.foobar.parent + display_name = google_folder.foobar.display_name + api_method = "SEARCH" +} +`, parent, displayName) +} diff --git a/google-beta/services/resourcemanager/data_source_google_service_account_access_token_test.go b/google-beta/services/resourcemanager/data_source_google_service_account_access_token_test.go index 9221441600..426ee23075 100644 --- a/google-beta/services/resourcemanager/data_source_google_service_account_access_token_test.go +++ b/google-beta/services/resourcemanager/data_source_google_service_account_access_token_test.go @@ -36,7 +36,7 @@ func TestAccDataSourceGoogleServiceAccountAccessToken_basic(t *testing.T) { resourceName := "data.google_service_account_access_token.default" serviceAccount := envvar.GetTestServiceAccountFromEnv(t) - targetServiceAccountEmail := acctest.BootstrapServiceAccount(t, envvar.GetTestProjectFromEnv(), serviceAccount) + targetServiceAccountEmail := acctest.BootstrapServiceAccount(t, "acctoken", serviceAccount) acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, diff --git a/google-beta/services/resourcemanager/data_source_google_service_account_id_token_test.go b/google-beta/services/resourcemanager/data_source_google_service_account_id_token_test.go index cda4ec88db..7e737971c1 100644 --- a/google-beta/services/resourcemanager/data_source_google_service_account_id_token_test.go +++ b/google-beta/services/resourcemanager/data_source_google_service_account_id_token_test.go @@ -75,7 +75,7 @@ func TestAccDataSourceGoogleServiceAccountIdToken_impersonation(t *testing.T) { resourceName := "data.google_service_account_id_token.default" serviceAccount := envvar.GetTestServiceAccountFromEnv(t) - targetServiceAccountEmail := acctest.BootstrapServiceAccount(t, 
envvar.GetTestProjectFromEnv(), serviceAccount) + targetServiceAccountEmail := acctest.BootstrapServiceAccount(t, "idtoken-imp", serviceAccount) resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, diff --git a/google-beta/services/resourcemanager/data_source_google_service_account_jwt_test.go b/google-beta/services/resourcemanager/data_source_google_service_account_jwt_test.go index 9388af1580..8df73c1a4d 100644 --- a/google-beta/services/resourcemanager/data_source_google_service_account_jwt_test.go +++ b/google-beta/services/resourcemanager/data_source_google_service_account_jwt_test.go @@ -102,7 +102,7 @@ func TestAccDataSourceGoogleServiceAccountJwt(t *testing.T) { resourceName := "data.google_service_account_jwt.default" serviceAccount := envvar.GetTestServiceAccountFromEnv(t) - targetServiceAccountEmail := acctest.BootstrapServiceAccount(t, envvar.GetTestProjectFromEnv(), serviceAccount) + targetServiceAccountEmail := acctest.BootstrapServiceAccount(t, "jwt", serviceAccount) staticTime := time.Now() diff --git a/google-beta/services/resourcemanager/resource_google_project_iam_member_remove.go b/google-beta/services/resourcemanager/resource_google_project_iam_member_remove.go new file mode 100644 index 0000000000..dc09d60241 --- /dev/null +++ b/google-beta/services/resourcemanager/resource_google_project_iam_member_remove.go @@ -0,0 +1,134 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 +package resourcemanager + +import ( + "fmt" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgiamresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + cloudresourcemanager "google.golang.org/api/cloudresourcemanager/v1" +) + +func ResourceGoogleProjectIamMemberRemove() *schema.Resource { + return &schema.Resource{ + Create: resourceGoogleProjectIamMemberRemoveCreate, + Read: resourceGoogleProjectIamMemberRemoveRead, + Delete: resourceGoogleProjectIamMemberRemoveDelete, + + Schema: map[string]*schema.Schema{ + "project": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + Description: `The project id of the target project.`, + }, + "role": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + Description: `The target role that should be removed.`, + }, + "member": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + Description: `The IAM principal that should not have the target role.`, + }, + }, + UseJSONNumber: true, + } +} + +func resourceGoogleProjectIamMemberRemoveCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + + project := d.Get("project").(string) + role := d.Get("role").(string) + member := d.Get("member").(string) + + found := false + iamPolicy, err := config.NewResourceManagerClient(config.UserAgent).Projects.GetIamPolicy(project, + &cloudresourcemanager.GetIamPolicyRequest{ + Options: &cloudresourcemanager.GetPolicyOptions{ + RequestedPolicyVersion: tpgiamresource.IamPolicyVersion, + }, + }).Do() + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, d.Id()) + } + + for i := 0; i < len(iamPolicy.Bindings); i++ { + if role == iamPolicy.Bindings[i].Role { + for j := 0; j < len(iamPolicy.Bindings[i].Members); j++ { + if member == iamPolicy.Bindings[i].Members[j] { + found = true + 
iamPolicy.Bindings[i].Members = append(iamPolicy.Bindings[i].Members[:j], iamPolicy.Bindings[i].Members[j+1:]...) + break + } + } + } + + if !found { + fmt.Printf("[DEBUG] Could not find Member %s with the corresponding role %s. No removal necessary", member, role) + } else { + updateRequest := &cloudresourcemanager.SetIamPolicyRequest{ + Policy: iamPolicy, + UpdateMask: "bindings", + } + _, err = config.NewResourceManagerClient(config.UserAgent).Projects.SetIamPolicy(project, updateRequest).Do() + if err != nil { + return fmt.Errorf("cannot update IAM policy on project %s: %v", project, err) + } + } + + d.SetId(fmt.Sprintf("%s/%s/%s", project, member, role)) + + return resourceGoogleProjectIamMemberRemoveRead(d, meta) +} + +func resourceGoogleProjectIamMemberRemoveRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + + project := d.Get("project").(string) + role := d.Get("role").(string) + member := d.Get("member").(string) + + found := false + iamPolicy, err := config.NewResourceManagerClient(config.UserAgent).Projects.GetIamPolicy(project, + &cloudresourcemanager.GetIamPolicyRequest{ + Options: &cloudresourcemanager.GetPolicyOptions{ + RequestedPolicyVersion: tpgiamresource.IamPolicyVersion, + }, + }).Do() + if err != nil { + return transport_tpg.HandleNotFoundError(err, d, d.Id()) + } + + for i := 0; i < len(iamPolicy.Bindings); i++ { + if role == iamPolicy.Bindings[i].Role { + for j := 0; j < len(iamPolicy.Bindings[i].Members); j++ { + if member == iamPolicy.Bindings[i].Members[j] { + found = true + break + } + } + } + } + + if found { + fmt.Printf("[DEBUG] found membership in project's policy %v, removing from state", d.Id()) + d.SetId("") + } + + return nil +} + +func resourceGoogleProjectIamMemberRemoveDelete(d *schema.ResourceData, meta interface{}) error { + fmt.Printf("[DEBUG] clearing resource %v from state", d.Id()) + d.SetId("") + + return nil +} diff --git
a/google-beta/services/resourcemanager/resource_google_project_iam_member_remove_test.go b/google-beta/services/resourcemanager/resource_google_project_iam_member_remove_test.go new file mode 100644 index 0000000000..df7ff371ef --- /dev/null +++ b/google-beta/services/resourcemanager/resource_google_project_iam_member_remove_test.go @@ -0,0 +1,271 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package resourcemanager_test + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccProjectIamMemberRemove_basic(t *testing.T) { + t.Parallel() + + org := envvar.GetTestOrgFromEnv(t) + randomSuffix := acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckGoogleProjectIamCustomRoleDestroyProducer(t), + ExternalProviders: map[string]resource.ExternalProvider{ + "time": {}, + }, + Steps: []resource.TestStep{ + { + Config: testAccCheckGoogleProjectIamMemberRemove_basic(randomSuffix, org), + ExpectNonEmptyPlan: true, // Due to adding in binding, then removing in remove resource + }, + { + Config: testAccCheckGoogleProjectIamMemberRemove_basic2(randomSuffix, org), + PlanOnly: true, // binding expects the membership to be removed. Any diff will fail the test due to PlanOnly. 
+ }, + }, + }) +} + +func TestAccProjectIamMemberRemove_multipleMembersInBinding(t *testing.T) { + t.Parallel() + + org := envvar.GetTestOrgFromEnv(t) + randomSuffix := acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckGoogleProjectIamCustomRoleDestroyProducer(t), + ExternalProviders: map[string]resource.ExternalProvider{ + "time": {}, + }, + Steps: []resource.TestStep{ + { + Config: testAccCheckGoogleProjectIamMemberRemove_multipleMemberBinding(randomSuffix, org), + ExpectNonEmptyPlan: true, // Due to adding in binding, then removing in remove resource + }, + { + Config: testAccCheckGoogleProjectIamMemberRemove_multipleMemberBinding2(randomSuffix, org), + PlanOnly: true, // binding expects the membership to be removed. Any diff will fail the test due to PlanOnly. + }, + }, + }) +} + +func TestAccProjectIamMemberRemove_memberInMultipleBindings(t *testing.T) { + t.Parallel() + + org := envvar.GetTestOrgFromEnv(t) + randomSuffix := acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckGoogleProjectIamCustomRoleDestroyProducer(t), + ExternalProviders: map[string]resource.ExternalProvider{ + "time": {}, + }, + Steps: []resource.TestStep{ + { + Config: testAccProjectIamMemberRemove_memberInMultipleBindings(randomSuffix, org), + ExpectNonEmptyPlan: true, // Due to adding in binding, then removing in remove resource + }, + { + Config: testAccProjectIamMemberRemove_memberInMultipleBindings2(randomSuffix, org), + PlanOnly: true, // binding expects the membership to be removed. Any diff will fail the test due to PlanOnly.
+ }, + }, + }) +} + +func testAccCheckGoogleProjectIamMemberRemove_basic(randomSuffix, org string) string { + return fmt.Sprintf(` +resource "google_project" "project" { + project_id = "tf-test-%s" + name = "tf-test-%s" + org_id = "%s" +} + +resource "google_project_iam_binding" "bar" { + project = google_project.project.project_id + members = ["user:gterraformtest1@gmail.com"] + role = "roles/editor" +} + +resource "time_sleep" "wait_20s" { + depends_on = [google_project_iam_binding.bar] + create_duration = "20s" +} + +resource "google_project_iam_member_remove" "foo" { + role = "roles/editor" + project = google_project.project.project_id + member = "user:gterraformtest1@gmail.com" + depends_on = [time_sleep.wait_20s] +} +`, randomSuffix, randomSuffix, org) +} + +func testAccCheckGoogleProjectIamMemberRemove_basic2(randomSuffix, org string) string { + return fmt.Sprintf(` +resource "google_project" "project" { + project_id = "tf-test-%s" + name = "tf-test-%s" + org_id = "%s" +} + +resource "google_project_iam_binding" "bar" { + project = google_project.project.project_id + members = [] + role = "roles/editor" +} + +resource "time_sleep" "wait_20s" { + depends_on = [google_project_iam_binding.bar] + create_duration = "20s" +} + +resource "google_project_iam_member_remove" "foo" { + role = "roles/editor" + project = google_project.project.project_id + member = "user:gterraformtest1@gmail.com" + depends_on = [time_sleep.wait_20s] +} +`, randomSuffix, randomSuffix, org) +} + +func testAccCheckGoogleProjectIamMemberRemove_multipleMemberBinding(random_suffix, org string) string { + return fmt.Sprintf(` +resource "google_project" "project" { + project_id = "tf-test-%s" + name = "tf-test-%s" + org_id = "%s" +} + +resource "google_project_iam_binding" "bar" { + project = google_project.project.project_id + members = ["user:gterraformtest1@gmail.com", "user:gterraformtest2@gmail.com"] + role = "roles/editor" +} + +resource "time_sleep" "wait_20s" { + depends_on = 
[google_project_iam_binding.bar] + create_duration = "20s" +} + +resource "google_project_iam_member_remove" "foo" { + role = "roles/editor" + project = google_project.project.project_id + member = "user:gterraformtest1@gmail.com" + depends_on = [time_sleep.wait_20s] +} +`, random_suffix, random_suffix, org) +} + +func testAccCheckGoogleProjectIamMemberRemove_multipleMemberBinding2(random_suffix, org string) string { + return fmt.Sprintf(` +resource "google_project" "project" { + project_id = "tf-test-%s" + name = "tf-test-%s" + org_id = "%s" +} + +resource "google_project_iam_binding" "bar" { + project = google_project.project.project_id + members = ["user:gterraformtest2@gmail.com"] + role = "roles/editor" +} + +resource "time_sleep" "wait_20s" { + depends_on = [google_project_iam_binding.bar] + create_duration = "20s" +} + +resource "google_project_iam_member_remove" "foo" { + role = "roles/editor" + project = google_project.project.project_id + member = "user:gterraformtest1@gmail.com" + depends_on = [time_sleep.wait_20s] +} +`, random_suffix, random_suffix, org) +} + +func testAccProjectIamMemberRemove_memberInMultipleBindings(random_suffix, org string) string { + return fmt.Sprintf(` +resource "google_project" "project" { + project_id = "tf-test-%s" + name = "tf-test-%s" + org_id = "%s" +} + +resource "google_project_iam_binding" "bar" { + project = google_project.project.project_id + members = ["user:gterraformtest1@gmail.com"] + role = "roles/editor" +} + +resource "google_project_iam_binding" "baz" { + project = google_project.project.project_id + members = ["user:gterraformtest1@gmail.com"] + role = "roles/viewer" +} + +resource "time_sleep" "wait_20s" { + depends_on = [google_project_iam_binding.bar, google_project_iam_binding.baz] + create_duration = "20s" +} + +resource "google_project_iam_member_remove" "foo" { + role = "roles/editor" + project = google_project.project.project_id + member = "user:gterraformtest1@gmail.com" + depends_on = 
[time_sleep.wait_20s] +} +`, random_suffix, random_suffix, org) +} + +func testAccProjectIamMemberRemove_memberInMultipleBindings2(random_suffix, org string) string { + return fmt.Sprintf(` +resource "google_project" "project" { + project_id = "tf-test-%s" + name = "tf-test-%s" + org_id = "%s" +} + +resource "google_project_iam_binding" "bar" { + project = google_project.project.project_id + members = [] + role = "roles/editor" +} + +resource "google_project_iam_binding" "baz" { + project = google_project.project.project_id + members = ["user:gterraformtest1@gmail.com"] + role = "roles/viewer" +} + +resource "time_sleep" "wait_20s" { + depends_on = [google_project_iam_binding.bar, google_project_iam_binding.baz] + create_duration = "20s" +} + +resource "google_project_iam_member_remove" "foo" { + role = "roles/editor" + project = google_project.project.project_id + member = "user:gterraformtest1@gmail.com" + depends_on = [time_sleep.wait_20s] +} +`, random_suffix, random_suffix, org) +} diff --git a/google-beta/services/resourcemanager/resource_google_service_account.go b/google-beta/services/resourcemanager/resource_google_service_account.go index 5ec39ea2fb..63b79347e5 100644 --- a/google-beta/services/resourcemanager/resource_google_service_account.go +++ b/google-beta/services/resourcemanager/resource_google_service_account.go @@ -268,21 +268,16 @@ func resourceGoogleServiceAccountUpdate(d *schema.ResourceData, meta interface{} if err != nil { return err } - - if len(updateMask) == 0 { - return nil - } - } else if d.HasChange("disabled") && d.Get("disabled").(bool) { _, err = config.NewIamClient(userAgent).Projects.ServiceAccounts.Disable(d.Id(), &iam.DisableServiceAccountRequest{}).Do() if err != nil { return err } + } - if len(updateMask) == 0 { - return nil - } + if len(updateMask) == 0 { + return nil } _, err = config.NewIamClient(userAgent).Projects.ServiceAccounts.Patch(d.Id(), diff --git 
a/google-beta/services/resourcemanager/resource_google_service_account_test.go b/google-beta/services/resourcemanager/resource_google_service_account_test.go index ed95988335..4c902847ce 100644 --- a/google-beta/services/resourcemanager/resource_google_service_account_test.go +++ b/google-beta/services/resourcemanager/resource_google_service_account_test.go @@ -121,7 +121,7 @@ func TestAccServiceAccount_createIgnoreAlreadyExists(t *testing.T) { }, // The second step creates a new resource that duplicates with the existing service account. { - Config: testAccServiceAccountCreateIgnoreAlreadyExists(accountId, displayName, desc), + Config: testAccServiceAccountDuplicateIgnoreAlreadyExists(accountId, displayName, desc), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( "google_service_account.duplicate", "member", "serviceAccount:"+expectedEmail), @@ -131,6 +131,50 @@ func TestAccServiceAccount_createIgnoreAlreadyExists(t *testing.T) { }) } +// Test setting create_ignore_already_exists on an existing resource +func TestAccServiceAccount_existingResourceCreateIgnoreAlreadyExists(t *testing.T) { + t.Parallel() + + project := envvar.GetTestProjectFromEnv() + accountId := "a" + acctest.RandString(t, 10) + displayName := "Terraform Test" + desc := "test description" + + expectedEmail := fmt.Sprintf("%s@%s.iam.gserviceaccount.com", accountId, project) + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Steps: []resource.TestStep{ + // The first step creates a new resource with create_ignore_already_exists=false + { + Config: testAccServiceAccountCreateIgnoreAlreadyExists(accountId, displayName, desc, false), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "google_service_account.acceptance", "project", project), + resource.TestCheckResourceAttr( + "google_service_account.acceptance", "member", 
"serviceAccount:"+expectedEmail), + ), + }, + { + ResourceName: "google_service_account.acceptance", + ImportStateId: fmt.Sprintf("projects/%s/serviceAccounts/%s", project, expectedEmail), + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"create_ignore_already_exists"}, // Import leaves this field out when false + }, + // The second step updates the resource to have create_ignore_already_exists=true + { + Config: testAccServiceAccountCreateIgnoreAlreadyExists(accountId, displayName, desc, true), + Check: resource.ComposeTestCheckFunc(resource.TestCheckResourceAttr( + "google_service_account.acceptance", "project", project), + resource.TestCheckResourceAttr( + "google_service_account.acceptance", "member", "serviceAccount:"+expectedEmail), + ), + }, + }, + }) +} + func TestAccServiceAccount_Disabled(t *testing.T) { t.Parallel() @@ -209,7 +253,18 @@ resource "google_service_account" "acceptance" { `, account, name, desc) } -func testAccServiceAccountCreateIgnoreAlreadyExists(account, name, desc string) string { +func testAccServiceAccountCreateIgnoreAlreadyExists(account, name, desc string, ignore_already_exists bool) string { + return fmt.Sprintf(` +resource "google_service_account" "acceptance" { + account_id = "%v" + display_name = "%v" + description = "%v" + create_ignore_already_exists = %t +} +`, account, name, desc, ignore_already_exists) +} + +func testAccServiceAccountDuplicateIgnoreAlreadyExists(account, name, desc string) string { return fmt.Sprintf(` resource "google_service_account" "acceptance" { account_id = "%v" diff --git a/google-beta/services/resourcemanager/resource_resource_manager_lien.go b/google-beta/services/resourcemanager/resource_resource_manager_lien.go index 9bfde40432..04edf3ca72 100644 --- a/google-beta/services/resourcemanager/resource_resource_manager_lien.go +++ b/google-beta/services/resourcemanager/resource_resource_manager_lien.go @@ -20,6 +20,7 @@ package resourcemanager import ( "fmt" "log" + 
"net/http" "reflect" "strings" "time" @@ -144,6 +145,7 @@ func resourceResourceManagerLienCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -152,6 +154,7 @@ func resourceResourceManagerLienCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Lien: %s", err) @@ -203,12 +206,14 @@ func resourceResourceManagerLienRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ResourceManagerLien %q", d.Id())) @@ -281,6 +286,7 @@ func resourceResourceManagerLienDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) // log the old URL to make the ineffassign linter happy // in theory, we should find a way to disable the default URL and not construct // both, but that's a problem for another day. Today, we cheat. 
@@ -299,6 +305,7 @@ func resourceResourceManagerLienDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Lien") diff --git a/google-beta/services/secretmanager/resource_secret_manager_secret.go b/google-beta/services/secretmanager/resource_secret_manager_secret.go index f329404f65..3e0736ab5c 100644 --- a/google-beta/services/secretmanager/resource_secret_manager_secret.go +++ b/google-beta/services/secretmanager/resource_secret_manager_secret.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -286,6 +287,15 @@ An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, Elem: &schema.Schema{Type: schema.TypeString}, }, + "version_destroy_ttl": { + Type: schema.TypeString, + Optional: true, + Description: `Secret Version TTL after destruction request. +This is a part of the delayed delete feature on Secret Version. 
+For secrets with versionDestroyTtl > 0, version destruction doesn't happen immediately +on calling destroy; instead, the version goes to a disabled state and +the actual destruction happens after this TTL expires.`,`, "create_time": { Type: schema.TypeString, Computed: true, @@ -341,6 +351,12 @@ func resourceSecretManagerSecretCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("version_aliases"); !tpgresource.IsEmptyValue(reflect.ValueOf(versionAliasesProp)) && (ok || !reflect.DeepEqual(v, versionAliasesProp)) { obj["versionAliases"] = versionAliasesProp } + versionDestroyTtlProp, err := expandSecretManagerSecretVersionDestroyTtl(d.Get("version_destroy_ttl"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("version_destroy_ttl"); !tpgresource.IsEmptyValue(reflect.ValueOf(versionDestroyTtlProp)) && (ok || !reflect.DeepEqual(v, versionDestroyTtlProp)) { + obj["versionDestroyTtl"] = versionDestroyTtlProp + } replicationProp, err := expandSecretManagerSecretReplication(d.Get("replication"), d, config) if err != nil { return err @@ -403,6 +419,7 @@ func resourceSecretManagerSecretCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -411,6 +428,7 @@ func resourceSecretManagerSecretCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Secret: %s", err) @@ -456,12 +474,14 @@ func resourceSecretManagerSecretRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return
transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecretManagerSecret %q", d.Id())) @@ -486,6 +506,9 @@ func resourceSecretManagerSecretRead(d *schema.ResourceData, meta interface{}) e if err := d.Set("version_aliases", flattenSecretManagerSecretVersionAliases(res["versionAliases"], d, config)); err != nil { return fmt.Errorf("Error reading Secret: %s", err) } + if err := d.Set("version_destroy_ttl", flattenSecretManagerSecretVersionDestroyTtl(res["versionDestroyTtl"], d, config)); err != nil { + return fmt.Errorf("Error reading Secret: %s", err) + } if err := d.Set("replication", flattenSecretManagerSecretReplication(res["replication"], d, config)); err != nil { return fmt.Errorf("Error reading Secret: %s", err) } @@ -533,6 +556,12 @@ func resourceSecretManagerSecretUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("version_aliases"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, versionAliasesProp)) { obj["versionAliases"] = versionAliasesProp } + versionDestroyTtlProp, err := expandSecretManagerSecretVersionDestroyTtl(d.Get("version_destroy_ttl"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("version_destroy_ttl"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, versionDestroyTtlProp)) { + obj["versionDestroyTtl"] = versionDestroyTtlProp + } topicsProp, err := expandSecretManagerSecretTopics(d.Get("topics"), d, config) if err != nil { return err @@ -576,12 +605,17 @@ func resourceSecretManagerSecretUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Secret %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("version_aliases") { updateMask = append(updateMask, "versionAliases") } + if d.HasChange("version_destroy_ttl") { + updateMask = append(updateMask, "versionDestroyTtl") + } + if d.HasChange("topics") { updateMask = append(updateMask, "topics") } @@ -650,6 
+684,7 @@ func resourceSecretManagerSecretUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -690,6 +725,8 @@ func resourceSecretManagerSecretDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Secret %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -699,6 +736,7 @@ func resourceSecretManagerSecretDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Secret") @@ -770,6 +808,10 @@ func flattenSecretManagerSecretVersionAliases(v interface{}, d *schema.ResourceD return v } +func flattenSecretManagerSecretVersionDestroyTtl(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenSecretManagerSecretReplication(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil @@ -948,6 +990,10 @@ func expandSecretManagerSecretVersionAliases(v interface{}, d tpgresource.Terraf return m, nil } +func expandSecretManagerSecretVersionDestroyTtl(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandSecretManagerSecretReplication(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { diff --git a/google-beta/services/secretmanager/resource_secret_manager_secret_generated_test.go b/google-beta/services/secretmanager/resource_secret_manager_secret_generated_test.go index e2c4ae96ea..b314a006e8 100644 --- a/google-beta/services/secretmanager/resource_secret_manager_secret_generated_test.go +++ 
b/google-beta/services/secretmanager/resource_secret_manager_secret_generated_test.go @@ -127,6 +127,45 @@ resource "google_secret_manager_secret" "secret-with-annotations" { `, context) } +func TestAccSecretManagerSecret_secretWithVersionDestroyTtlExample(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckSecretManagerSecretDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccSecretManagerSecret_secretWithVersionDestroyTtlExample(context), + }, + { + ResourceName: "google_secret_manager_secret.secret-with-version-destroy-ttl", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"ttl", "secret_id", "labels", "annotations", "terraform_labels"}, + }, + }, + }) +} + +func testAccSecretManagerSecret_secretWithVersionDestroyTtlExample(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_secret_manager_secret" "secret-with-version-destroy-ttl" { + secret_id = "secret%{random_suffix}" + + version_destroy_ttl = "2592000s" + + replication { + auto {} + } +} +`, context) +} + func TestAccSecretManagerSecret_secretWithAutomaticCmekExample(t *testing.T) { t.Parallel() diff --git a/google-beta/services/secretmanager/resource_secret_manager_secret_test.go b/google-beta/services/secretmanager/resource_secret_manager_secret_test.go index 6cdb892b8a..3905bbf1f1 100644 --- a/google-beta/services/secretmanager/resource_secret_manager_secret_test.go +++ b/google-beta/services/secretmanager/resource_secret_manager_secret_test.go @@ -380,6 +380,49 @@ func TestAccSecretManagerSecret_ttlUpdate(t *testing.T) { }) } +func TestAccSecretManagerSecret_versionDestroyTtlUpdate(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": 
acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckSecretManagerSecretDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccSecretManagerSecret_withoutVersionDestroyTtl(context), + }, + { + ResourceName: "google_secret_manager_secret.secret-basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, + }, + { + Config: testAccSecretManagerSecret_versionDestroyTtlUpdate(context), + }, + { + ResourceName: "google_secret_manager_secret.secret-basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, + }, + { + Config: testAccSecretManagerSecret_withoutVersionDestroyTtl(context), + }, + { + ResourceName: "google_secret_manager_secret.secret-basic", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, + }, + }, + }) +} + func TestAccSecretManagerSecret_updateBetweenTtlAndExpireTime(t *testing.T) { t.Parallel() @@ -1105,6 +1148,55 @@ resource "google_secret_manager_secret" "secret-basic" { `, context) } +func testAccSecretManagerSecret_withoutVersionDestroyTtl(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_secret_manager_secret" "secret-basic" { + secret_id = "tf-test-secret-%{random_suffix}" + + labels = { + label = "my-label" + } + + replication { + user_managed { + replicas { + location = "us-central1" + } + replicas { + location = "us-east1" + } + } + } +} +`, context) +} + +func testAccSecretManagerSecret_versionDestroyTtlUpdate(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_secret_manager_secret" "secret-basic" { + secret_id = "tf-test-secret-%{random_suffix}" + + labels = { + label = "my-label" + } + + 
replication { + user_managed { + replicas { + location = "us-central1" + } + replicas { + location = "us-east1" + } + } + } + + version_destroy_ttl = "86400s" + +} +`, context) +} + func testAccSecretManagerSecret_expireTime(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_secret_manager_secret" "secret-basic" { diff --git a/google-beta/services/secretmanager/resource_secret_manager_secret_version.go b/google-beta/services/secretmanager/resource_secret_manager_secret_version.go index 53014c7a90..0522673db2 100644 --- a/google-beta/services/secretmanager/resource_secret_manager_secret_version.go +++ b/google-beta/services/secretmanager/resource_secret_manager_secret_version.go @@ -21,6 +21,7 @@ import ( "encoding/base64" "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -151,6 +152,7 @@ func resourceSecretManagerSecretVersionCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -159,6 +161,7 @@ func resourceSecretManagerSecretVersionCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating SecretVersion: %s", err) @@ -213,6 +216,7 @@ func resourceSecretManagerSecretVersionRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) // Explicitly set the field to default value if unset if _, ok := d.GetOkExists("is_secret_data_base64"); !ok { if err := d.Set("is_secret_data_base64", false); err != nil { @@ -225,6 +229,7 @@ func resourceSecretManagerSecretVersionRead(d *schema.ResourceData, meta interfa Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecretManagerSecretVersion %q", d.Id())) @@ -314,6 +319,7 
@@ func resourceSecretManagerSecretVersionDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) deletionPolicy := d.Get("deletion_policy") if deletionPolicy == "ABANDON" { @@ -334,6 +340,7 @@ func resourceSecretManagerSecretVersionDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "SecretVersion") diff --git a/google-beta/services/securesourcemanager/resource_secure_source_manager_instance.go b/google-beta/services/securesourcemanager/resource_secure_source_manager_instance.go index 679ce71eef..21ef3af21f 100644 --- a/google-beta/services/securesourcemanager/resource_secure_source_manager_instance.go +++ b/google-beta/services/securesourcemanager/resource_secure_source_manager_instance.go @@ -20,6 +20,7 @@ package securesourcemanager import ( "fmt" "log" + "net/http" "reflect" "time" @@ -239,6 +240,7 @@ func resourceSecureSourceManagerInstanceCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -247,6 +249,7 @@ func resourceSecureSourceManagerInstanceCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Instance: %s", err) @@ -299,12 +302,14 @@ func resourceSecureSourceManagerInstanceRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecureSourceManagerInstance %q", d.Id())) @@ -383,6 +388,8 @@ 
func resourceSecureSourceManagerInstanceDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -392,6 +399,7 @@ func resourceSecureSourceManagerInstanceDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Instance") diff --git a/google-beta/services/securitycenter/resource_scc_event_threat_detection_custom_module.go b/google-beta/services/securitycenter/resource_scc_event_threat_detection_custom_module.go index aedb8cb471..be28bcbe55 100644 --- a/google-beta/services/securitycenter/resource_scc_event_threat_detection_custom_module.go +++ b/google-beta/services/securitycenter/resource_scc_event_threat_detection_custom_module.go @@ -21,6 +21,7 @@ import ( "encoding/json" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -160,6 +161,7 @@ func resourceSecurityCenterEventThreatDetectionCustomModuleCreate(d *schema.Reso billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -168,6 +170,7 @@ func resourceSecurityCenterEventThreatDetectionCustomModuleCreate(d *schema.Reso UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating EventThreatDetectionCustomModule: %s", err) @@ -207,12 +210,14 @@ func resourceSecurityCenterEventThreatDetectionCustomModuleRead(d *schema.Resour billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityCenterEventThreatDetectionCustomModule %q", d.Id())) @@ -285,6 +290,7 @@ func resourceSecurityCenterEventThreatDetectionCustomModuleUpdate(d *schema.Reso } log.Printf("[DEBUG] Updating EventThreatDetectionCustomModule %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("config") { @@ -320,6 +326,7 @@ func resourceSecurityCenterEventThreatDetectionCustomModuleUpdate(d *schema.Reso UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -361,6 +368,8 @@ func resourceSecurityCenterEventThreatDetectionCustomModuleDelete(d *schema.Reso billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting EventThreatDetectionCustomModule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -370,6 +379,7 @@ func resourceSecurityCenterEventThreatDetectionCustomModuleDelete(d *schema.Reso UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "EventThreatDetectionCustomModule") diff --git a/google-beta/services/securitycenter/resource_scc_folder_custom_module.go b/google-beta/services/securitycenter/resource_scc_folder_custom_module.go index 83674ad9a9..080da5bbf4 100644 --- a/google-beta/services/securitycenter/resource_scc_folder_custom_module.go +++ b/google-beta/services/securitycenter/resource_scc_folder_custom_module.go @@ -20,6 +20,7 @@ package securitycenter import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -291,6 +292,7 @@ func resourceSecurityCenterFolderCustomModuleCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -299,6 +301,7 @@ func 
resourceSecurityCenterFolderCustomModuleCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FolderCustomModule: %s", err) @@ -338,12 +341,14 @@ func resourceSecurityCenterFolderCustomModuleRead(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityCenterFolderCustomModule %q", d.Id())) @@ -410,6 +415,7 @@ func resourceSecurityCenterFolderCustomModuleUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating FolderCustomModule %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("enablement_state") { @@ -441,6 +447,7 @@ func resourceSecurityCenterFolderCustomModuleUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -482,6 +489,8 @@ func resourceSecurityCenterFolderCustomModuleDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting FolderCustomModule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -491,6 +500,7 @@ func resourceSecurityCenterFolderCustomModuleDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FolderCustomModule") diff --git a/google-beta/services/securitycenter/resource_scc_mute_config.go b/google-beta/services/securitycenter/resource_scc_mute_config.go index 7ad223825a..1238189aad 100644 --- 
a/google-beta/services/securitycenter/resource_scc_mute_config.go +++ b/google-beta/services/securitycenter/resource_scc_mute_config.go @@ -20,6 +20,7 @@ package securitycenter import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -144,6 +145,7 @@ func resourceSecurityCenterMuteConfigCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -152,6 +154,7 @@ func resourceSecurityCenterMuteConfigCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating MuteConfig: %s", err) @@ -191,12 +194,14 @@ func resourceSecurityCenterMuteConfigRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityCenterMuteConfig %q", d.Id())) @@ -253,6 +258,7 @@ func resourceSecurityCenterMuteConfigUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating MuteConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -284,6 +290,7 @@ func resourceSecurityCenterMuteConfigUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -318,6 +325,8 @@ func resourceSecurityCenterMuteConfigDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting MuteConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ 
-327,6 +336,7 @@ func resourceSecurityCenterMuteConfigDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "MuteConfig") diff --git a/google-beta/services/securitycenter/resource_scc_notification_config.go b/google-beta/services/securitycenter/resource_scc_notification_config.go index 99af790aa2..f0d134cfbf 100644 --- a/google-beta/services/securitycenter/resource_scc_notification_config.go +++ b/google-beta/services/securitycenter/resource_scc_notification_config.go @@ -20,6 +20,7 @@ package securitycenter import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -170,6 +171,7 @@ func resourceSecurityCenterNotificationConfigCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -178,6 +180,7 @@ func resourceSecurityCenterNotificationConfigCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NotificationConfig: %s", err) @@ -235,12 +238,14 @@ func resourceSecurityCenterNotificationConfigRead(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityCenterNotificationConfig %q", d.Id())) @@ -300,6 +305,7 @@ func resourceSecurityCenterNotificationConfigUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating NotificationConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ 
-335,6 +341,7 @@ func resourceSecurityCenterNotificationConfigUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -369,6 +376,8 @@ func resourceSecurityCenterNotificationConfigDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NotificationConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -378,6 +387,7 @@ func resourceSecurityCenterNotificationConfigDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NotificationConfig") diff --git a/google-beta/services/securitycenter/resource_scc_organization_custom_module.go b/google-beta/services/securitycenter/resource_scc_organization_custom_module.go index 12714b6acc..a72c9d2173 100644 --- a/google-beta/services/securitycenter/resource_scc_organization_custom_module.go +++ b/google-beta/services/securitycenter/resource_scc_organization_custom_module.go @@ -20,6 +20,7 @@ package securitycenter import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -291,6 +292,7 @@ func resourceSecurityCenterOrganizationCustomModuleCreate(d *schema.ResourceData billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -299,6 +301,7 @@ func resourceSecurityCenterOrganizationCustomModuleCreate(d *schema.ResourceData UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating OrganizationCustomModule: %s", err) @@ -338,12 +341,14 @@ func resourceSecurityCenterOrganizationCustomModuleRead(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityCenterOrganizationCustomModule %q", d.Id())) @@ -410,6 +415,7 @@ func resourceSecurityCenterOrganizationCustomModuleUpdate(d *schema.ResourceData } log.Printf("[DEBUG] Updating OrganizationCustomModule %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("enablement_state") { @@ -441,6 +447,7 @@ func resourceSecurityCenterOrganizationCustomModuleUpdate(d *schema.ResourceData UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -482,6 +489,8 @@ func resourceSecurityCenterOrganizationCustomModuleDelete(d *schema.ResourceData billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting OrganizationCustomModule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -491,6 +500,7 @@ func resourceSecurityCenterOrganizationCustomModuleDelete(d *schema.ResourceData UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "OrganizationCustomModule") diff --git a/google-beta/services/securitycenter/resource_scc_project_custom_module.go b/google-beta/services/securitycenter/resource_scc_project_custom_module.go index 869a660ff4..d13da3120f 100644 --- a/google-beta/services/securitycenter/resource_scc_project_custom_module.go +++ b/google-beta/services/securitycenter/resource_scc_project_custom_module.go @@ -20,6 +20,7 @@ package securitycenter import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -302,6 +303,7 @@ func resourceSecurityCenterProjectCustomModuleCreate(d *schema.ResourceData, met billingProject = bp } + headers := 
make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -310,6 +312,7 @@ func resourceSecurityCenterProjectCustomModuleCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ProjectCustomModule: %s", err) @@ -355,12 +358,14 @@ func resourceSecurityCenterProjectCustomModuleRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityCenterProjectCustomModule %q", d.Id())) @@ -437,6 +442,7 @@ func resourceSecurityCenterProjectCustomModuleUpdate(d *schema.ResourceData, met } log.Printf("[DEBUG] Updating ProjectCustomModule %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("enablement_state") { @@ -468,6 +474,7 @@ func resourceSecurityCenterProjectCustomModuleUpdate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -515,6 +522,8 @@ func resourceSecurityCenterProjectCustomModuleDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ProjectCustomModule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -524,6 +533,7 @@ func resourceSecurityCenterProjectCustomModuleDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ProjectCustomModule") diff --git 
a/google-beta/services/securitycenter/resource_scc_source.go b/google-beta/services/securitycenter/resource_scc_source.go index 34eca9a9f0..67c08487b9 100644 --- a/google-beta/services/securitycenter/resource_scc_source.go +++ b/google-beta/services/securitycenter/resource_scc_source.go @@ -20,6 +20,7 @@ package securitycenter import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -118,6 +119,7 @@ func resourceSecurityCenterSourceCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -126,6 +128,7 @@ func resourceSecurityCenterSourceCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Source: %s", err) @@ -183,12 +186,14 @@ func resourceSecurityCenterSourceRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityCenterSource %q", d.Id())) @@ -236,6 +241,7 @@ func resourceSecurityCenterSourceUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating Source %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -267,6 +273,7 @@ func resourceSecurityCenterSourceUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/securityposture/resource_securityposture_posture.go b/google-beta/services/securityposture/resource_securityposture_posture.go 
index 0c489b3c17..c298752ded 100644 --- a/google-beta/services/securityposture/resource_securityposture_posture.go +++ b/google-beta/services/securityposture/resource_securityposture_posture.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -679,6 +680,7 @@ func resourceSecurityposturePostureCreate(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -687,6 +689,7 @@ func resourceSecurityposturePostureCreate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Posture: %s", err) @@ -733,12 +736,14 @@ func resourceSecurityposturePostureRead(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityposturePosture %q", d.Id())) @@ -816,6 +821,7 @@ func resourceSecurityposturePostureUpdate(d *schema.ResourceData, meta interface } log.Printf("[DEBUG] Updating Posture %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("state") { @@ -855,6 +861,7 @@ func resourceSecurityposturePostureUpdate(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -896,6 +903,8 @@ func resourceSecurityposturePostureDelete(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Posture %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -905,6 +914,7 @@ func resourceSecurityposturePostureDelete(d *schema.ResourceData, meta interface UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Posture") diff --git a/google-beta/services/securityposture/resource_securityposture_posture_deployment.go b/google-beta/services/securityposture/resource_securityposture_posture_deployment.go index dd1f1d3f7d..68c4050fdc 100644 --- a/google-beta/services/securityposture/resource_securityposture_posture_deployment.go +++ b/google-beta/services/securityposture/resource_securityposture_posture_deployment.go @@ -20,6 +20,7 @@ package securityposture import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -195,6 +196,7 @@ func resourceSecurityposturePostureDeploymentCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -203,6 +205,7 @@ func resourceSecurityposturePostureDeploymentCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating PostureDeployment: %s", err) @@ -249,12 +252,14 @@ func resourceSecurityposturePostureDeploymentRead(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityposturePostureDeployment %q", d.Id())) @@ -338,6 +343,7 @@ func resourceSecurityposturePostureDeploymentUpdate(d *schema.ResourceData, meta } log.Printf("[DEBUG] Updating 
PostureDeployment %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("posture_id") { @@ -373,6 +379,7 @@ func resourceSecurityposturePostureDeploymentUpdate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -414,6 +421,8 @@ func resourceSecurityposturePostureDeploymentDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PostureDeployment %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -423,6 +432,7 @@ func resourceSecurityposturePostureDeploymentDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "PostureDeployment") diff --git a/google-beta/services/securityposture/resource_securityposture_posture_deployment_generated_test.go b/google-beta/services/securityposture/resource_securityposture_posture_deployment_generated_test.go index 4e524afddc..2073f5bee8 100644 --- a/google-beta/services/securityposture/resource_securityposture_posture_deployment_generated_test.go +++ b/google-beta/services/securityposture/resource_securityposture_posture_deployment_generated_test.go @@ -61,14 +61,14 @@ func TestAccSecurityposturePostureDeployment_securityposturePostureDeploymentBas func testAccSecurityposturePostureDeployment_securityposturePostureDeploymentBasicExample(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_securityposture_posture" "posture_1" { - posture_id = "posture_1" - parent = "organizations/%{org_id}" - location = "global" - state = "ACTIVE" + posture_id = "tf_test_posture_1%{random_suffix}" + parent = "organizations/%{org_id}" + location = "global" + state = "ACTIVE" description = "a new posture" policy_sets { policy_set_id = 
"org_policy_set" - description = "set of org policies" + description = "set of org policies" policies { policy_id = "policy_1" constraint { @@ -84,13 +84,13 @@ resource "google_securityposture_posture" "posture_1" { } resource "google_securityposture_posture_deployment" "postureDeployment" { - posture_deployment_id = "posture_deployment_1" - parent = "organizations/%{org_id}" - location = "global" - description = "a new posture deployment" - target_resource = "projects/%{project_number}" - posture_id = google_securityposture_posture.posture_1.name - posture_revision_id = google_securityposture_posture.posture_1.revision_id + posture_deployment_id = "tf_test_posture_deployment_1%{random_suffix}" + parent = "organizations/%{org_id}" + location = "global" + description = "a new posture deployment" + target_resource = "projects/%{project_number}" + posture_id = google_securityposture_posture.posture_1.name + posture_revision_id = google_securityposture_posture.posture_1.revision_id } `, context) } diff --git a/google-beta/services/securityposture/resource_securityposture_posture_generated_test.go b/google-beta/services/securityposture/resource_securityposture_posture_generated_test.go index 09061b51d4..3c63037e51 100644 --- a/google-beta/services/securityposture/resource_securityposture_posture_generated_test.go +++ b/google-beta/services/securityposture/resource_securityposture_posture_generated_test.go @@ -60,7 +60,7 @@ func TestAccSecurityposturePosture_securityposturePostureBasicExample(t *testing func testAccSecurityposturePosture_securityposturePostureBasicExample(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_securityposture_posture" "posture1"{ - posture_id = "posture_example" + posture_id = "tf_test_posture_example%{random_suffix}" parent = "organizations/%{org_id}" location = "global" state = "ACTIVE" @@ -77,8 +77,8 @@ resource "google_securityposture_posture" "posture1"{ enforce = true condition { description = "condition 
description" - expression = "resource.matchTag('org_id/tag_key_short_name,'tag_value_short_name')" - title = "a CEL condition" + expression = "resource.matchTag('org_id/tag_key_short_name,'tag_value_short_name')" + title = "a CEL condition" } } } @@ -89,9 +89,9 @@ resource "google_securityposture_posture" "posture1"{ constraint { org_policy_constraint_custom { custom_constraint { - name = "organizations/%{org_id}/customConstraints/custom.disableGkeAutoUpgrade" - display_name = "Disable GKE auto upgrade" - description = "Only allow GKE NodePool resource to be created or updated if AutoUpgrade is not enabled where this custom constraint is enforced." + name = "organizations/%{org_id}/customConstraints/custom.disableGkeAutoUpgrade" + display_name = "Disable GKE auto upgrade" + description = "Only allow GKE NodePool resource to be created or updated if AutoUpgrade is not enabled where this custom constraint is enforced." action_type = "ALLOW" condition = "resource.management.autoUpgrade == false" method_types = ["CREATE", "UPDATE"] diff --git a/google-beta/services/securityscanner/resource_security_scanner_scan_config.go b/google-beta/services/securityscanner/resource_security_scanner_scan_config.go index 10c0034569..1a9cb5dd74 100644 --- a/google-beta/services/securityscanner/resource_security_scanner_scan_config.go +++ b/google-beta/services/securityscanner/resource_security_scanner_scan_config.go @@ -20,6 +20,7 @@ package securityscanner import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -293,6 +294,7 @@ func resourceSecurityScannerScanConfigCreate(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -301,6 +303,7 @@ func resourceSecurityScannerScanConfigCreate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { 
return fmt.Errorf("Error creating ScanConfig: %s", err) @@ -364,12 +367,14 @@ func resourceSecurityScannerScanConfigRead(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SecurityScannerScanConfig %q", d.Id())) @@ -490,6 +495,7 @@ func resourceSecurityScannerScanConfigUpdate(d *schema.ResourceData, meta interf } log.Printf("[DEBUG] Updating ScanConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -549,6 +555,7 @@ func resourceSecurityScannerScanConfigUpdate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -589,6 +596,8 @@ func resourceSecurityScannerScanConfigDelete(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ScanConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -598,6 +607,7 @@ func resourceSecurityScannerScanConfigDelete(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ScanConfig") diff --git a/google-beta/services/servicedirectory/resource_service_directory_endpoint.go b/google-beta/services/servicedirectory/resource_service_directory_endpoint.go index 6ad3eaaa2b..a4afac78b8 100644 --- a/google-beta/services/servicedirectory/resource_service_directory_endpoint.go +++ b/google-beta/services/servicedirectory/resource_service_directory_endpoint.go @@ -20,6 +20,7 @@ package servicedirectory import ( "fmt" "log" + 
"net/http" "reflect" "strings" "time" @@ -149,6 +150,7 @@ func resourceServiceDirectoryEndpointCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -157,6 +159,7 @@ func resourceServiceDirectoryEndpointCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Endpoint: %s", err) @@ -196,12 +199,14 @@ func resourceServiceDirectoryEndpointRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ServiceDirectoryEndpoint %q", d.Id())) @@ -261,6 +266,7 @@ func resourceServiceDirectoryEndpointUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating Endpoint %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("address") { @@ -296,6 +302,7 @@ func resourceServiceDirectoryEndpointUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -330,6 +337,8 @@ func resourceServiceDirectoryEndpointDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Endpoint %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -339,6 +348,7 @@ func resourceServiceDirectoryEndpointDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { 
return transport_tpg.HandleNotFoundError(err, d, "Endpoint") diff --git a/google-beta/services/servicedirectory/resource_service_directory_namespace.go b/google-beta/services/servicedirectory/resource_service_directory_namespace.go index 59bc49874c..0d7639e1eb 100644 --- a/google-beta/services/servicedirectory/resource_service_directory_namespace.go +++ b/google-beta/services/servicedirectory/resource_service_directory_namespace.go @@ -20,6 +20,7 @@ package servicedirectory import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -147,6 +148,7 @@ func resourceServiceDirectoryNamespaceCreate(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -155,6 +157,7 @@ func resourceServiceDirectoryNamespaceCreate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Namespace: %s", err) @@ -200,12 +203,14 @@ func resourceServiceDirectoryNamespaceRead(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ServiceDirectoryNamespace %q", d.Id())) @@ -260,6 +265,7 @@ func resourceServiceDirectoryNamespaceUpdate(d *schema.ResourceData, meta interf } log.Printf("[DEBUG] Updating Namespace %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("effective_labels") { @@ -287,6 +293,7 @@ func resourceServiceDirectoryNamespaceUpdate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil 
{ @@ -327,6 +334,8 @@ func resourceServiceDirectoryNamespaceDelete(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Namespace %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -336,6 +345,7 @@ func resourceServiceDirectoryNamespaceDelete(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Namespace") diff --git a/google-beta/services/servicedirectory/resource_service_directory_service.go b/google-beta/services/servicedirectory/resource_service_directory_service.go index a0680fc729..361fc1700e 100644 --- a/google-beta/services/servicedirectory/resource_service_directory_service.go +++ b/google-beta/services/servicedirectory/resource_service_directory_service.go @@ -20,6 +20,7 @@ package servicedirectory import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -110,6 +111,7 @@ func resourceServiceDirectoryServiceCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -118,6 +120,7 @@ func resourceServiceDirectoryServiceCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Service: %s", err) @@ -157,12 +160,14 @@ func resourceServiceDirectoryServiceRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, 
fmt.Sprintf("ServiceDirectoryService %q", d.Id())) @@ -201,6 +206,7 @@ func resourceServiceDirectoryServiceUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating Service %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("metadata") { @@ -228,6 +234,7 @@ func resourceServiceDirectoryServiceUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -262,6 +269,8 @@ func resourceServiceDirectoryServiceDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Service %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -271,6 +280,7 @@ func resourceServiceDirectoryServiceDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Service") diff --git a/google-beta/services/serviceusage/resource_service_usage_consumer_quota_override.go b/google-beta/services/serviceusage/resource_service_usage_consumer_quota_override.go index a2ce3de326..d640afbd00 100644 --- a/google-beta/services/serviceusage/resource_service_usage_consumer_quota_override.go +++ b/google-beta/services/serviceusage/resource_service_usage_consumer_quota_override.go @@ -20,6 +20,7 @@ package serviceusage import ( "fmt" "log" + "net/http" "reflect" "time" @@ -148,6 +149,7 @@ func resourceServiceUsageConsumerQuotaOverrideCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -156,6 +158,7 @@ func resourceServiceUsageConsumerQuotaOverrideCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ConsumerQuotaOverride: %s", err) @@ -232,12 +235,14 @@ func resourceServiceUsageConsumerQuotaOverrideRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ServiceUsageConsumerQuotaOverride %q", d.Id())) @@ -301,6 +306,7 @@ func resourceServiceUsageConsumerQuotaOverrideUpdate(d *schema.ResourceData, met } log.Printf("[DEBUG] Updating ConsumerQuotaOverride %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -315,6 +321,7 @@ func resourceServiceUsageConsumerQuotaOverrideUpdate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -361,6 +368,8 @@ func resourceServiceUsageConsumerQuotaOverrideDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ConsumerQuotaOverride %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -370,6 +379,7 @@ func resourceServiceUsageConsumerQuotaOverrideDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ConsumerQuotaOverride") diff --git a/google-beta/services/sourcerepo/resource_sourcerepo_repository.go b/google-beta/services/sourcerepo/resource_sourcerepo_repository.go index 8201fa2694..475e928a75 100644 --- 
a/google-beta/services/sourcerepo/resource_sourcerepo_repository.go +++ b/google-beta/services/sourcerepo/resource_sourcerepo_repository.go @@ -21,6 +21,7 @@ import ( "bytes" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -172,6 +173,7 @@ func resourceSourceRepoRepositoryCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -180,6 +182,7 @@ func resourceSourceRepoRepositoryCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Repository: %s", err) @@ -228,12 +231,14 @@ func resourceSourceRepoRepositoryRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SourceRepoRepository %q", d.Id())) @@ -293,6 +298,7 @@ func resourceSourceRepoRepositoryUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating Repository %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("pubsub_configs") { @@ -320,6 +326,7 @@ func resourceSourceRepoRepositoryUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -360,6 +367,8 @@ func resourceSourceRepoRepositoryDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Repository %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -369,6 +378,7 @@ 
func resourceSourceRepoRepositoryDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Repository") diff --git a/google-beta/services/spanner/resource_spanner_database.go b/google-beta/services/spanner/resource_spanner_database.go index 1787fd15b8..7f4daef12f 100644 --- a/google-beta/services/spanner/resource_spanner_database.go +++ b/google-beta/services/spanner/resource_spanner_database.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -277,6 +278,7 @@ func resourceSpannerDatabaseCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -285,6 +287,7 @@ func resourceSpannerDatabaseCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Database: %s", err) @@ -466,12 +469,14 @@ func resourceSpannerDatabaseRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SpannerDatabase %q", d.Id())) @@ -558,6 +563,7 @@ func resourceSpannerDatabaseUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Database %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("enable_drop_protection") { @@ -603,6 +609,7 @@ func resourceSpannerDatabaseUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: 
userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -647,6 +654,8 @@ func resourceSpannerDatabaseUpdate(d *schema.ResourceData, meta interface{}) err return err } + headers := make(http.Header) + if obj["statements"] != nil { if len(obj["statements"].([]string)) == 0 { // Return early to avoid making an API call that errors, @@ -678,6 +687,7 @@ func resourceSpannerDatabaseUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Database %q: %s", d.Id(), err) @@ -725,6 +735,7 @@ func resourceSpannerDatabaseDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) if d.Get("deletion_protection").(bool) { return fmt.Errorf("cannot destroy instance without setting deletion_protection=false and running `terraform apply`") } @@ -738,6 +749,7 @@ func resourceSpannerDatabaseDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Database") diff --git a/google-beta/services/spanner/resource_spanner_instance.go b/google-beta/services/spanner/resource_spanner_instance.go index 7bcf976f79..4fefb807f7 100644 --- a/google-beta/services/spanner/resource_spanner_instance.go +++ b/google-beta/services/spanner/resource_spanner_instance.go @@ -20,6 +20,7 @@ package spanner import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -368,6 +369,7 @@ func resourceSpannerInstanceCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -376,6 +378,7 @@ func resourceSpannerInstanceCreate(d *schema.ResourceData, meta interface{}) 
err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Instance: %s", err) @@ -455,12 +458,14 @@ func resourceSpannerInstanceRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SpannerInstance %q", d.Id())) @@ -580,6 +585,7 @@ func resourceSpannerInstanceUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) + headers := make(http.Header) if resourceSpannerInstanceVirtualUpdate(d, ResourceSpannerInstance().Schema) { if d.Get("force_destroy") != nil { if err := d.Set("force_destroy", d.Get("force_destroy")); err != nil { @@ -602,6 +608,7 @@ func resourceSpannerInstanceUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -648,6 +655,8 @@ func resourceSpannerInstanceDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + if d.Get("force_destroy").(bool) { backupsUrl, err := tpgresource.ReplaceVars(d, config, "{{SpannerBasePath}}projects/{{project}}/instances/{{name}}/backups") if err != nil { @@ -681,6 +690,7 @@ func resourceSpannerInstanceDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Instance") diff --git a/google-beta/services/sql/resource_sql_database.go b/google-beta/services/sql/resource_sql_database.go index 8ff69520b4..60b6a7b16f 100644 --- 
a/google-beta/services/sql/resource_sql_database.go +++ b/google-beta/services/sql/resource_sql_database.go @@ -20,6 +20,7 @@ package sql import ( "fmt" "log" + "net/http" "reflect" "time" @@ -170,6 +171,7 @@ func resourceSQLDatabaseCreate(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -178,6 +180,7 @@ func resourceSQLDatabaseCreate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Database: %s", err) @@ -230,12 +233,14 @@ func resourceSQLDatabaseRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(transformSQLDatabaseReadError(err), d, fmt.Sprintf("SQLDatabase %q", d.Id())) @@ -324,6 +329,7 @@ func resourceSQLDatabaseUpdate(d *schema.ResourceData, meta interface{}) error { } log.Printf("[DEBUG] Updating Database %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -338,6 +344,7 @@ func resourceSQLDatabaseUpdate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -391,6 +398,7 @@ func resourceSQLDatabaseDelete(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) if deletionPolicy := d.Get("deletion_policy"); deletionPolicy == "ABANDON" { // Allows for database to be 
abandoned without deletion to avoid deletion failing // for Postgres databases in some circumstances due to existing SQL users @@ -406,6 +414,7 @@ func resourceSQLDatabaseDelete(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Database") diff --git a/google-beta/services/sql/resource_sql_database_instance.go b/google-beta/services/sql/resource_sql_database_instance.go index c86d0101c3..1a9487b541 100644 --- a/google-beta/services/sql/resource_sql_database_instance.go +++ b/google-beta/services/sql/resource_sql_database_instance.go @@ -397,6 +397,11 @@ is set to true. Defaults to ZONAL.`, Default: 0, Description: `The maximum size, in GB, to which storage capacity can be automatically increased. The default value is 0, which specifies that there is no limit.`, }, + "enable_google_ml_integration": { + Type: schema.TypeBool, + Optional: true, + Description: `Enables Vertex AI Integration.`, + }, "disk_size": { Type: schema.TypeInt, Optional: true, @@ -1268,7 +1273,7 @@ func expandSqlDatabaseInstanceSettings(configured []interface{}, databaseVersion Tier: _settings["tier"].(string), Edition: _settings["edition"].(string), AdvancedMachineFeatures: expandSqlServerAdvancedMachineFeatures(_settings["advanced_machine_features"].([]interface{})), - ForceSendFields: []string{"StorageAutoResize"}, + ForceSendFields: []string{"StorageAutoResize", "EnableGoogleMlIntegration"}, ActivationPolicy: _settings["activation_policy"].(string), ActiveDirectoryConfig: expandActiveDirectoryConfig(_settings["active_directory_config"].([]interface{})), DenyMaintenancePeriods: expandDenyMaintenancePeriod(_settings["deny_maintenance_period"].([]interface{})), @@ -1281,6 +1286,7 @@ func expandSqlDatabaseInstanceSettings(configured []interface{}, databaseVersion DataDiskType: _settings["disk_type"].(string), PricingPlan: 
_settings["pricing_plan"].(string), DeletionProtectionEnabled: _settings["deletion_protection_enabled"].(bool), + EnableGoogleMlIntegration: _settings["enable_google_ml_integration"].(bool), UserLabels: tpgresource.ConvertStringMap(_settings["user_labels"].(map[string]interface{})), BackupConfiguration: expandBackupConfiguration(_settings["backup_configuration"].([]interface{})), DatabaseFlags: expandDatabaseFlags(_settings["database_flags"].(*schema.Set).List()), @@ -1932,6 +1938,11 @@ func resourceSqlDatabaseInstanceUpdate(d *schema.ResourceData, meta interface{}) instance.InstanceType = d.Get("instance_type").(string) } + // Database Version is required for all calls with Google ML integration enabled or it will be rejected by the API. + if d.Get("settings.0.enable_google_ml_integration").(bool) { + instance.DatabaseVersion = databaseVersion + } + err = transport_tpg.Retry(transport_tpg.RetryOptions{ RetryFunc: func() (rerr error) { op, rerr = config.NewSqlAdminClient(userAgent).Instances.Update(project, d.Get("name").(string), instance).Do() @@ -2099,6 +2110,8 @@ func flattenSettings(settings *sqladmin.Settings, d *schema.ResourceData) []map[ data["disk_autoresize"] = settings.StorageAutoResize data["disk_autoresize_limit"] = settings.StorageAutoResizeLimit + data["enable_google_ml_integration"] = settings.EnableGoogleMlIntegration + if settings.UserLabels != nil { data["user_labels"] = settings.UserLabels } diff --git a/google-beta/services/sql/resource_sql_database_instance_test.go b/google-beta/services/sql/resource_sql_database_instance_test.go index 571df0424e..aa67228918 100644 --- a/google-beta/services/sql/resource_sql_database_instance_test.go +++ b/google-beta/services/sql/resource_sql_database_instance_test.go @@ -1355,6 +1355,48 @@ func TestAccSqlDatabaseInstance_PointInTimeRecoveryEnabledForSqlServer(t *testin }) } +func TestAccSqlDatabaseInstance_EnableGoogleMlIntegration(t *testing.T) { + t.Parallel() + + masterID := acctest.RandInt(t) + + 
acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testGoogleSqlDatabaseInstance_EnableGoogleMlIntegration(masterID, true, "POSTGRES_14", "db-custom-2-13312"), + }, + { + ResourceName: "google_sql_database_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection", "root_password"}, + }, + // Test that updates to other settings work after google-ml-integration is enabled + { + Config: testGoogleSqlDatabaseInstance_EnableGoogleMlIntegration(masterID, true, "POSTGRES_14", "db-custom-2-10240"), + }, + { + ResourceName: "google_sql_database_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection", "root_password"}, + }, + { + Config: testGoogleSqlDatabaseInstance_EnableGoogleMlIntegration(masterID, false, "POSTGRES_14", "db-custom-2-10240"), + }, + { + ResourceName: "google_sql_database_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection", "root_password"}, + }, + }, + }) +} + func TestAccSqlDatabaseInstance_insights(t *testing.T) { t.Parallel() @@ -3859,6 +3901,22 @@ resource "google_sql_database_instance" "instance" { `, masterID, dbVersion, masterID, pointInTimeRecoveryEnabled) } +func testGoogleSqlDatabaseInstance_EnableGoogleMlIntegration(masterID int, enableGoogleMlIntegration bool, dbVersion string, tier string) string { + return fmt.Sprintf(` +resource "google_sql_database_instance" "instance" { + name = "tf-test-%d" + region = "us-central1" + database_version = "%s" + deletion_protection = false + root_password = "rand-pwd-%d" + settings { + tier = "%s" + enable_google_ml_integration = %t + } +} +`, masterID, dbVersion, masterID, tier, 
enableGoogleMlIntegration) +} + func testGoogleSqlDatabaseInstance_BackupRetention(masterID int) string { return fmt.Sprintf(` resource "google_sql_database_instance" "instance" { diff --git a/google-beta/services/sql/resource_sql_source_representation_instance.go b/google-beta/services/sql/resource_sql_source_representation_instance.go index 38d2b6be1a..d7cb053531 100644 --- a/google-beta/services/sql/resource_sql_source_representation_instance.go +++ b/google-beta/services/sql/resource_sql_source_representation_instance.go @@ -20,6 +20,7 @@ package sql import ( "fmt" "log" + "net/http" "reflect" "strconv" "strings" @@ -198,6 +199,7 @@ func resourceSQLSourceRepresentationInstanceCreate(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -206,6 +208,7 @@ func resourceSQLSourceRepresentationInstanceCreate(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating SourceRepresentationInstance: %s", err) @@ -258,12 +261,14 @@ func resourceSQLSourceRepresentationInstanceRead(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SQLSourceRepresentationInstance %q", d.Id())) @@ -340,6 +345,8 @@ func resourceSQLSourceRepresentationInstanceDelete(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting SourceRepresentationInstance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -349,6 +356,7 @@ func 
resourceSQLSourceRepresentationInstanceDelete(d *schema.ResourceData, meta UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "SourceRepresentationInstance") diff --git a/google-beta/services/sql/resource_sql_user_test.go b/google-beta/services/sql/resource_sql_user_test.go index 50bf752fca..be7b54caa5 100644 --- a/google-beta/services/sql/resource_sql_user_test.go +++ b/google-beta/services/sql/resource_sql_user_test.go @@ -138,6 +138,7 @@ func TestAccSqlUser_postgres(t *testing.T) { } func TestAccSqlUser_postgresIAM(t *testing.T) { + t.Skipf("Skipping test %s due to https://github.com/hashicorp/terraform-provider-google/issues/16704", t.Name()) t.Parallel() instance := fmt.Sprintf("tf-test-%d", acctest.RandInt(t)) diff --git a/google-beta/services/storage/data_source_google_storage_bucket.go b/google-beta/services/storage/data_source_google_storage_bucket.go index 99860f9872..9b119b1b18 100644 --- a/google-beta/services/storage/data_source_google_storage_bucket.go +++ b/google-beta/services/storage/data_source_google_storage_bucket.go @@ -14,6 +14,7 @@ func DataSourceGoogleStorageBucket() *schema.Resource { dsSchema := tpgresource.DatasourceSchemaFromResourceSchema(ResourceStorageBucket().Schema) + tpgresource.AddOptionalFieldsToSchema(dsSchema, "project") tpgresource.AddRequiredFieldsToSchema(dsSchema, "name") return &schema.Resource{ diff --git a/google-beta/services/storage/data_source_google_storage_bucket_objects.go b/google-beta/services/storage/data_source_google_storage_bucket_objects.go new file mode 100644 index 0000000000..68dd1bf92b --- /dev/null +++ b/google-beta/services/storage/data_source_google_storage_bucket_objects.go @@ -0,0 +1,156 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0
+package storage + +import ( + "fmt" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" +) + +func DataSourceGoogleStorageBucketObjects() *schema.Resource { + return &schema.Resource{ + Read: datasourceGoogleStorageBucketObjectsRead, + Schema: map[string]*schema.Schema{ + "bucket": { + Type: schema.TypeString, + Required: true, + }, + "match_glob": { + Type: schema.TypeString, + Optional: true, + }, + "prefix": { + Type: schema.TypeString, + Optional: true, + }, + "bucket_objects": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "content_type": { + Type: schema.TypeString, + Computed: true, + }, + "media_link": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + "self_link": { + Type: schema.TypeString, + Computed: true, + }, + "storage_class": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + } +} + +func datasourceGoogleStorageBucketObjectsRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + params := make(map[string]string) + bucketObjects := make([]map[string]interface{}, 0) + + for { + bucket := d.Get("bucket").(string) + url := fmt.Sprintf("https://storage.googleapis.com/storage/v1/b/%s/o", bucket) + + if v, ok := d.GetOk("match_glob"); ok { + params["matchGlob"] = v.(string) + } + + if v, ok := d.GetOk("prefix"); ok { + params["prefix"] = v.(string) + } + + url, err := transport_tpg.AddQueryParams(url, params) + if err != nil { + return err + } + + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, 
+ Method: "GET", + RawURL: url, + UserAgent: userAgent, + }) + if err != nil { + return fmt.Errorf("Error retrieving bucket objects: %s", err) + } + + pageBucketObjects := flattenDatasourceGoogleBucketObjectsList(res["items"]) + bucketObjects = append(bucketObjects, pageBucketObjects...) + + pToken, ok := res["nextPageToken"] + if ok && pToken != nil && pToken.(string) != "" { + params["pageToken"] = pToken.(string) + } else { + break + } + } + + if err := d.Set("bucket_objects", bucketObjects); err != nil { + return fmt.Errorf("Error retrieving bucket_objects: %s", err) + } + + d.SetId(d.Get("bucket").(string)) + + return nil +} + +func flattenDatasourceGoogleBucketObjectsList(v interface{}) []map[string]interface{} { + if v == nil { + return make([]map[string]interface{}, 0) + } + + ls := v.([]interface{}) + bucketObjects := make([]map[string]interface{}, 0, len(ls)) + for _, raw := range ls { + o := raw.(map[string]interface{}) + + var mContentType, mMediaLink, mName, mSelfLink, mStorageClass interface{} + if oContentType, ok := o["contentType"]; ok { + mContentType = oContentType + } + if oMediaLink, ok := o["mediaLink"]; ok { + mMediaLink = oMediaLink + } + if oName, ok := o["name"]; ok { + mName = oName + } + if oSelfLink, ok := o["selfLink"]; ok { + mSelfLink = oSelfLink + } + if oStorageClass, ok := o["storageClass"]; ok { + mStorageClass = oStorageClass + } + bucketObjects = append(bucketObjects, map[string]interface{}{ + "content_type": mContentType, + "media_link": mMediaLink, + "name": mName, + "self_link": mSelfLink, + "storage_class": mStorageClass, + }) + } + + return bucketObjects +} diff --git a/google-beta/services/storage/data_source_google_storage_bucket_objects_test.go b/google-beta/services/storage/data_source_google_storage_bucket_objects_test.go new file mode 100644 index 0000000000..e1c1c01ba9 --- /dev/null +++ b/google-beta/services/storage/data_source_google_storage_bucket_objects_test.go @@ -0,0 +1,114 @@ +// Copyright (c) HashiCorp, 
Inc. +// SPDX-License-Identifier: MPL-2.0 +package storage_test + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" +) + +func TestAccDataSourceGoogleStorageBucketObjects_basic(t *testing.T) { + t.Parallel() + + project := envvar.GetTestProjectFromEnv() + bucket := "tf-bucket-object-test-" + acctest.RandString(t, 10) + + context := map[string]interface{}{ + "bucket": bucket, + "project": project, + "object_0_name": "bee", + "object_1_name": "fly", + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Steps: []resource.TestStep{ + { + Config: testAccCheckGoogleStorageBucketObjectsConfig(context), + Check: resource.ComposeTestCheckFunc( + // Test schema + resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.0.content_type"), + resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.0.media_link"), + resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.0.name"), + resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.0.self_link"), + resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.0.storage_class"), + resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.1.content_type"), + resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.1.media_link"), + resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.1.name"), + 
resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.1.self_link"), + resource.TestCheckResourceAttrSet("data.google_storage_bucket_objects.my_insects", "bucket_objects.1.storage_class"), + // Test content + resource.TestCheckResourceAttr("data.google_storage_bucket_objects.my_insects", "bucket", context["bucket"].(string)), + resource.TestCheckResourceAttr("data.google_storage_bucket_objects.my_insects", "bucket_objects.0.name", context["object_0_name"].(string)), + resource.TestCheckResourceAttr("data.google_storage_bucket_objects.my_insects", "bucket_objects.1.name", context["object_1_name"].(string)), + // Test match_glob + resource.TestCheckResourceAttr("data.google_storage_bucket_objects.my_bee_glob", "bucket_objects.0.name", context["object_0_name"].(string)), + // Test prefix + resource.TestCheckResourceAttr("data.google_storage_bucket_objects.my_fly_prefix", "bucket_objects.0.name", context["object_1_name"].(string)), + ), + }, + }, + }) +} + +func testAccCheckGoogleStorageBucketObjectsConfig(context map[string]interface{}) string { + return fmt.Sprintf(` +resource "google_storage_bucket" "my_insect_cage" { + force_destroy = true + location = "EU" + name = "%s" + project = "%s" + uniform_bucket_level_access = true +} + +resource "google_storage_bucket_object" "bee" { + bucket = google_storage_bucket.my_insect_cage.name + content = "bzzzzzt" + name = "%s" +} + +resource "google_storage_bucket_object" "fly" { + bucket = google_storage_bucket.my_insect_cage.name + content = "zzzzzt" + name = "%s" +} + +data "google_storage_bucket_objects" "my_insects" { + bucket = google_storage_bucket.my_insect_cage.name + + depends_on = [ + google_storage_bucket_object.bee, + google_storage_bucket_object.fly, + ] +} + +data "google_storage_bucket_objects" "my_bee_glob" { + bucket = google_storage_bucket.my_insect_cage.name + match_glob = "b*" + + depends_on = [ + google_storage_bucket_object.bee, + ] +} + +data 
"google_storage_bucket_objects" "my_fly_prefix" { + bucket = google_storage_bucket.my_insect_cage.name + prefix = "f" + + depends_on = [ + google_storage_bucket_object.fly, + ] +}`, + context["bucket"].(string), + context["project"].(string), + context["object_0_name"].(string), + context["object_1_name"].(string), + ) +} diff --git a/google-beta/services/storage/data_source_google_storage_bucket_test.go b/google-beta/services/storage/data_source_google_storage_bucket_test.go index 681c36cf29..712b1ed67e 100644 --- a/google-beta/services/storage/data_source_google_storage_bucket_test.go +++ b/google-beta/services/storage/data_source_google_storage_bucket_test.go @@ -3,17 +3,19 @@ package storage_test import ( - "fmt" "testing" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" ) func TestAccDataSourceGoogleStorageBucket_basic(t *testing.T) { t.Parallel() - bucket := "tf-bucket-" + acctest.RandString(t, 10) + context := map[string]interface{}{ + "bucket_name": "tf-bucket-" + acctest.RandString(t, 10), + } acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, @@ -21,7 +23,7 @@ func TestAccDataSourceGoogleStorageBucket_basic(t *testing.T) { CheckDestroy: testAccStorageBucketDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccDataSourceGoogleStorageBucketConfig(bucket), + Config: testAccDataSourceGoogleStorageBucketConfig(context), Check: resource.ComposeTestCheckFunc( acctest.CheckDataSourceStateMatchesResourceStateWithIgnores("data.google_storage_bucket.bar", "google_storage_bucket.foo", map[string]struct{}{"force_destroy": {}}), ), @@ -30,18 +32,80 @@ func TestAccDataSourceGoogleStorageBucket_basic(t *testing.T) { }) } -func testAccDataSourceGoogleStorageBucketConfig(bucketName string) string { - return fmt.Sprintf(` +// Test that the data source can take a 
project argument, which is used as a way to avoid using Compute API to +// get project id for the project number returned from the Storage API. +func TestAccDataSourceGoogleStorageBucket_avoidComputeAPI(t *testing.T) { + // Cannot use t.Parallel() if using t.Setenv + + project := envvar.GetTestProjectFromEnv() + + context := map[string]interface{}{ + "bucket_name": "tf-bucket-" + acctest.RandString(t, 10), + "real_project_id": project, + "incorrect_project_id": "foobar", + } + + // Unset ENV so no provider default is available to the data source + t.Setenv("GOOGLE_PROJECT", "") + + acctest.VcrTest(t, resource.TestCase{ + // Removed PreCheck because it wants to enforce GOOGLE_PROJECT being set + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccStorageBucketDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataSourceGoogleStorageBucketConfig_setProjectInConfig(context), + Check: resource.ComposeTestCheckFunc( + // We ignore project to show that the project argument on the data source is retained and isn't impacted + acctest.CheckDataSourceStateMatchesResourceStateWithIgnores("data.google_storage_bucket.bar", "google_storage_bucket.foo", map[string]struct{}{"force_destroy": {}, "project": {}}), + + resource.TestCheckResourceAttrSet( + "google_storage_bucket.foo", "project_number"), + resource.TestCheckResourceAttr( + "google_storage_bucket.foo", "project", context["real_project_id"].(string)), + + resource.TestCheckResourceAttrSet( + "data.google_storage_bucket.bar", "project_number"), + resource.TestCheckResourceAttr( + "data.google_storage_bucket.bar", "project", context["incorrect_project_id"].(string)), + ), + }, + }, + }) +} + +func testAccDataSourceGoogleStorageBucketConfig(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_storage_bucket" "foo" { + name = "%{bucket_name}" + location = "US" +} + +data "google_storage_bucket" "bar" { + name = 
google_storage_bucket.foo.name + depends_on = [ + google_storage_bucket.foo, + ] +} +`, context) +} + +func testAccDataSourceGoogleStorageBucketConfig_setProjectInConfig(context map[string]interface{}) string { + return acctest.Nprintf(` resource "google_storage_bucket" "foo" { - name = "%s" + project = "%{real_project_id}" + name = "%{bucket_name}" location = "US" } +// The project argument here doesn't help the provider retrieve data about the bucket +// It only serves to stop the data source using the compute API to convert the project number to an id data "google_storage_bucket" "bar" { + project = "%{incorrect_project_id}" name = google_storage_bucket.foo.name depends_on = [ google_storage_bucket.foo, ] } -`, bucketName) +`, context) } diff --git a/google-beta/services/storage/resource_storage_bucket.go b/google-beta/services/storage/resource_storage_bucket.go index cddda72447..0142400abf 100644 --- a/google-beta/services/storage/resource_storage_bucket.go +++ b/google-beta/services/storage/resource_storage_bucket.go @@ -9,6 +9,7 @@ import ( "fmt" "log" "math" + "regexp" "runtime" "strconv" "strings" @@ -16,6 +17,7 @@ import ( "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/verify" "github.com/gammazero/workerpool" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" @@ -58,10 +60,11 @@ func ResourceStorageBucket() *schema.Resource { Schema: map[string]*schema.Schema{ "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `The name of the bucket.`, + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the bucket.`, + ValidateFunc: verify.ValidateGCSName, }, "encryption": { @@ -94,10 +97,11 @@ func ResourceStorageBucket() *schema.Resource { }, "labels": { - Type: schema.TypeMap, - Optional: true, 
- Elem: &schema.Schema{Type: schema.TypeString}, - Description: `A set of key/value label pairs to assign to the bucket.`, + Type: schema.TypeMap, + ValidateFunc: labelKeyValidator, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A set of key/value label pairs to assign to the bucket.`, }, "terraform_labels": { @@ -132,6 +136,12 @@ func ResourceStorageBucket() *schema.Resource { Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, }, + "project_number": { + Type: schema.TypeInt, + Computed: true, + Description: `The project number of the project in which the resource belongs.`, + }, + "self_link": { Type: schema.TypeString, Computed: true, @@ -513,6 +523,22 @@ func ResourceStorageBucket() *schema.Resource { const resourceDataplexGoogleLabelPrefix = "goog-dataplex" const resourceDataplexGoogleProvidedLabelPrefix = "labels." + resourceDataplexGoogleLabelPrefix +var labelKeyRegex = regexp.MustCompile(`^[a-z0-9_-]{1,63}$`) + +func labelKeyValidator(val interface{}, key string) (warns []string, errs []error) { + if val == nil { + return + } + + m := val.(map[string]interface{}) + for k := range m { + if !labelKeyRegex.MatchString(k) { + errs = append(errs, fmt.Errorf("%q is an invalid label key. See https://cloud.google.com/storage/docs/tags-and-labels#bucket-labels", k)) + } + } + return +} + func resourceDataplexLabelDiffSuppress(k, old, new string, d *schema.ResourceData) bool { if strings.HasPrefix(k, resourceDataplexGoogleProvidedLabelPrefix) && new == "" { return true @@ -555,9 +581,6 @@ func resourceStorageBucketCreate(d *schema.ResourceData, meta interface{}) error // Get the bucket and location bucket := d.Get("name").(string) - if err := tpgresource.CheckGCSName(bucket); err != nil { - return err - } location := d.Get("location").(string) // Create a bucket, setting the labels, location and name. 
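The hunk above replaces the old imperative `tpgresource.CheckGCSName` call with schema-level validation (`verify.ValidateGCSName` on `name`) and adds `labelKeyValidator`, which rejects any bucket label key that does not match `^[a-z0-9_-]{1,63}$`. A minimal standalone sketch of that key check outside the Terraform SDK — the `validateLabelKeys` helper and the sample labels are ours for illustration; only the regexp comes from the diff:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern the provider's labelKeyValidator compiles: 1-63 characters,
// limited to lowercase letters, digits, hyphens, and underscores.
var labelKeyRegex = regexp.MustCompile(`^[a-z0-9_-]{1,63}$`)

// validateLabelKeys collects one error per invalid key, mirroring how the
// schema validator accumulates errs while ranging over the labels map.
func validateLabelKeys(labels map[string]string) []error {
	var errs []error
	for k := range labels {
		if !labelKeyRegex.MatchString(k) {
			errs = append(errs, fmt.Errorf("%q is an invalid label key", k))
		}
	}
	return errs
}

func main() {
	fmt.Println(len(validateLabelKeys(map[string]string{"env": "prod", "team_a-1": "x"}))) // valid keys: 0 errors
	fmt.Println(len(validateLabelKeys(map[string]string{"Env": "prod"})))                  // uppercase key: 1 error
}
```

Note that only keys are checked here, matching the diff; label values are not constrained by this validator.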
@@ -1719,6 +1742,9 @@ func setStorageBucket(d *schema.ResourceData, config *transport_tpg.Config, res if err := d.Set("url", fmt.Sprintf("gs://%s", bucket)); err != nil { return fmt.Errorf("Error setting url: %s", err) } + if err := d.Set("project_number", res.ProjectNumber); err != nil { + return fmt.Errorf("Error setting project_number: %s", err) + } if err := d.Set("storage_class", res.StorageClass); err != nil { return fmt.Errorf("Error setting storage_class: %s", err) } diff --git a/google-beta/services/storage/resource_storage_bucket_access_control.go b/google-beta/services/storage/resource_storage_bucket_access_control.go index 2f33f9ce6e..f0a8569935 100644 --- a/google-beta/services/storage/resource_storage_bucket_access_control.go +++ b/google-beta/services/storage/resource_storage_bucket_access_control.go @@ -20,6 +20,7 @@ package storage import ( "fmt" "log" + "net/http" "reflect" "time" @@ -143,6 +144,7 @@ func resourceStorageBucketAccessControlCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -151,6 +153,7 @@ func resourceStorageBucketAccessControlCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating BucketAccessControl: %s", err) @@ -187,12 +190,14 @@ func resourceStorageBucketAccessControlRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("StorageBucketAccessControl %q", d.Id())) @@ -259,6 +264,7 @@ func resourceStorageBucketAccessControlUpdate(d 
*schema.ResourceData, meta inter } log.Printf("[DEBUG] Updating BucketAccessControl %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -273,6 +279,7 @@ func resourceStorageBucketAccessControlUpdate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -312,6 +319,8 @@ func resourceStorageBucketAccessControlDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting BucketAccessControl %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -321,6 +330,7 @@ func resourceStorageBucketAccessControlDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "BucketAccessControl") diff --git a/google-beta/services/storage/resource_storage_bucket_test.go b/google-beta/services/storage/resource_storage_bucket_test.go index 9676ef0691..271bcd05bd 100644 --- a/google-beta/services/storage/resource_storage_bucket_test.go +++ b/google-beta/services/storage/resource_storage_bucket_test.go @@ -35,6 +35,10 @@ func TestAccStorageBucket_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( "google_storage_bucket.bucket", "force_destroy", "false"), + resource.TestCheckResourceAttr( + "google_storage_bucket.bucket", "project", envvar.GetTestProjectFromEnv()), + resource.TestCheckResourceAttrSet( + "google_storage_bucket.bucket", "project_number"), ), }, { diff --git a/google-beta/services/storage/resource_storage_default_object_access_control.go b/google-beta/services/storage/resource_storage_default_object_access_control.go index 5ad1768746..40f4ee5a79 
100644 --- a/google-beta/services/storage/resource_storage_default_object_access_control.go +++ b/google-beta/services/storage/resource_storage_default_object_access_control.go @@ -20,6 +20,7 @@ package storage import ( "fmt" "log" + "net/http" "reflect" "time" @@ -176,6 +177,7 @@ func resourceStorageDefaultObjectAccessControlCreate(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -184,6 +186,7 @@ func resourceStorageDefaultObjectAccessControlCreate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DefaultObjectAccessControl: %s", err) @@ -220,12 +223,14 @@ func resourceStorageDefaultObjectAccessControlRead(d *schema.ResourceData, meta billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("StorageDefaultObjectAccessControl %q", d.Id())) @@ -307,6 +312,7 @@ func resourceStorageDefaultObjectAccessControlUpdate(d *schema.ResourceData, met } log.Printf("[DEBUG] Updating DefaultObjectAccessControl %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -321,6 +327,7 @@ func resourceStorageDefaultObjectAccessControlUpdate(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -360,6 +367,8 @@ func resourceStorageDefaultObjectAccessControlDelete(d *schema.ResourceData, met billingProject = bp } + headers := make(http.Header) + 
log.Printf("[DEBUG] Deleting DefaultObjectAccessControl %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -369,6 +378,7 @@ func resourceStorageDefaultObjectAccessControlDelete(d *schema.ResourceData, met UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "DefaultObjectAccessControl") diff --git a/google-beta/services/storage/resource_storage_hmac_key.go b/google-beta/services/storage/resource_storage_hmac_key.go index 50353d10fa..6d95227112 100644 --- a/google-beta/services/storage/resource_storage_hmac_key.go +++ b/google-beta/services/storage/resource_storage_hmac_key.go @@ -20,6 +20,7 @@ package storage import ( "fmt" "log" + "net/http" "reflect" "time" @@ -138,6 +139,7 @@ func resourceStorageHmacKeyCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -146,6 +148,7 @@ func resourceStorageHmacKeyCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating HmacKey: %s", err) @@ -271,12 +274,14 @@ func resourceStorageHmacKeyRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("StorageHmacKey %q", d.Id())) @@ -372,6 +377,8 @@ func resourceStorageHmacKeyUpdate(d *schema.ResourceData, meta interface{}) erro return err } + headers := make(http.Header) + // err == nil indicates that the 
billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -385,6 +392,7 @@ func resourceStorageHmacKeyUpdate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating HmacKey %q: %s", d.Id(), err) @@ -426,6 +434,7 @@ func resourceStorageHmacKeyDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) getUrl, err := tpgresource.ReplaceVars(d, config, "{{StorageBasePath}}projects/{{project}}/hmacKeys/{{access_id}}") if err != nil { return err @@ -475,6 +484,7 @@ func resourceStorageHmacKeyDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "HmacKey") diff --git a/google-beta/services/storage/resource_storage_object_access_control.go b/google-beta/services/storage/resource_storage_object_access_control.go index 809c783068..0503931201 100644 --- a/google-beta/services/storage/resource_storage_object_access_control.go +++ b/google-beta/services/storage/resource_storage_object_access_control.go @@ -20,6 +20,7 @@ package storage import ( "fmt" "log" + "net/http" "reflect" "time" @@ -176,6 +177,7 @@ func resourceStorageObjectAccessControlCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -184,6 +186,7 @@ func resourceStorageObjectAccessControlCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ObjectAccessControl: %s", err) @@ -220,12 +223,14 @@ func 
resourceStorageObjectAccessControlRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("StorageObjectAccessControl %q", d.Id())) @@ -310,6 +315,7 @@ func resourceStorageObjectAccessControlUpdate(d *schema.ResourceData, meta inter } log.Printf("[DEBUG] Updating ObjectAccessControl %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -324,6 +330,7 @@ func resourceStorageObjectAccessControlUpdate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -363,6 +370,8 @@ func resourceStorageObjectAccessControlDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ObjectAccessControl %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -372,6 +381,7 @@ func resourceStorageObjectAccessControlDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ObjectAccessControl") diff --git a/google-beta/services/storageinsights/resource_storage_insights_report_config.go b/google-beta/services/storageinsights/resource_storage_insights_report_config.go index 23bf84a053..a2e22a1007 100644 --- a/google-beta/services/storageinsights/resource_storage_insights_report_config.go +++ b/google-beta/services/storageinsights/resource_storage_insights_report_config.go @@ -20,6 +20,7 @@ package 
storageinsights import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -279,6 +280,7 @@ func resourceStorageInsightsReportConfigCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -287,6 +289,7 @@ func resourceStorageInsightsReportConfigCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ReportConfig: %s", err) @@ -332,12 +335,14 @@ func resourceStorageInsightsReportConfigRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("StorageInsightsReportConfig %q", d.Id())) @@ -413,6 +418,7 @@ func resourceStorageInsightsReportConfigUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating ReportConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("frequency_options") { @@ -454,6 +460,7 @@ func resourceStorageInsightsReportConfigUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -494,6 +501,8 @@ func resourceStorageInsightsReportConfigDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ReportConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -503,6 +512,7 @@ func resourceStorageInsightsReportConfigDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ReportConfig") diff --git a/google-beta/services/storagetransfer/resource_storage_transfer_agent_pool.go b/google-beta/services/storagetransfer/resource_storage_transfer_agent_pool.go index b038437ada..ea730d8a5e 100644 --- a/google-beta/services/storagetransfer/resource_storage_transfer_agent_pool.go +++ b/google-beta/services/storagetransfer/resource_storage_transfer_agent_pool.go @@ -20,6 +20,7 @@ package storagetransfer import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -166,6 +167,7 @@ func resourceStorageTransferAgentPoolCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -174,6 +176,7 @@ func resourceStorageTransferAgentPoolCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating AgentPool: %s", err) @@ -220,12 +223,14 @@ func resourceStorageTransferAgentPoolRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("StorageTransferAgentPool %q", d.Id())) @@ -283,6 +288,7 @@ func resourceStorageTransferAgentPoolUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating AgentPool %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -317,6 +323,7 @@ func resourceStorageTransferAgentPoolUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -357,6 +364,8 @@ func resourceStorageTransferAgentPoolDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting AgentPool %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -366,6 +375,7 @@ func resourceStorageTransferAgentPoolDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "AgentPool") diff --git a/google-beta/services/tags/data_source_tags_tag_keys.go b/google-beta/services/tags/data_source_tags_tag_keys.go new file mode 100644 index 0000000000..6610c74c65 --- /dev/null +++ b/google-beta/services/tags/data_source_tags_tag_keys.go @@ -0,0 +1,77 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package tags + +import ( + "fmt" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func DataSourceGoogleTagsTagKeys() *schema.Resource { + return &schema.Resource{ + Read: dataSourceGoogleTagsTagKeysRead, + + Schema: map[string]*schema.Schema{ + "parent": { + Type: schema.TypeString, + Required: true, + }, + "keys": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: tpgresource.DatasourceSchemaFromResourceSchema(ResourceTagsTagKey().Schema), + }, + }, + }, + } +} + +func dataSourceGoogleTagsTagKeysRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + parent := d.Get("parent").(string) + token := "" + + tagKeys := 
make([]map[string]interface{}, 0) + + for paginate := true; paginate; { + resp, err := config.NewResourceManagerV3Client(userAgent).TagKeys.List().Parent(parent).PageSize(300).PageToken(token).Do() + if err != nil { + return fmt.Errorf("error reading tag key list: %s", err) + } + + for _, tagKey := range resp.TagKeys { + + mappedData := map[string]interface{}{ + "name": tagKey.Name, + "namespaced_name": tagKey.NamespacedName, + "short_name": tagKey.ShortName, + "parent": tagKey.Parent, + "create_time": tagKey.CreateTime, + "update_time": tagKey.UpdateTime, + "description": tagKey.Description, + "purpose": tagKey.Purpose, + "purpose_data": tagKey.PurposeData, + } + tagKeys = append(tagKeys, mappedData) + } + token = resp.NextPageToken + paginate = token != "" + } + + d.SetId(parent) + if err := d.Set("keys", tagKeys); err != nil { + return fmt.Errorf("Error setting tag keys: %s", err) + } + + return nil +} diff --git a/google-beta/services/tags/data_source_tags_tag_keys_test.go b/google-beta/services/tags/data_source_tags_tag_keys_test.go new file mode 100644 index 0000000000..ab9cef9b1a --- /dev/null +++ b/google-beta/services/tags/data_source_tags_tag_keys_test.go @@ -0,0 +1,115 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 +package tags_test + +import ( + "fmt" + "regexp" + "strings" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" +) + +func TestAccDataSourceGoogleTagsTagKeys_default(t *testing.T) { + org := envvar.GetTestOrgFromEnv(t) + + parent := fmt.Sprintf("organizations/%s", org) + shortName := "tf-test-" + acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Steps: []resource.TestStep{ + { + Config: testAccDataSourceGoogleTagsTagKeysConfig(parent, shortName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceGoogleTagsTagKeysCheck("data.google_tags_tag_keys.my_tag_keys", "google_tags_tag_key.foobar"), + ), + }, + }, + }) +} + +func TestAccDataSourceGoogleTagsTagKeys_dot(t *testing.T) { + org := envvar.GetTestOrgFromEnv(t) + + parent := fmt.Sprintf("organizations/%s", org) + shortName := "terraform.test." 
+ acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Steps: []resource.TestStep{ + { + Config: testAccDataSourceGoogleTagsTagKeysConfig(parent, shortName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceGoogleTagsTagKeysCheck("data.google_tags_tag_keys.my_tag_keys", "google_tags_tag_key.foobar"), + ), + }, + }, + }) +} + +func testAccDataSourceGoogleTagsTagKeysCheck(data_source_name string, resource_name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + ds, ok := s.RootModule().Resources[data_source_name] + if !ok { + return fmt.Errorf("root module has no resource called %s", data_source_name) + } + + rs, ok := s.RootModule().Resources[resource_name] + if !ok { + return fmt.Errorf("can't find %s in state", resource_name) + } + + ds_attr := ds.Primary.Attributes + rs_attr := rs.Primary.Attributes + tag_key_attrs_to_test := []string{"parent", "short_name", "name", "namespaced_name", "create_time", "update_time", "description"} + re := regexp.MustCompile("[0-9]+") + index := "" + + for k := range ds_attr { + ds_a := fmt.Sprintf("keys.%s.%s", re.FindString(k), tag_key_attrs_to_test[1]) + if ds_attr[ds_a] == rs_attr[tag_key_attrs_to_test[1]] { + index = re.FindString(k) + break + } + } + + for _, attr_to_check := range tag_key_attrs_to_test { + data := "" + if attr_to_check == "name" { + data = strings.Split(ds_attr[fmt.Sprintf("keys.%s.%s", index, attr_to_check)], "/")[1] + } else { + data = ds_attr[fmt.Sprintf("keys.%s.%s", index, attr_to_check)] + } + if data != rs_attr[attr_to_check] { + return fmt.Errorf( + "%s is %s; want %s", + attr_to_check, + data, + rs_attr[attr_to_check], + ) + } + } + + return nil + } +} + +func testAccDataSourceGoogleTagsTagKeysConfig(parent string, shortName string) string { + return fmt.Sprintf(` +resource "google_tags_tag_key" "foobar" { + parent = "%s" + short_name 
= "%s" +} + +data "google_tags_tag_keys" "my_tag_keys" { + parent = google_tags_tag_key.foobar.parent +} +`, parent, shortName) +} diff --git a/google-beta/services/tags/data_source_tags_tag_values.go b/google-beta/services/tags/data_source_tags_tag_values.go new file mode 100644 index 0000000000..ebf649052f --- /dev/null +++ b/google-beta/services/tags/data_source_tags_tag_values.go @@ -0,0 +1,76 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package tags + +import ( + "fmt" + + "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" + transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func DataSourceGoogleTagsTagValues() *schema.Resource { + return &schema.Resource{ + Read: dataSourceGoogleTagsTagValuesRead, + + Schema: map[string]*schema.Schema{ + "parent": { + Type: schema.TypeString, + Required: true, + }, + "values": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: tpgresource.DatasourceSchemaFromResourceSchema(ResourceTagsTagValue().Schema), + }, + }, + }, + } +} + +func dataSourceGoogleTagsTagValuesRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*transport_tpg.Config) + userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) + if err != nil { + return err + } + + parent := d.Get("parent").(string) + token := "" + + tagValues := make([]map[string]interface{}, 0) + + for paginate := true; paginate; { + resp, err := config.NewResourceManagerV3Client(userAgent).TagValues.List().Parent(parent).PageSize(300).PageToken(token).Do() + if err != nil { + return fmt.Errorf("error reading tag value list: %s", err) + } + + for _, tagValue := range resp.TagValues { + mappedData := map[string]interface{}{ + "name": tagValue.Name, + "namespaced_name": tagValue.NamespacedName, + "short_name": tagValue.ShortName, + "parent": tagValue.Parent, + 
"create_time": tagValue.CreateTime, + "update_time": tagValue.UpdateTime, + "description": tagValue.Description, + } + + tagValues = append(tagValues, mappedData) + } + token = resp.NextPageToken + paginate = token != "" + } + + d.SetId(parent) + + if err := d.Set("values", tagValues); err != nil { + return fmt.Errorf("Error setting tag values: %s", err) + } + + return nil +} diff --git a/google-beta/services/tags/data_source_tags_tag_values_test.go b/google-beta/services/tags/data_source_tags_tag_values_test.go new file mode 100644 index 0000000000..613ae03b4c --- /dev/null +++ b/google-beta/services/tags/data_source_tags_tag_values_test.go @@ -0,0 +1,111 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package tags_test + +import ( + "fmt" + "strings" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" + "github.com/hashicorp/terraform-provider-google-beta/google-beta/envvar" +) + +func TestAccDataSourceGoogleTagsTagValues_default(t *testing.T) { + org := envvar.GetTestOrgFromEnv(t) + + parent := fmt.Sprintf("organizations/%s", org) + keyShortName := "tf-testkey-" + acctest.RandString(t, 10) + shortName := "tf-test-" + acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Steps: []resource.TestStep{ + { + Config: testAccDataSourceGoogleTagsTagValuesConfig(parent, keyShortName, shortName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceGoogleTagsTagValuesCheck("data.google_tags_tag_values.my_tag_values", "google_tags_tag_value.norfqux"), + ), + }, + }, + }) +} + +func TestAccDataSourceGoogleTagsTagValues_dot(t *testing.T) { + org := envvar.GetTestOrgFromEnv(t) + + parent := fmt.Sprintf("organizations/%s", org) + keyShortName := 
"tf-testkey-" + acctest.RandString(t, 10) + shortName := "terraform.test." + acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Steps: []resource.TestStep{ + { + Config: testAccDataSourceGoogleTagsTagValuesConfig(parent, keyShortName, shortName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceGoogleTagsTagValuesCheck("data.google_tags_tag_values.my_tag_values", "google_tags_tag_value.norfqux"), + ), + }, + }, + }) +} + +func testAccDataSourceGoogleTagsTagValuesCheck(data_source_name string, resource_name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + ds, ok := s.RootModule().Resources[data_source_name] + if !ok { + return fmt.Errorf("root module has no resource called %s", data_source_name) + } + + rs, ok := s.RootModule().Resources[resource_name] + if !ok { + return fmt.Errorf("can't find %s in state", resource_name) + } + + ds_attr := ds.Primary.Attributes + rs_attr := rs.Primary.Attributes + tag_value_attrs_to_test := []string{"parent", "name", "namespaced_name", "create_time", "update_time", "description"} + + for _, attr_to_check := range tag_value_attrs_to_test { + data := "" + if attr_to_check == "name" { + data = strings.Split(ds_attr[fmt.Sprintf("values.0.%s", attr_to_check)], "/")[1] + } else { + data = ds_attr[fmt.Sprintf("values.0.%s", attr_to_check)] + } + if data != rs_attr[attr_to_check] { + return fmt.Errorf( + "%s is %s; want %s", + attr_to_check, + data, + rs_attr[attr_to_check], + ) + } + } + + return nil + } +} + +func testAccDataSourceGoogleTagsTagValuesConfig(parent string, keyShortName string, shortName string) string { + return fmt.Sprintf(` +resource "google_tags_tag_key" "foobar" { + parent = "%s" + short_name = "%s" +} + +resource "google_tags_tag_value" "norfqux" { + parent = google_tags_tag_key.foobar.id + short_name = "%s" +} + +data "google_tags_tag_values" 
"my_tag_values" { + parent = google_tags_tag_value.norfqux.parent +} +`, parent, keyShortName, shortName) +} diff --git a/google-beta/services/tags/resource_tags_tag_binding.go b/google-beta/services/tags/resource_tags_tag_binding.go index ad6d181071..97c6ca9877 100644 --- a/google-beta/services/tags/resource_tags_tag_binding.go +++ b/google-beta/services/tags/resource_tags_tag_binding.go @@ -20,6 +20,7 @@ package tags import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -109,6 +110,7 @@ func resourceTagsTagBindingCreate(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -117,6 +119,7 @@ func resourceTagsTagBindingCreate(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TagBinding: %s", err) @@ -187,12 +190,14 @@ func resourceTagsTagBindingRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("TagsTagBinding %q", d.Id())) @@ -251,6 +256,8 @@ func resourceTagsTagBindingDelete(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TagBinding %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -260,6 +267,7 @@ func resourceTagsTagBindingDelete(d *schema.ResourceData, meta interface{}) erro UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, "TagBinding") diff --git a/google-beta/services/tags/resource_tags_tag_key.go b/google-beta/services/tags/resource_tags_tag_key.go index 05b4b24fc9..0691457cd3 100644 --- a/google-beta/services/tags/resource_tags_tag_key.go +++ b/google-beta/services/tags/resource_tags_tag_key.go @@ -20,6 +20,7 @@ package tags import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -178,6 +179,7 @@ func resourceTagsTagKeyCreate(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -186,6 +188,7 @@ func resourceTagsTagKeyCreate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TagKey: %s", err) @@ -246,12 +249,14 @@ func resourceTagsTagKeyRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("TagsTagKey %q", d.Id())) @@ -315,6 +320,7 @@ func resourceTagsTagKeyUpdate(d *schema.ResourceData, meta interface{}) error { } log.Printf("[DEBUG] Updating TagKey %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -342,6 +348,7 @@ func resourceTagsTagKeyUpdate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -390,6 +397,8 @@ func resourceTagsTagKeyDelete(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) + 
log.Printf("[DEBUG] Deleting TagKey %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -399,6 +408,7 @@ func resourceTagsTagKeyDelete(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TagKey") diff --git a/google-beta/services/tags/resource_tags_tag_value.go b/google-beta/services/tags/resource_tags_tag_value.go index 9e62875784..3bcb7de856 100644 --- a/google-beta/services/tags/resource_tags_tag_value.go +++ b/google-beta/services/tags/resource_tags_tag_value.go @@ -20,6 +20,7 @@ package tags import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -145,6 +146,7 @@ func resourceTagsTagValueCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -153,6 +155,7 @@ func resourceTagsTagValueCreate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating TagValue: %s", err) @@ -213,12 +216,14 @@ func resourceTagsTagValueRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("TagsTagValue %q", d.Id())) @@ -279,6 +284,7 @@ func resourceTagsTagValueUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating TagValue %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { 
@@ -306,6 +312,7 @@ func resourceTagsTagValueUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -354,6 +361,8 @@ func resourceTagsTagValueDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting TagValue %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -363,6 +372,7 @@ func resourceTagsTagValueDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "TagValue") diff --git a/google-beta/services/tpu/resource_tpu_node.go b/google-beta/services/tpu/resource_tpu_node.go index 9fdc26910e..67c7df594b 100644 --- a/google-beta/services/tpu/resource_tpu_node.go +++ b/google-beta/services/tpu/resource_tpu_node.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "regexp" "time" @@ -339,6 +340,7 @@ func resourceTPUNodeCreate(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -347,6 +349,7 @@ func resourceTPUNodeCreate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Node: %s", err) @@ -413,12 +416,14 @@ func resourceTPUNodeRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("TPUNode %q", d.Id())) @@ -503,6 +508,8 @@ func resourceTPUNodeUpdate(d *schema.ResourceData, meta interface{}) error { return err } + headers := make(http.Header) + // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { billingProject = bp @@ -516,6 +523,7 @@ func resourceTPUNodeUpdate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error updating Node %q: %s", d.Id(), err) @@ -563,6 +571,8 @@ func resourceTPUNodeDelete(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Node %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -572,6 +582,7 @@ func resourceTPUNodeDelete(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Node") diff --git a/google-beta/services/tpuv2/resource_tpu_v2_vm.go b/google-beta/services/tpuv2/resource_tpu_v2_vm.go index de9a5492b1..37a3abf7e2 100644 --- a/google-beta/services/tpuv2/resource_tpu_v2_vm.go +++ b/google-beta/services/tpuv2/resource_tpu_v2_vm.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -568,6 +569,7 @@ func resourceTpuV2VmCreate(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -576,6 +578,7 @@ func resourceTpuV2VmCreate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) 
if err != nil { return fmt.Errorf("Error creating Vm: %s", err) @@ -642,12 +645,14 @@ func resourceTpuV2VmRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("TpuV2Vm %q", d.Id())) @@ -786,6 +791,7 @@ func resourceTpuV2VmUpdate(d *schema.ResourceData, meta interface{}) error { } log.Printf("[DEBUG] Updating Vm %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -829,6 +835,7 @@ func resourceTpuV2VmUpdate(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -876,6 +883,8 @@ func resourceTpuV2VmDelete(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Vm %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -885,6 +894,7 @@ func resourceTpuV2VmDelete(d *schema.ResourceData, meta interface{}) error { UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Vm") diff --git a/google-beta/services/vertexai/iam_vertex_endpoint_test.go b/google-beta/services/vertexai/iam_vertex_endpoint_test.go index bfe1d91c31..80118ae20c 100644 --- a/google-beta/services/vertexai/iam_vertex_endpoint_test.go +++ b/google-beta/services/vertexai/iam_vertex_endpoint_test.go @@ -304,9 +304,9 @@ resource "google_compute_network" "vertex_network" { data "google_project" "project" {} resource "google_vertex_ai_endpoint_iam_binding" "foo" { -project = 
google_vertex_ai_endpoint.endpoint.project -location = google_vertex_ai_endpoint.endpoint.location -endpoint = google_vertex_ai_endpoint.endpoint.name + project = google_vertex_ai_endpoint.endpoint.project + location = google_vertex_ai_endpoint.endpoint.location + endpoint = google_vertex_ai_endpoint.endpoint.name role = "%{role}" members = ["user:admin@hashicorptest.com"] } diff --git a/google-beta/services/vertexai/resource_vertex_ai_dataset.go b/google-beta/services/vertexai/resource_vertex_ai_dataset.go index 022245cd73..b04e577d86 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_dataset.go +++ b/google-beta/services/vertexai/resource_vertex_ai_dataset.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -187,6 +188,7 @@ func resourceVertexAIDatasetCreate(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -195,6 +197,7 @@ func resourceVertexAIDatasetCreate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Dataset: %s", err) @@ -261,12 +264,14 @@ func resourceVertexAIDatasetRead(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIDataset %q", d.Id())) @@ -342,6 +347,7 @@ func resourceVertexAIDatasetUpdate(d *schema.ResourceData, meta interface{}) err } log.Printf("[DEBUG] Updating Dataset %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if 
d.HasChange("display_name") { @@ -373,6 +379,7 @@ func resourceVertexAIDatasetUpdate(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -413,6 +420,8 @@ func resourceVertexAIDatasetDelete(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Dataset %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -422,6 +431,7 @@ func resourceVertexAIDatasetDelete(d *schema.ResourceData, meta interface{}) err UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Dataset") diff --git a/google-beta/services/vertexai/resource_vertex_ai_deployment_resource_pool.go b/google-beta/services/vertexai/resource_vertex_ai_deployment_resource_pool.go index e56049df91..106f610dec 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_deployment_resource_pool.go +++ b/google-beta/services/vertexai/resource_vertex_ai_deployment_resource_pool.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "time" @@ -197,6 +198,7 @@ func resourceVertexAIDeploymentResourcePoolCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -205,6 +207,7 @@ func resourceVertexAIDeploymentResourcePoolCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating DeploymentResourcePool: %s", err) @@ -271,12 +274,14 @@ func resourceVertexAIDeploymentResourcePoolRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIDeploymentResourcePool %q", d.Id())) @@ -326,6 +331,8 @@ func resourceVertexAIDeploymentResourcePoolDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting DeploymentResourcePool %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -335,6 +342,7 @@ func resourceVertexAIDeploymentResourcePoolDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "DeploymentResourcePool") diff --git a/google-beta/services/vertexai/resource_vertex_ai_endpoint.go b/google-beta/services/vertexai/resource_vertex_ai_endpoint.go index c3bd338db2..94e3417a64 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_endpoint.go +++ b/google-beta/services/vertexai/resource_vertex_ai_endpoint.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -380,6 +381,7 @@ func resourceVertexAIEndpointCreate(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -388,6 +390,7 @@ func resourceVertexAIEndpointCreate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Endpoint: %s", err) @@ -450,12 +453,14 @@ func resourceVertexAIEndpointRead(d *schema.ResourceData, meta interface{}) erro billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIEndpoint %q", d.Id())) @@ -543,6 +548,7 @@ func resourceVertexAIEndpointUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Updating Endpoint %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -578,6 +584,7 @@ func resourceVertexAIEndpointUpdate(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -618,6 +625,8 @@ func resourceVertexAIEndpointDelete(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Endpoint %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -627,6 +636,7 @@ func resourceVertexAIEndpointDelete(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Endpoint") diff --git a/google-beta/services/vertexai/resource_vertex_ai_feature_group.go b/google-beta/services/vertexai/resource_vertex_ai_feature_group.go index 74bdc1d270..951d58af5e 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_feature_group.go +++ b/google-beta/services/vertexai/resource_vertex_ai_feature_group.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -204,6 +205,7 @@ func resourceVertexAIFeatureGroupCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: 
"POST", @@ -212,6 +214,7 @@ func resourceVertexAIFeatureGroupCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FeatureGroup: %s", err) @@ -278,12 +281,14 @@ func resourceVertexAIFeatureGroupRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIFeatureGroup %q", d.Id())) @@ -368,6 +373,7 @@ func resourceVertexAIFeatureGroupUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating FeatureGroup %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("name") { @@ -407,6 +413,7 @@ func resourceVertexAIFeatureGroupUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -454,6 +461,8 @@ func resourceVertexAIFeatureGroupDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting FeatureGroup %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -463,6 +472,7 @@ func resourceVertexAIFeatureGroupDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FeatureGroup") diff --git a/google-beta/services/vertexai/resource_vertex_ai_feature_group_feature.go b/google-beta/services/vertexai/resource_vertex_ai_feature_group_feature.go index 468e9b10a0..a1e3590a27 100644 --- 
a/google-beta/services/vertexai/resource_vertex_ai_feature_group_feature.go +++ b/google-beta/services/vertexai/resource_vertex_ai_feature_group_feature.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -172,6 +173,7 @@ func resourceVertexAIFeatureGroupFeatureCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -180,6 +182,7 @@ func resourceVertexAIFeatureGroupFeatureCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FeatureGroupFeature: %s", err) @@ -242,12 +245,14 @@ func resourceVertexAIFeatureGroupFeatureRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIFeatureGroupFeature %q", d.Id())) @@ -323,6 +328,7 @@ func resourceVertexAIFeatureGroupFeatureUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating FeatureGroupFeature %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -358,6 +364,7 @@ func resourceVertexAIFeatureGroupFeatureUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -405,6 +412,8 @@ func resourceVertexAIFeatureGroupFeatureDelete(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting FeatureGroupFeature %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -414,6 +423,7 @@ func resourceVertexAIFeatureGroupFeatureDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FeatureGroupFeature") diff --git a/google-beta/services/vertexai/resource_vertex_ai_feature_online_store.go b/google-beta/services/vertexai/resource_vertex_ai_feature_online_store.go index 201fbd00c4..c0e8c7b046 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_feature_online_store.go +++ b/google-beta/services/vertexai/resource_vertex_ai_feature_online_store.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -293,6 +294,7 @@ func resourceVertexAIFeatureOnlineStoreCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -301,6 +303,7 @@ func resourceVertexAIFeatureOnlineStoreCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FeatureOnlineStore: %s", err) @@ -363,12 +366,14 @@ func resourceVertexAIFeatureOnlineStoreRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIFeatureOnlineStore %q", d.Id())) @@ -471,6 +476,7 @@ func resourceVertexAIFeatureOnlineStoreUpdate(d *schema.ResourceData, meta inter } log.Printf("[DEBUG] Updating FeatureOnlineStore %q: %#v", d.Id(), obj) + headers := 
make(http.Header) updateMask := []string{} if d.HasChange("bigtable") { @@ -514,6 +520,7 @@ func resourceVertexAIFeatureOnlineStoreUpdate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -561,6 +568,8 @@ func resourceVertexAIFeatureOnlineStoreDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + if v, ok := d.GetOk("force_destroy"); ok { url, err = transport_tpg.AddQueryParams(url, map[string]string{"force": fmt.Sprintf("%v", v)}) if err != nil { @@ -577,6 +586,7 @@ func resourceVertexAIFeatureOnlineStoreDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FeatureOnlineStore") diff --git a/google-beta/services/vertexai/resource_vertex_ai_feature_online_store_featureview.go b/google-beta/services/vertexai/resource_vertex_ai_feature_online_store_featureview.go index 5579904d21..e594f3c3b9 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_feature_online_store_featureview.go +++ b/google-beta/services/vertexai/resource_vertex_ai_feature_online_store_featureview.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -319,6 +320,7 @@ func resourceVertexAIFeatureOnlineStoreFeatureviewCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -327,6 +329,7 @@ func resourceVertexAIFeatureOnlineStoreFeatureviewCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FeatureOnlineStoreFeatureview: %s", err) @@ -389,12 +392,14 @@ func 
resourceVertexAIFeatureOnlineStoreFeatureviewRead(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIFeatureOnlineStoreFeatureview %q", d.Id())) @@ -482,6 +487,7 @@ func resourceVertexAIFeatureOnlineStoreFeatureviewUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating FeatureOnlineStoreFeatureview %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("sync_config") { @@ -521,6 +527,7 @@ func resourceVertexAIFeatureOnlineStoreFeatureviewUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -561,6 +568,8 @@ func resourceVertexAIFeatureOnlineStoreFeatureviewDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting FeatureOnlineStoreFeatureview %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -570,6 +579,7 @@ func resourceVertexAIFeatureOnlineStoreFeatureviewDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FeatureOnlineStoreFeatureview") diff --git a/google-beta/services/vertexai/resource_vertex_ai_featurestore.go b/google-beta/services/vertexai/resource_vertex_ai_featurestore.go index 4f70cc63dc..2c29ac85cb 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_featurestore.go +++ b/google-beta/services/vertexai/resource_vertex_ai_featurestore.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -232,6 +233,7 @@ func 
resourceVertexAIFeaturestoreCreate(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -240,6 +242,7 @@ func resourceVertexAIFeaturestoreCreate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Featurestore: %s", err) @@ -302,12 +305,14 @@ func resourceVertexAIFeaturestoreRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIFeaturestore %q", d.Id())) @@ -398,6 +403,7 @@ func resourceVertexAIFeaturestoreUpdate(d *schema.ResourceData, meta interface{} } log.Printf("[DEBUG] Updating Featurestore %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("online_serving_config") { @@ -437,6 +443,7 @@ func resourceVertexAIFeaturestoreUpdate(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -484,6 +491,8 @@ func resourceVertexAIFeaturestoreDelete(d *schema.ResourceData, meta interface{} billingProject = bp } + headers := make(http.Header) + if v, ok := d.GetOk("force_destroy"); ok { url, err = transport_tpg.AddQueryParams(url, map[string]string{"force": fmt.Sprintf("%v", v)}) if err != nil { @@ -500,6 +509,7 @@ func resourceVertexAIFeaturestoreDelete(d *schema.ResourceData, meta interface{} UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return 
transport_tpg.HandleNotFoundError(err, d, "Featurestore") diff --git a/google-beta/services/vertexai/resource_vertex_ai_featurestore_entitytype.go b/google-beta/services/vertexai/resource_vertex_ai_featurestore_entitytype.go index 7797eae597..ef2f06ae8b 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_featurestore_entitytype.go +++ b/google-beta/services/vertexai/resource_vertex_ai_featurestore_entitytype.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -283,6 +284,7 @@ func resourceVertexAIFeaturestoreEntitytypeCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) if v, ok := d.GetOk("featurestore"); ok { re := regexp.MustCompile("projects/([a-zA-Z0-9-]*)/(?:locations|regions)/([a-zA-Z0-9-]*)") switch { @@ -300,6 +302,7 @@ func resourceVertexAIFeaturestoreEntitytypeCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FeaturestoreEntitytype: %s", err) @@ -356,12 +359,14 @@ func resourceVertexAIFeaturestoreEntitytypeRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIFeaturestoreEntitytype %q", d.Id())) @@ -441,6 +446,7 @@ func resourceVertexAIFeaturestoreEntitytypeUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating FeaturestoreEntitytype %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -480,6 +486,7 @@ func resourceVertexAIFeaturestoreEntitytypeUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: 
d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -515,6 +522,7 @@ func resourceVertexAIFeaturestoreEntitytypeDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) if v, ok := d.GetOk("featurestore"); ok { re := regexp.MustCompile("projects/([a-zA-Z0-9-]*)/(?:locations|regions)/([a-zA-Z0-9-]*)") switch { @@ -534,6 +542,7 @@ func resourceVertexAIFeaturestoreEntitytypeDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FeaturestoreEntitytype") diff --git a/google-beta/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature.go b/google-beta/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature.go index e75d577e99..10d7e6a04e 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature.go +++ b/google-beta/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "regexp" "strings" @@ -171,6 +172,7 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureCreate(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) if v, ok := d.GetOk("entitytype"); ok { re := regexp.MustCompile("projects/([a-zA-Z0-9-]*)/(?:locations|regions)/([a-zA-Z0-9-]*)") switch { @@ -188,6 +190,7 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureCreate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating FeaturestoreEntitytypeFeature: %s", err) @@ -244,12 +247,14 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureRead(d *schema.ResourceData, m billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", 
Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIFeaturestoreEntitytypeFeature %q", d.Id())) @@ -314,6 +319,7 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureUpdate(d *schema.ResourceData, } log.Printf("[DEBUG] Updating FeaturestoreEntitytypeFeature %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -345,6 +351,7 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureUpdate(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -380,6 +387,7 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureDelete(d *schema.ResourceData, billingProject = bp } + headers := make(http.Header) if v, ok := d.GetOk("entitytype"); ok { re := regexp.MustCompile("projects/([a-zA-Z0-9-]*)/(?:locations|regions)/([a-zA-Z0-9-]*)") switch { @@ -399,6 +407,7 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureDelete(d *schema.ResourceData, UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "FeaturestoreEntitytypeFeature") diff --git a/google-beta/services/vertexai/resource_vertex_ai_index.go b/google-beta/services/vertexai/resource_vertex_ai_index.go index 37f83df5b9..b77f986160 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_index.go +++ b/google-beta/services/vertexai/resource_vertex_ai_index.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -359,6 +360,7 @@ func resourceVertexAIIndexCreate(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -367,6 +369,7 @@ func resourceVertexAIIndexCreate(d 
*schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Index: %s", err) @@ -433,12 +436,14 @@ func resourceVertexAIIndexRead(d *schema.ResourceData, meta interface{}) error { billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIIndex %q", d.Id())) @@ -538,6 +543,7 @@ func resourceVertexAIIndexUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] Updating Index %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -599,6 +605,7 @@ func resourceVertexAIIndexUpdate(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -646,6 +653,8 @@ func resourceVertexAIIndexDelete(d *schema.ResourceData, meta interface{}) error billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Index %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -655,6 +664,7 @@ func resourceVertexAIIndexDelete(d *schema.ResourceData, meta interface{}) error UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Index") diff --git a/google-beta/services/vertexai/resource_vertex_ai_index_endpoint.go b/google-beta/services/vertexai/resource_vertex_ai_index_endpoint.go index 89b0b7bd9e..b49cb9665d 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_index_endpoint.go +++ 
b/google-beta/services/vertexai/resource_vertex_ai_index_endpoint.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -236,6 +237,7 @@ func resourceVertexAIIndexEndpointCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -244,6 +246,7 @@ func resourceVertexAIIndexEndpointCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating IndexEndpoint: %s", err) @@ -310,12 +313,14 @@ func resourceVertexAIIndexEndpointRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIIndexEndpoint %q", d.Id())) @@ -403,6 +408,7 @@ func resourceVertexAIIndexEndpointUpdate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Updating IndexEndpoint %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -438,6 +444,7 @@ func resourceVertexAIIndexEndpointUpdate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -478,6 +485,8 @@ func resourceVertexAIIndexEndpointDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting IndexEndpoint %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -487,6 +496,7 @@ func resourceVertexAIIndexEndpointDelete(d 
*schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "IndexEndpoint") diff --git a/google-beta/services/vertexai/resource_vertex_ai_metadata_store.go b/google-beta/services/vertexai/resource_vertex_ai_metadata_store.go index 418ce791bb..9bc4bbdb01 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_metadata_store.go +++ b/google-beta/services/vertexai/resource_vertex_ai_metadata_store.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "time" @@ -162,6 +163,7 @@ func resourceVertexAIMetadataStoreCreate(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -170,6 +172,7 @@ func resourceVertexAIMetadataStoreCreate(d *schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating MetadataStore: %s", err) @@ -232,12 +235,14 @@ func resourceVertexAIMetadataStoreRead(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAIMetadataStore %q", d.Id())) @@ -293,6 +298,8 @@ func resourceVertexAIMetadataStoreDelete(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting MetadataStore %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -302,6 +309,7 @@ func resourceVertexAIMetadataStoreDelete(d 
*schema.ResourceData, meta interface{ UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "MetadataStore") diff --git a/google-beta/services/vertexai/resource_vertex_ai_tensorboard.go b/google-beta/services/vertexai/resource_vertex_ai_tensorboard.go index 806aaac33f..e262435109 100644 --- a/google-beta/services/vertexai/resource_vertex_ai_tensorboard.go +++ b/google-beta/services/vertexai/resource_vertex_ai_tensorboard.go @@ -20,6 +20,7 @@ package vertexai import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -200,6 +201,7 @@ func resourceVertexAITensorboardCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -208,6 +210,7 @@ func resourceVertexAITensorboardCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Tensorboard: %s", err) @@ -274,12 +277,14 @@ func resourceVertexAITensorboardRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VertexAITensorboard %q", d.Id())) @@ -367,6 +372,7 @@ func resourceVertexAITensorboardUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Tensorboard %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -402,6 +408,7 @@ func resourceVertexAITensorboardUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, 
Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -449,6 +456,8 @@ func resourceVertexAITensorboardDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Tensorboard %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -458,6 +467,7 @@ func resourceVertexAITensorboardDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Tensorboard") diff --git a/google-beta/services/vmwareengine/resource_vmwareengine_cluster.go b/google-beta/services/vmwareengine/resource_vmwareengine_cluster.go index db1ea48818..4170ee2a8b 100644 --- a/google-beta/services/vmwareengine/resource_vmwareengine_cluster.go +++ b/google-beta/services/vmwareengine/resource_vmwareengine_cluster.go @@ -20,6 +20,7 @@ package vmwareengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -140,6 +141,7 @@ func resourceVmwareengineClusterCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -148,6 +150,7 @@ func resourceVmwareengineClusterCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -195,12 +198,14 @@ func resourceVmwareengineClusterRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: 
headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -247,6 +252,7 @@ func resourceVmwareengineClusterUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Cluster %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("node_type_configs") { @@ -274,6 +280,7 @@ func resourceVmwareengineClusterUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -317,6 +324,8 @@ func resourceVmwareengineClusterDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Cluster %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -326,6 +335,7 @@ func resourceVmwareengineClusterDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { diff --git a/google-beta/services/vmwareengine/resource_vmwareengine_external_access_rule.go b/google-beta/services/vmwareengine/resource_vmwareengine_external_access_rule.go index 243c22d42c..200cf01055 100644 --- a/google-beta/services/vmwareengine/resource_vmwareengine_external_access_rule.go +++ b/google-beta/services/vmwareengine/resource_vmwareengine_external_access_rule.go @@ -20,6 +20,7 @@ package vmwareengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -240,6 +241,7 @@ func resourceVmwareengineExternalAccessRuleCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, 
Method: "POST", @@ -248,6 +250,7 @@ func resourceVmwareengineExternalAccessRuleCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating ExternalAccessRule: %s", err) @@ -294,12 +297,14 @@ func resourceVmwareengineExternalAccessRuleRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VmwareengineExternalAccessRule %q", d.Id())) @@ -411,6 +416,7 @@ func resourceVmwareengineExternalAccessRuleUpdate(d *schema.ResourceData, meta i } log.Printf("[DEBUG] Updating ExternalAccessRule %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -466,6 +472,7 @@ func resourceVmwareengineExternalAccessRuleUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -508,6 +515,8 @@ func resourceVmwareengineExternalAccessRuleDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ExternalAccessRule %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -517,6 +526,7 @@ func resourceVmwareengineExternalAccessRuleDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "ExternalAccessRule") diff --git a/google-beta/services/vmwareengine/resource_vmwareengine_external_address.go b/google-beta/services/vmwareengine/resource_vmwareengine_external_address.go 
index 9e6ff1691a..543bc8d647 100644 --- a/google-beta/services/vmwareengine/resource_vmwareengine_external_address.go +++ b/google-beta/services/vmwareengine/resource_vmwareengine_external_address.go @@ -20,6 +20,7 @@ package vmwareengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -141,6 +142,7 @@ func resourceVmwareengineExternalAddressCreate(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -149,6 +151,7 @@ func resourceVmwareengineExternalAddressCreate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.ExternalIpServiceNotActive}, }) if err != nil { @@ -196,12 +199,14 @@ func resourceVmwareengineExternalAddressRead(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.ExternalIpServiceNotActive}, }) if err != nil { @@ -263,6 +268,7 @@ func resourceVmwareengineExternalAddressUpdate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Updating ExternalAddress %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("internal_ip") { @@ -294,6 +300,7 @@ func resourceVmwareengineExternalAddressUpdate(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.ExternalIpServiceNotActive}, }) @@ -337,6 +344,8 @@ func resourceVmwareengineExternalAddressDelete(d *schema.ResourceData, meta 
inte billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting ExternalAddress %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -346,6 +355,7 @@ func resourceVmwareengineExternalAddressDelete(d *schema.ResourceData, meta inte UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorRetryPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.ExternalIpServiceNotActive}, }) if err != nil { diff --git a/google-beta/services/vmwareengine/resource_vmwareengine_network.go b/google-beta/services/vmwareengine/resource_vmwareengine_network.go index bf1ea5a6bd..b58c8b9caf 100644 --- a/google-beta/services/vmwareengine/resource_vmwareengine_network.go +++ b/google-beta/services/vmwareengine/resource_vmwareengine_network.go @@ -20,6 +20,7 @@ package vmwareengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -160,6 +161,7 @@ func resourceVmwareengineNetworkCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -168,6 +170,7 @@ func resourceVmwareengineNetworkCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Network: %s", err) @@ -220,12 +223,14 @@ func resourceVmwareengineNetworkRead(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VmwareengineNetwork %q", d.Id())) @@ -283,6 +288,7 @@ func resourceVmwareengineNetworkUpdate(d 
*schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Network %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -310,6 +316,7 @@ func resourceVmwareengineNetworkUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -357,6 +364,8 @@ func resourceVmwareengineNetworkDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Network %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -366,6 +375,7 @@ func resourceVmwareengineNetworkDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Network") diff --git a/google-beta/services/vmwareengine/resource_vmwareengine_network_peering.go b/google-beta/services/vmwareengine/resource_vmwareengine_network_peering.go index aeb17f8258..3f4627ed80 100644 --- a/google-beta/services/vmwareengine/resource_vmwareengine_network_peering.go +++ b/google-beta/services/vmwareengine/resource_vmwareengine_network_peering.go @@ -20,6 +20,7 @@ package vmwareengine import ( "fmt" "log" + "net/http" "reflect" "time" @@ -231,6 +232,7 @@ func resourceVmwareengineNetworkPeeringCreate(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -239,6 +241,7 @@ func resourceVmwareengineNetworkPeeringCreate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NetworkPeering: %s", err) @@ -291,12 +294,14 @@ func 
resourceVmwareengineNetworkPeeringRead(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VmwareengineNetworkPeering %q", d.Id())) @@ -423,6 +428,7 @@ func resourceVmwareengineNetworkPeeringUpdate(d *schema.ResourceData, meta inter } log.Printf("[DEBUG] Updating NetworkPeering %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -437,6 +443,7 @@ func resourceVmwareengineNetworkPeeringUpdate(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -483,6 +490,8 @@ func resourceVmwareengineNetworkPeeringDelete(d *schema.ResourceData, meta inter billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NetworkPeering %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -492,6 +501,7 @@ func resourceVmwareengineNetworkPeeringDelete(d *schema.ResourceData, meta inter UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NetworkPeering") diff --git a/google-beta/services/vmwareengine/resource_vmwareengine_network_policy.go b/google-beta/services/vmwareengine/resource_vmwareengine_network_policy.go index 520beaab83..69886183f3 100644 --- a/google-beta/services/vmwareengine/resource_vmwareengine_network_policy.go +++ b/google-beta/services/vmwareengine/resource_vmwareengine_network_policy.go @@ -20,6 +20,7 @@ package vmwareengine import ( "fmt" "log" + "net/http" 
"reflect" "time" @@ -223,6 +224,7 @@ func resourceVmwareengineNetworkPolicyCreate(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -231,6 +233,7 @@ func resourceVmwareengineNetworkPolicyCreate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating NetworkPolicy: %s", err) @@ -283,12 +286,14 @@ func resourceVmwareengineNetworkPolicyRead(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VmwareengineNetworkPolicy %q", d.Id())) @@ -376,6 +381,7 @@ func resourceVmwareengineNetworkPolicyUpdate(d *schema.ResourceData, meta interf } log.Printf("[DEBUG] Updating NetworkPolicy %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -390,6 +396,7 @@ func resourceVmwareengineNetworkPolicyUpdate(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -436,6 +443,8 @@ func resourceVmwareengineNetworkPolicyDelete(d *schema.ResourceData, meta interf billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting NetworkPolicy %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -445,6 +454,7 @@ func resourceVmwareengineNetworkPolicyDelete(d *schema.ResourceData, meta interf UserAgent: userAgent, Body: obj, 
Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "NetworkPolicy") diff --git a/google-beta/services/vmwareengine/resource_vmwareengine_private_cloud.go b/google-beta/services/vmwareengine/resource_vmwareengine_private_cloud.go index 8cef77f51c..6e521af711 100644 --- a/google-beta/services/vmwareengine/resource_vmwareengine_private_cloud.go +++ b/google-beta/services/vmwareengine/resource_vmwareengine_private_cloud.go @@ -20,6 +20,7 @@ package vmwareengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -112,6 +113,26 @@ This cannot be changed once the PrivateCloud is created.`, }, }, }, + "stretched_cluster_config": { + Type: schema.TypeList, + Optional: true, + Description: `The stretched cluster configuration for the private cloud.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "preferred_location": { + Type: schema.TypeString, + Optional: true, + Description: `Zone that will remain operational when connection between the two zones is lost.`, + }, + "secondary_location": { + Type: schema.TypeString, + Optional: true, + Description: `Additional zone for a higher level of availability and load balancing.`, + }, + }, + }, + }, }, }, }, @@ -174,9 +195,9 @@ the form: projects/{project_number}/locations/{location}/vmwareEngineNetworks/{v Type: schema.TypeString, Optional: true, ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"STANDARD", "TIME_LIMITED", ""}), + ValidateFunc: verify.ValidateEnum([]string{"STANDARD", "TIME_LIMITED", "STRETCHED", ""}), DiffSuppressFunc: vmwareenginePrivateCloudStandardTypeDiffSuppressFunc, - Description: `Initial type of the private cloud. Possible values: ["STANDARD", "TIME_LIMITED"]`, + Description: `Initial type of the private cloud. 
Possible values: ["STANDARD", "TIME_LIMITED", "STRETCHED"]`, }, "hcx": { Type: schema.TypeList, @@ -341,6 +362,7 @@ func resourceVmwareenginePrivateCloudCreate(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -349,6 +371,7 @@ func resourceVmwareenginePrivateCloudCreate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -402,12 +425,14 @@ func resourceVmwareenginePrivateCloudRead(d *schema.ResourceData, meta interface billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -498,6 +523,7 @@ func resourceVmwareenginePrivateCloudUpdate(d *schema.ResourceData, meta interfa } log.Printf("[DEBUG] Updating PrivateCloud %q: %#v", d.Id(), obj) + headers := make(http.Header) // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { @@ -512,6 +538,7 @@ func resourceVmwareenginePrivateCloudUpdate(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) @@ -613,6 +640,8 @@ func resourceVmwareenginePrivateCloudDelete(d *schema.ResourceData, meta interfa billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting PrivateCloud %q", d.Id()) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -622,6 +651,7 @@ func resourceVmwareenginePrivateCloudDelete(d *schema.ResourceData, meta interfa UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, ErrorAbortPredicates: []transport_tpg.RetryErrorPredicateFunc{transport_tpg.Is429QuotaError}, }) if err != nil { @@ -778,6 +808,8 @@ func flattenVmwareenginePrivateCloudManagementCluster(v interface{}, d *schema.R flattenVmwareenginePrivateCloudManagementClusterClusterId(original["clusterId"], d, config) transformed["node_type_configs"] = flattenVmwareenginePrivateCloudManagementClusterNodeTypeConfigs(original["nodeTypeConfigs"], d, config) + transformed["stretched_cluster_config"] = + flattenVmwareenginePrivateCloudManagementClusterStretchedClusterConfig(original["stretchedClusterConfig"], d, config) return []interface{}{transformed} } func flattenVmwareenginePrivateCloudManagementClusterClusterId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -834,6 +866,29 @@ func flattenVmwareenginePrivateCloudManagementClusterNodeTypeConfigsCustomCoreCo return v // let terraform core handle it otherwise } +func flattenVmwareenginePrivateCloudManagementClusterStretchedClusterConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return nil + } + original := v.(map[string]interface{}) + if len(original) == 0 { + return nil + } + transformed := make(map[string]interface{}) + transformed["preferred_location"] = + flattenVmwareenginePrivateCloudManagementClusterStretchedClusterConfigPreferredLocation(original["preferredLocation"], d, config) + transformed["secondary_location"] = + flattenVmwareenginePrivateCloudManagementClusterStretchedClusterConfigSecondaryLocation(original["secondaryLocation"], d, config) + return []interface{}{transformed} +} +func 
flattenVmwareenginePrivateCloudManagementClusterStretchedClusterConfigPreferredLocation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenVmwareenginePrivateCloudManagementClusterStretchedClusterConfigSecondaryLocation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenVmwareenginePrivateCloudHcx(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil @@ -1033,6 +1088,13 @@ func expandVmwareenginePrivateCloudManagementCluster(v interface{}, d tpgresourc transformed["nodeTypeConfigs"] = transformedNodeTypeConfigs } + transformedStretchedClusterConfig, err := expandVmwareenginePrivateCloudManagementClusterStretchedClusterConfig(original["stretched_cluster_config"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedStretchedClusterConfig); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["stretchedClusterConfig"] = transformedStretchedClusterConfig + } + return transformed, nil } @@ -1080,6 +1142,40 @@ func expandVmwareenginePrivateCloudManagementClusterNodeTypeConfigsCustomCoreCou return v, nil } +func expandVmwareenginePrivateCloudManagementClusterStretchedClusterConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil + } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) + + transformedPreferredLocation, err := expandVmwareenginePrivateCloudManagementClusterStretchedClusterConfigPreferredLocation(original["preferred_location"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedPreferredLocation); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["preferredLocation"] = transformedPreferredLocation + } 
+ + transformedSecondaryLocation, err := expandVmwareenginePrivateCloudManagementClusterStretchedClusterConfigSecondaryLocation(original["secondary_location"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedSecondaryLocation); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["secondaryLocation"] = transformedSecondaryLocation + } + + return transformed, nil +} + +func expandVmwareenginePrivateCloudManagementClusterStretchedClusterConfigPreferredLocation(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandVmwareenginePrivateCloudManagementClusterStretchedClusterConfigSecondaryLocation(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + func expandVmwareenginePrivateCloudType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google-beta/services/vmwareengine/resource_vmwareengine_subnet.go b/google-beta/services/vmwareengine/resource_vmwareengine_subnet.go index f592c06d38..a5bd530e39 100644 --- a/google-beta/services/vmwareengine/resource_vmwareengine_subnet.go +++ b/google-beta/services/vmwareengine/resource_vmwareengine_subnet.go @@ -20,6 +20,7 @@ package vmwareengine import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -171,6 +172,7 @@ func resourceVmwareengineSubnetCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "PATCH", @@ -179,6 +181,7 @@ func resourceVmwareengineSubnetCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Subnet: %s", err) @@ -225,12 +228,14 @@ 
func resourceVmwareengineSubnetRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VmwareengineSubnet %q", d.Id())) @@ -297,6 +302,7 @@ func resourceVmwareengineSubnetUpdate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Updating Subnet %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("ip_cidr_range") { @@ -324,6 +330,7 @@ func resourceVmwareengineSubnetUpdate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { diff --git a/google-beta/services/vpcaccess/resource_vpc_access_connector.go b/google-beta/services/vpcaccess/resource_vpc_access_connector.go index 71eb08591c..f5996898cb 100644 --- a/google-beta/services/vpcaccess/resource_vpc_access_connector.go +++ b/google-beta/services/vpcaccess/resource_vpc_access_connector.go @@ -20,6 +20,7 @@ package vpcaccess import ( "fmt" "log" + "net/http" "reflect" "time" @@ -266,6 +267,7 @@ func resourceVPCAccessConnectorCreate(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -274,6 +276,7 @@ func resourceVPCAccessConnectorCreate(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Connector: %s", err) @@ -353,12 +356,14 @@ func resourceVPCAccessConnectorRead(d *schema.ResourceData, meta interface{}) er billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VPCAccessConnector %q", d.Id())) @@ -444,6 +449,8 @@ func resourceVPCAccessConnectorDelete(d *schema.ResourceData, meta interface{}) billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Connector %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -453,6 +460,7 @@ func resourceVPCAccessConnectorDelete(d *schema.ResourceData, meta interface{}) UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Connector") diff --git a/google-beta/services/workbench/resource_workbench_instance.go b/google-beta/services/workbench/resource_workbench_instance.go index d41c8ef1b2..e71a17e799 100644 --- a/google-beta/services/workbench/resource_workbench_instance.go +++ b/google-beta/services/workbench/resource_workbench_instance.go @@ -20,6 +20,7 @@ package workbench import ( "fmt" "log" + "net/http" "reflect" "sort" "strings" @@ -782,6 +783,7 @@ func resourceWorkbenchInstanceCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -790,6 +792,7 @@ func resourceWorkbenchInstanceCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Instance: %s", err) @@ -866,12 +869,14 @@ func resourceWorkbenchInstanceRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := 
transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("WorkbenchInstance %q", d.Id())) @@ -965,6 +970,7 @@ func resourceWorkbenchInstanceUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("gce_setup") { @@ -1041,6 +1047,7 @@ func resourceWorkbenchInstanceUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1108,6 +1115,8 @@ func resourceWorkbenchInstanceDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Instance %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1117,6 +1126,7 @@ func resourceWorkbenchInstanceDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Instance") diff --git a/google-beta/services/workbench/resource_workbench_instance_generated_test.go b/google-beta/services/workbench/resource_workbench_instance_generated_test.go index 89c0176834..923b490efb 100644 --- a/google-beta/services/workbench/resource_workbench_instance_generated_test.go +++ b/google-beta/services/workbench/resource_workbench_instance_generated_test.go @@ -143,8 +143,8 @@ resource "google_workbench_instance" "instance" { core_count = 1 } vm_image { - project = "deeplearning-platform-release" - family = "tf-latest-gpu" + project = "cloud-notebooks-managed" + family = "workbench-instances" } } } diff --git 
a/google-beta/services/workflows/resource_workflows_workflow.go b/google-beta/services/workflows/resource_workflows_workflow.go index 184869ecd7..66b7edc609 100644 --- a/google-beta/services/workflows/resource_workflows_workflow.go +++ b/google-beta/services/workflows/resource_workflows_workflow.go @@ -21,6 +21,7 @@ import ( "context" "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -262,6 +263,7 @@ func resourceWorkflowsWorkflowCreate(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -270,6 +272,7 @@ func resourceWorkflowsWorkflowCreate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Workflow: %s", err) @@ -336,12 +339,14 @@ func resourceWorkflowsWorkflowRead(d *schema.ResourceData, meta interface{}) err billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("WorkflowsWorkflow %q", d.Id())) @@ -467,6 +472,7 @@ func resourceWorkflowsWorkflowUpdate(d *schema.ResourceData, meta interface{}) e } log.Printf("[DEBUG] Updating Workflow %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("description") { @@ -518,6 +524,7 @@ func resourceWorkflowsWorkflowUpdate(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -565,6 +572,8 @@ func resourceWorkflowsWorkflowDelete(d *schema.ResourceData, meta interface{}) e billingProject = bp } + headers := make(http.Header) + 
log.Printf("[DEBUG] Deleting Workflow %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -574,6 +583,7 @@ func resourceWorkflowsWorkflowDelete(d *schema.ResourceData, meta interface{}) e UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Workflow") diff --git a/google-beta/services/workstations/resource_workstations_workstation.go b/google-beta/services/workstations/resource_workstations_workstation.go index 3a481176b9..1fbc9ddcc2 100644 --- a/google-beta/services/workstations/resource_workstations_workstation.go +++ b/google-beta/services/workstations/resource_workstations_workstation.go @@ -20,6 +20,7 @@ package workstations import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -217,6 +218,7 @@ func resourceWorkstationsWorkstationCreate(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -225,6 +227,7 @@ func resourceWorkstationsWorkstationCreate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating Workstation: %s", err) @@ -277,12 +280,14 @@ func resourceWorkstationsWorkstationRead(d *schema.ResourceData, meta interface{ billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("WorkstationsWorkstation %q", d.Id())) @@ -379,6 +384,7 @@ func resourceWorkstationsWorkstationUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Updating Workstation %q: 
%#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -418,6 +424,7 @@ func resourceWorkstationsWorkstationUpdate(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -465,6 +472,8 @@ func resourceWorkstationsWorkstationDelete(d *schema.ResourceData, meta interfac billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting Workstation %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -474,6 +483,7 @@ func resourceWorkstationsWorkstationDelete(d *schema.ResourceData, meta interfac UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "Workstation") diff --git a/google-beta/services/workstations/resource_workstations_workstation_cluster.go b/google-beta/services/workstations/resource_workstations_workstation_cluster.go index 461cb4b421..12c090a738 100644 --- a/google-beta/services/workstations/resource_workstations_workstation_cluster.go +++ b/google-beta/services/workstations/resource_workstations_workstation_cluster.go @@ -20,6 +20,7 @@ package workstations import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -187,6 +188,12 @@ To access workstations in the cluster, configure access to the managed service u }, }, }, + "control_plane_ip": { + Type: schema.TypeString, + Computed: true, + Description: `The private IP address of the control plane for this workstation cluster. 
+Workstation VMs need access to this IP address to work with the service, so make sure that your firewall rules allow egress from the workstation VMs to this address.`, + }, "create_time": { Type: schema.TypeString, Computed: true, @@ -320,6 +327,7 @@ func resourceWorkstationsWorkstationClusterCreate(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -328,6 +336,7 @@ func resourceWorkstationsWorkstationClusterCreate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating WorkstationCluster: %s", err) @@ -380,12 +389,14 @@ func resourceWorkstationsWorkstationClusterRead(d *schema.ResourceData, meta int billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("WorkstationsWorkstationCluster %q", d.Id())) @@ -410,6 +421,9 @@ func resourceWorkstationsWorkstationClusterRead(d *schema.ResourceData, meta int if err := d.Set("subnetwork", flattenWorkstationsWorkstationClusterSubnetwork(res["subnetwork"], d, config)); err != nil { return fmt.Errorf("Error reading WorkstationCluster: %s", err) } + if err := d.Set("control_plane_ip", flattenWorkstationsWorkstationClusterControlPlaneIp(res["controlPlaneIp"], d, config)); err != nil { + return fmt.Errorf("Error reading WorkstationCluster: %s", err) + } if err := d.Set("display_name", flattenWorkstationsWorkstationClusterDisplayName(res["displayName"], d, config)); err != nil { return fmt.Errorf("Error reading WorkstationCluster: %s", err) } @@ -506,6 +520,7 @@ func resourceWorkstationsWorkstationClusterUpdate(d 
*schema.ResourceData, meta i } log.Printf("[DEBUG] Updating WorkstationCluster %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -553,6 +568,7 @@ func resourceWorkstationsWorkstationClusterUpdate(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -600,6 +616,8 @@ func resourceWorkstationsWorkstationClusterDelete(d *schema.ResourceData, meta i billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting WorkstationCluster %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -609,6 +627,7 @@ func resourceWorkstationsWorkstationClusterDelete(d *schema.ResourceData, meta i UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "WorkstationCluster") @@ -677,6 +696,10 @@ func flattenWorkstationsWorkstationClusterSubnetwork(v interface{}, d *schema.Re return v } +func flattenWorkstationsWorkstationClusterControlPlaneIp(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenWorkstationsWorkstationClusterDisplayName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } diff --git a/google-beta/services/workstations/resource_workstations_workstation_config.go b/google-beta/services/workstations/resource_workstations_workstation_config.go index 06aa5912fb..cb51a73d5f 100644 --- a/google-beta/services/workstations/resource_workstations_workstation_config.go +++ b/google-beta/services/workstations/resource_workstations_workstation_config.go @@ -20,6 +20,7 @@ package workstations import ( "fmt" "log" + "net/http" "reflect" "strings" "time" @@ -724,6 +725,7 @@ func resourceWorkstationsWorkstationConfigCreate(d *schema.ResourceData, meta in 
billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "POST", @@ -732,6 +734,7 @@ func resourceWorkstationsWorkstationConfigCreate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutCreate), + Headers: headers, }) if err != nil { return fmt.Errorf("Error creating WorkstationConfig: %s", err) @@ -784,12 +787,14 @@ func resourceWorkstationsWorkstationConfigRead(d *schema.ResourceData, meta inte billingProject = bp } + headers := make(http.Header) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, Method: "GET", Project: billingProject, RawURL: url, UserAgent: userAgent, + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("WorkstationsWorkstationConfig %q", d.Id())) @@ -970,6 +975,7 @@ func resourceWorkstationsWorkstationConfigUpdate(d *schema.ResourceData, meta in } log.Printf("[DEBUG] Updating WorkstationConfig %q: %#v", d.Id(), obj) + headers := make(http.Header) updateMask := []string{} if d.HasChange("display_name") { @@ -1062,6 +1068,7 @@ func resourceWorkstationsWorkstationConfigUpdate(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutUpdate), + Headers: headers, }) if err != nil { @@ -1109,6 +1116,8 @@ func resourceWorkstationsWorkstationConfigDelete(d *schema.ResourceData, meta in billingProject = bp } + headers := make(http.Header) + log.Printf("[DEBUG] Deleting WorkstationConfig %q", d.Id()) res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ Config: config, @@ -1118,6 +1127,7 @@ func resourceWorkstationsWorkstationConfigDelete(d *schema.ResourceData, meta in UserAgent: userAgent, Body: obj, Timeout: d.Timeout(schema.TimeoutDelete), + Headers: headers, }) if err != nil { return transport_tpg.HandleNotFoundError(err, d, "WorkstationConfig") diff --git 
a/google-beta/sweeper/gcp_sweeper_test.go b/google-beta/sweeper/gcp_sweeper_test.go index fcdd811c8d..d6fee8008b 100644 --- a/google-beta/sweeper/gcp_sweeper_test.go +++ b/google-beta/sweeper/gcp_sweeper_test.go @@ -105,6 +105,7 @@ import ( _ "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/orgpolicy" _ "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/osconfig" _ "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/oslogin" + _ "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/parallelstore" _ "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/privateca" _ "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/publicca" _ "github.com/hashicorp/terraform-provider-google-beta/google-beta/services/pubsub" diff --git a/google-beta/tpgresource/annotations.go b/google-beta/tpgresource/annotations.go index cf202ff37c..fe1330f3b0 100644 --- a/google-beta/tpgresource/annotations.go +++ b/google-beta/tpgresource/annotations.go @@ -54,6 +54,17 @@ func SetMetadataAnnotationsDiff(_ context.Context, d *schema.ResourceDiff, meta return nil } + // Fix a bug where the computed, nested "annotations" field disappears from the Terraform plan. + // https://github.com/hashicorp/terraform-provider-google/issues/17756 + // The bug is introduced by calling SetNew on the "metadata" field with an object that includes "effective_annotations". + // "effective_annotations" cannot be set directly because of a bug where SetNew doesn't work on nested fields + // in the Terraform SDK. 
+ // https://github.com/hashicorp/terraform-plugin-sdk/issues/459 + values := d.GetRawPlan().GetAttr("metadata").AsValueSlice() + if len(values) > 0 && !values[0].GetAttr("annotations").IsWhollyKnown() { + return nil + } + raw := d.Get("metadata.0.annotations") if raw == nil { return nil diff --git a/google-beta/tpgresource/labels.go b/google-beta/tpgresource/labels.go index 42bd5647da..969c134946 100644 --- a/google-beta/tpgresource/labels.go +++ b/google-beta/tpgresource/labels.go @@ -139,6 +139,17 @@ func SetMetadataLabelsDiff(_ context.Context, d *schema.ResourceDiff, meta inter return nil } + // Fix a bug where the computed, nested "labels" field disappears from the Terraform plan. + // https://github.com/hashicorp/terraform-provider-google/issues/17756 + // The bug is introduced by calling SetNew on the "metadata" field with an object that includes terraform_labels and effective_labels. + // "terraform_labels" and "effective_labels" cannot be set directly because of a bug where SetNew doesn't work on nested fields + // in the Terraform SDK. 
+ // https://github.com/hashicorp/terraform-plugin-sdk/issues/459 + values := d.GetRawPlan().GetAttr("metadata").AsValueSlice() + if len(values) > 0 && !values[0].GetAttr("labels").IsWhollyKnown() { + return nil + } + raw := d.Get("metadata.0.labels") if raw == nil { return nil diff --git a/google-beta/tpgresource/utils.go b/google-beta/tpgresource/utils.go index 2b46b2ba68..70e54f5883 100644 --- a/google-beta/tpgresource/utils.go +++ b/google-beta/tpgresource/utils.go @@ -26,6 +26,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "golang.org/x/exp/maps" "google.golang.org/api/googleapi" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" @@ -239,6 +240,88 @@ func ExpandStringMap(d TerraformResourceData, key string) map[string]string { return ConvertStringMap(v.(map[string]interface{})) } +// SortStringsByConfigOrder takes slices of strings from a TF config +// and API data, and returns a new slice containing the API data, reordered to match +// the TF config as closely as possible (with new items at the end of the list). 
+func SortStringsByConfigOrder(configData, apiData []string) ([]string, error) { + configOrder := map[string]int{} + for index, item := range configData { + _, ok := configOrder[item] + if ok { + return nil, fmt.Errorf("configData element at %d has duplicate value `%s`", index, item) + } + configOrder[item] = index + } + + apiSeen := map[string]struct{}{} + byConfigIndex := map[int]string{} + newElements := []string{} + for index, item := range apiData { + _, ok := apiSeen[item] + if ok { + return nil, fmt.Errorf("apiData element at %d has duplicate value `%s`", index, item) + } + apiSeen[item] = struct{}{} + configIndex, found := configOrder[item] + if found { + byConfigIndex[configIndex] = item + } else { + newElements = append(newElements, item) + } + } + + // Sort the set config indexes and convert them back to a slice of strings. This removes items present in the config + // but not present in the API response. + configIndexes := maps.Keys(byConfigIndex) + sort.Ints(configIndexes) + result := []string{} + for _, index := range configIndexes { + result = append(result, byConfigIndex[index]) + } + + // Add new elements to the end of the list, sorted alphabetically. + sort.Strings(newElements) + result = append(result, newElements...) + + return result, nil +} + +// SortMapsByConfigOrder takes slices of map[string]interface{} from a TF config +// and API data, and returns a new slice containing the API data, reordered to match +// the TF config as closely as possible (with new items at the end of the list). +// idKey is used to extract a string key from the values in the slice. 
+func SortMapsByConfigOrder(configData, apiData []map[string]interface{}, idKey string) ([]map[string]interface{}, error) { + configIds := make([]string, len(configData)) + for i, item := range configData { + id, ok := item[idKey].(string) + if !ok { + return nil, fmt.Errorf("configData element at %d does not contain string value in key `%s`", i, idKey) + } + configIds[i] = id + } + + apiIds := make([]string, len(apiData)) + apiMap := map[string]map[string]interface{}{} + for i, item := range apiData { + id, ok := item[idKey].(string) + if !ok { + return nil, fmt.Errorf("apiData element at %d does not contain string value in key `%s`", i, idKey) + } + apiIds[i] = id + apiMap[id] = item + } + + sortedIds, err := SortStringsByConfigOrder(configIds, apiIds) + if err != nil { + return nil, err + } + result := []map[string]interface{}{} + for _, id := range sortedIds { + result = append(result, apiMap[id]) + } + return result, nil +} + func ConvertStringMap(v map[string]interface{}) map[string]string { m := make(map[string]string) for k, val := range v { @@ -534,31 +617,6 @@ func Fake404(reasonResourceType, resourceName string) *googleapi.Error { } } -// validate name of the gcs bucket. Guidelines are located at https://cloud.google.com/storage/docs/naming-buckets -// this does not attempt to check for IP addresses or close misspellings of "google" -func CheckGCSName(name string) error { - if strings.HasPrefix(name, "goog") { - return fmt.Errorf("error: bucket name %s cannot start with %q", name, "goog") - } - - if strings.Contains(name, "google") { - return fmt.Errorf("error: bucket name %s cannot contain %q", name, "google") - } - - valid, _ := regexp.MatchString("^[a-z0-9][a-z0-9_.-]{1,220}[a-z0-9]$", name) - if !valid { - return fmt.Errorf("error: bucket name validation failed %v. 
See https://cloud.google.com/storage/docs/naming-buckets", name) - } - - for _, str := range strings.Split(name, ".") { - valid, _ := regexp.MatchString("^[a-z0-9_-]{1,63}$", str) - if !valid { - return fmt.Errorf("error: bucket name validation failed %v", str) - } - } - return nil -} - // CheckGoogleIamPolicy makes assertions about the contents of a google_iam_policy data source's policy_data attribute func CheckGoogleIamPolicy(value string) error { if strings.Contains(value, "\"description\":\"\"") { diff --git a/google-beta/tpgresource/utils_test.go b/google-beta/tpgresource/utils_test.go index 7f92858ee9..85ee1d0299 100644 --- a/google-beta/tpgresource/utils_test.go +++ b/google-beta/tpgresource/utils_test.go @@ -11,7 +11,6 @@ import ( "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "github.com/hashicorp/terraform-provider-google-beta/google-beta/acctest" "github.com/hashicorp/terraform-provider-google-beta/google-beta/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google-beta/google-beta/transport" @@ -39,6 +38,153 @@ var fictionalSchema = map[string]*schema.Schema{ }, } +func TestSortByConfigOrder(t *testing.T) { + cases := map[string]struct { + configData, apiData []string + want []string + wantError bool + }{ + "empty config data and api data": { + configData: []string{}, + apiData: []string{}, + want: []string{}, + }, + "config data with empty api data": { + configData: []string{"one", "two"}, + apiData: []string{}, + want: []string{}, + }, + "empty config data with api data": { + configData: []string{}, + apiData: []string{"one", "two", "three"}, + want: []string{"one", "three", "two"}, + }, + "config data and api data that do not overlap": { + configData: []string{"foo", "bar"}, + apiData: []string{"one", "two", "three"}, + want: []string{"one", "three", "two"}, + }, + "config order is preserved": { + configData: []string{"foo", "two", "bar", "baz"}, + apiData: []string{"one", "two", 
"three", "bar"}, + want: []string{"two", "bar", "one", "three"}, + }, + "config data and api data overlap completely": { + configData: []string{"foo", "bar", "baz", "one", "two", "three"}, + apiData: []string{"baz", "two", "one", "bar", "three", "foo"}, + want: []string{"foo", "bar", "baz", "one", "two", "three"}, + }, + "config data contains duplicates": { + configData: []string{"one", "one"}, + apiData: []string{}, + wantError: true, + }, + "api data contains duplicates": { + configData: []string{}, + apiData: []string{"one", "one"}, + wantError: true, + }, + } + + for tn, tc := range cases { + tc := tc + t.Run(fmt.Sprintf("strings/%s", tn), func(t *testing.T) { + t.Parallel() + sorted, err := tpgresource.SortStringsByConfigOrder(tc.configData, tc.apiData) + if err != nil { + if !tc.wantError { + t.Fatalf("Unexpected error: %s", err) + } + } else if tc.wantError { + t.Fatalf("Wanted error, got none") + } + if !tc.wantError && (len(sorted) > 0 || len(tc.want) > 0) && !reflect.DeepEqual(sorted, tc.want) { + t.Fatalf("sorted result is incorrect. 
want %v, got %v", tc.want, sorted) + } + }) + + t.Run(fmt.Sprintf("maps/%s", tn), func(t *testing.T) { + t.Parallel() + configData := []map[string]interface{}{} + for _, item := range tc.configData { + configData = append(configData, map[string]interface{}{ + "value": item, + }) + } + apiData := []map[string]interface{}{} + for _, item := range tc.apiData { + apiData = append(apiData, map[string]interface{}{ + "value": item, + }) + } + want := []map[string]interface{}{} + for _, item := range tc.want { + want = append(want, map[string]interface{}{ + "value": item, + }) + } + sorted, err := tpgresource.SortMapsByConfigOrder(configData, apiData, "value") + if err != nil { + if !tc.wantError { + t.Fatalf("Unexpected error: %s", err) + } + } else if tc.wantError { + t.Fatalf("Wanted error, got none") + } + if !tc.wantError && (len(sorted) > 0 || len(want) > 0) && !reflect.DeepEqual(sorted, want) { + t.Fatalf("sorted result is incorrect. want %v, got %v", want, sorted) + } + }) + } +} + +func TestSortMapsByConfigOrder(t *testing.T) { + // most cases are covered by TestSortByConfigOrder; this covers map-specific cases. 
+ cases := map[string]struct { + configData, apiData []map[string]interface{} + idKey string + wantError bool + want []map[string]interface{} + }{ + "config data is malformed": { + configData: []map[string]interface{}{{ + "foo": "one", + }, + }, + apiData: []map[string]interface{}{}, + idKey: "bar", + wantError: true, + }, + "api data is malformed": { + configData: []map[string]interface{}{}, + apiData: []map[string]interface{}{{ + "foo": "one", + }, + }, + idKey: "bar", + wantError: true, + }, + } + + for tn, tc := range cases { + tc := tc + t.Run(tn, func(t *testing.T) { + t.Parallel() + sorted, err := tpgresource.SortMapsByConfigOrder(tc.configData, tc.apiData, tc.idKey) + if err != nil { + if !tc.wantError { + t.Fatalf("Unexpected error: %s", err) + } + } else if tc.wantError { + t.Fatalf("Wanted error, got none") + } + if !tc.wantError && (len(sorted) > 0 || len(tc.want) > 0) && !reflect.DeepEqual(sorted, tc.want) { + t.Fatalf("sorted result is incorrect. want %v, got %v", tc.want, sorted) + } + }) + } +} + func TestConvertStringArr(t *testing.T) { input := make([]interface{}, 3) input[0] = "aaa" @@ -1109,44 +1255,3 @@ func TestReplaceVars(t *testing.T) { }) } } - -func TestCheckGCSName(t *testing.T) { - valid63 := acctest.RandString(t, 63) - cases := map[string]bool{ - // Valid - "foobar": true, - "foobar1": true, - "12345": true, - "foo_bar_baz": true, - "foo-bar-baz": true, - "foo-bar_baz1": true, - "foo--bar": true, - "foo__bar": true, - "foo-goog": true, - "foo.goog": true, - valid63: true, - fmt.Sprintf("%s.%s.%s", valid63, valid63, valid63): true, - - // Invalid - "goog-foobar": false, - "foobar-google": false, - "-foobar": false, - "foobar-": false, - "_foobar": false, - "foobar_": false, - "fo": false, - "foo$bar": false, - "foo..bar": false, - acctest.RandString(t, 64): false, - fmt.Sprintf("%s.%s.%s.%s", valid63, valid63, valid63, valid63): false, - } - - for bucketName, valid := range cases { - err := tpgresource.CheckGCSName(bucketName) - if valid 
&& err != nil { - t.Errorf("The bucket name %s was expected to pass validation and did not pass.", bucketName) - } else if !valid && err == nil { - t.Errorf("The bucket name %s was NOT expected to pass validation and passed.", bucketName) - } - } -} diff --git a/google-beta/transport/config.go b/google-beta/transport/config.go index 7f773bccc6..9c29a8bed3 100644 --- a/google-beta/transport/config.go +++ b/google-beta/transport/config.go @@ -286,6 +286,7 @@ type Config struct { OrgPolicyBasePath string OSConfigBasePath string OSLoginBasePath string + ParallelstoreBasePath string PrivatecaBasePath string PublicCABasePath string PubsubBasePath string @@ -435,6 +436,7 @@ const NotebooksBasePathKey = "Notebooks" const OrgPolicyBasePathKey = "OrgPolicy" const OSConfigBasePathKey = "OSConfig" const OSLoginBasePathKey = "OSLogin" +const ParallelstoreBasePathKey = "Parallelstore" const PrivatecaBasePathKey = "Privateca" const PublicCABasePathKey = "PublicCA" const PubsubBasePathKey = "Pubsub" @@ -578,6 +580,7 @@ var DefaultBasePaths = map[string]string{ OrgPolicyBasePathKey: "https://orgpolicy.googleapis.com/v2/", OSConfigBasePathKey: "https://osconfig.googleapis.com/v1beta/", OSLoginBasePathKey: "https://oslogin.googleapis.com/v1/", + ParallelstoreBasePathKey: "https://parallelstore.googleapis.com/v1beta/", PrivatecaBasePathKey: "https://privateca.googleapis.com/v1/", PublicCABasePathKey: "https://publicca.googleapis.com/v1beta1/", PubsubBasePathKey: "https://pubsub.googleapis.com/v1/", @@ -1184,6 +1187,11 @@ func SetEndpointDefaults(d *schema.ResourceData) error { "GOOGLE_OS_LOGIN_CUSTOM_ENDPOINT", }, DefaultBasePaths[OSLoginBasePathKey])) } + if d.Get("parallelstore_custom_endpoint") == "" { + d.Set("parallelstore_custom_endpoint", MultiEnvDefault([]string{ + "GOOGLE_PARALLELSTORE_CUSTOM_ENDPOINT", + }, DefaultBasePaths[ParallelstoreBasePathKey])) + } if d.Get("privateca_custom_endpoint") == "" { d.Set("privateca_custom_endpoint", MultiEnvDefault([]string{ 
"GOOGLE_PRIVATECA_CUSTOM_ENDPOINT", @@ -2312,6 +2320,7 @@ func ConfigureBasePaths(c *Config) { c.OrgPolicyBasePath = DefaultBasePaths[OrgPolicyBasePathKey] c.OSConfigBasePath = DefaultBasePaths[OSConfigBasePathKey] c.OSLoginBasePath = DefaultBasePaths[OSLoginBasePathKey] + c.ParallelstoreBasePath = DefaultBasePaths[ParallelstoreBasePathKey] c.PrivatecaBasePath = DefaultBasePaths[PrivatecaBasePathKey] c.PublicCABasePath = DefaultBasePaths[PublicCABasePathKey] c.PubsubBasePath = DefaultBasePaths[PubsubBasePathKey] diff --git a/google-beta/transport/provider_dcl_endpoints.go b/google-beta/transport/provider_dcl_endpoints.go index e552d361f9..1fc3b107d4 100644 --- a/google-beta/transport/provider_dcl_endpoints.go +++ b/google-beta/transport/provider_dcl_endpoints.go @@ -212,15 +212,15 @@ func ConfigureDCLCustomEndpointAttributesFramework(frameworkSchema *framework_sc } func ProviderDCLConfigure(d *schema.ResourceData, config *Config) interface{} { - config.ApikeysBasePath = d.Get(ApikeysEndpointEntryKey).(string) + // networkConnectivity uses the mmv1 basePath, and assuredworkloads has a location variable in its basePath, so they can't be defined here. 
+ config.ApikeysBasePath = "https://apikeys.googleapis.com/v2/" config.AssuredWorkloadsBasePath = d.Get(AssuredWorkloadsEndpointEntryKey).(string) - config.CloudBuildWorkerPoolBasePath = d.Get(CloudBuildWorkerPoolEndpointEntryKey).(string) - config.CloudResourceManagerBasePath = d.Get(CloudResourceManagerEndpointEntryKey).(string) - config.EventarcBasePath = d.Get(EventarcEndpointEntryKey).(string) - config.FirebaserulesBasePath = d.Get(FirebaserulesEndpointEntryKey).(string) - config.GKEHubFeatureBasePath = d.Get(GKEHubFeatureEndpointEntryKey).(string) - config.NetworkConnectivityBasePath = d.Get(NetworkConnectivityEndpointEntryKey).(string) - config.RecaptchaEnterpriseBasePath = d.Get(RecaptchaEnterpriseEndpointEntryKey).(string) - config.CloudBuildWorkerPoolBasePath = d.Get(CloudBuildWorkerPoolEndpointEntryKey).(string) + config.CloudBuildWorkerPoolBasePath = "https://cloudbuild.googleapis.com/v1/" + config.CloudResourceManagerBasePath = "https://cloudresourcemanager.googleapis.com/" + config.EventarcBasePath = "https://eventarc.googleapis.com/v1/" + config.FirebaserulesBasePath = "https://firebaserules.googleapis.com/v1/" + config.GKEHubFeatureBasePath = "https://gkehub.googleapis.com/v1beta1/" + config.RecaptchaEnterpriseBasePath = "https://recaptchaenterprise.googleapis.com/v1/" + return config } diff --git a/google-beta/transport/transport.go b/google-beta/transport/transport.go index d1f1692883..89ecca2efb 100644 --- a/google-beta/transport/transport.go +++ b/google-beta/transport/transport.go @@ -26,12 +26,16 @@ type SendRequestOptions struct { UserAgent string Body map[string]any Timeout time.Duration + Headers http.Header ErrorRetryPredicates []RetryErrorPredicateFunc ErrorAbortPredicates []RetryErrorPredicateFunc } func SendRequest(opt SendRequestOptions) (map[string]interface{}, error) { - reqHeaders := make(http.Header) + reqHeaders := opt.Headers + if reqHeaders == nil { + reqHeaders = make(http.Header) + } reqHeaders.Set("User-Agent", opt.UserAgent) 
reqHeaders.Set("Content-Type", "application/json") diff --git a/google-beta/verify/validation.go b/google-beta/verify/validation.go index 2d3768197f..710a1a106e 100644 --- a/google-beta/verify/validation.go +++ b/google-beta/verify/validation.go @@ -76,6 +76,16 @@ var ( Rfc6996Asn32BitMin = int64(4200000000) Rfc6996Asn32BitMax = int64(4294967294) GcpRouterPartnerAsn = int64(16550) + + // Format of GCS Bucket Name + // https://cloud.google.com/storage/docs/naming-buckets + GCSNameValidChars = "^[a-z0-9_.-]*$" + GCSNameStartEndChars = "^[a-z0-9].*[a-z0-9]$" + GCSNameLength = "^.{3,222}$" + GCSNameLengthSplit = "^.{1,63}$" + GCSNameCidr = "^[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}$" + GCSNameGoogPrefix = "^goog.*$" + GCSNameContainsGoogle = "^.*google.*$" ) var Rfc1918Networks = []string{ @@ -91,6 +101,44 @@ func ValidateGCEName(v interface{}, k string) (ws []string, errors []error) { return ValidateRegexp(re)(v, k) } + +// ValidateGCSName ensures the name of a GCS bucket matches the requirements for GCS buckets +// https://cloud.google.com/storage/docs/naming-buckets +func ValidateGCSName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + + if !regexp.MustCompile(GCSNameValidChars).MatchString(value) { + errors = append(errors, fmt.Errorf("%q name value can only contain lowercase letters, numeric characters, dashes (-), underscores (_), and dots (.)", value)) + } + + if !regexp.MustCompile(GCSNameStartEndChars).MatchString(value) { + errors = append(errors, fmt.Errorf("%q name value must start and end with a number or letter", value)) + } + + if !regexp.MustCompile(GCSNameLength).MatchString(value) { + errors = append(errors, fmt.Errorf("%q name value must contain 3-63 characters. 
Names containing dots can contain up to 222 characters, but each dot-separated component can be no longer than 63 characters", value)) + } + + for _, str := range strings.Split(value, ".") { + if !regexp.MustCompile(GCSNameLengthSplit).MatchString(str) { + errors = append(errors, fmt.Errorf("%q name value must contain 3-63 characters. Names containing dots can contain up to 222 characters, but each dot-separated component can be no longer than 63 characters", value)) + } + } + + if regexp.MustCompile(GCSNameCidr).MatchString(value) { + errors = append(errors, fmt.Errorf("%q name value cannot be represented as an IP address in dotted-decimal notation (for example, 192.168.5.4)", value)) + } + + if regexp.MustCompile(GCSNameGoogPrefix).MatchString(value) { + errors = append(errors, fmt.Errorf("%q name value cannot begin with the \"goog\" prefix", value)) + } + + if regexp.MustCompile(GCSNameContainsGoogle).MatchString(strings.ReplaceAll(value, "0", "o")) { + errors = append(errors, fmt.Errorf("%q name value cannot contain \"google\" or close misspellings, such as \"g00gle\"", value)) + } + + return +} + // Ensure that the BGP ASN value of Cloud Router is a valid value as per RFC6996 or a value of 16550 func ValidateRFC6996Asn(v interface{}, k string) (ws []string, errors []error) { value := int64(v.(int)) diff --git a/google-beta/verify/validation_test.go b/google-beta/verify/validation_test.go index 3bccabb049..b6e9a27b8d 100644 --- a/google-beta/verify/validation_test.go +++ b/google-beta/verify/validation_test.go @@ -319,3 +319,43 @@ func TestValidateIAMCustomRoleIDRegex(t *testing.T) { t.Errorf("Failed to validate IAMCustomRole IDs: %v", es) } } + +func TestValidateGCSName(t *testing.T) { + x := []StringValidationTestCase{ + // No errors + {TestName: "basic", Value: "foobar"}, + {TestName: "has number", Value: "foobar1"}, + {TestName: "all numbers", Value: "12345"}, + {TestName: "all _", Value: "foo_bar_baz"}, + {TestName: "all -", Value: "foo-bar-baz"}, + 
{TestName: "begins with number", Value: "1foo-bar_baz"}, + {TestName: "ends with number", Value: "foo-bar_baz1"}, + {TestName: "almost an ip", Value: "192.168.5.foo"}, + {TestName: "has _", Value: "foo-bar_baz"}, + {TestName: "--", Value: "foo--bar"}, + {TestName: "__", Value: "foo__bar"}, + {TestName: "-goog", Value: "foo-goog"}, + {TestName: ".goog", Value: "foo.goog"}, + + // With errors + {TestName: "invalid char $", Value: "foo$bar", ExpectError: true}, + {TestName: "has uppercase", Value: "fooBar", ExpectError: true}, + {TestName: "begins with -", Value: "-foobar", ExpectError: true}, + {TestName: "ends with -", Value: "foobar-", ExpectError: true}, + {TestName: "begins with _", Value: "_foobar", ExpectError: true}, + {TestName: "ends with _", Value: "foobar_", ExpectError: true}, + {TestName: "less than 3 chars", Value: "fo", ExpectError: true}, + {TestName: "..", Value: "foo..bar", ExpectError: true}, + {TestName: "greater than 63 chars with no .", Value: "my-really-long-bucket-name-with-invalid-that-does-not-contain-a-period", ExpectError: true}, + {TestName: "greater than 63 chars between .", Value: "my.really-long-bucket-name-with-invalid-that-does-contain-a-period-but.is-too-long", ExpectError: true}, + {TestName: "has goog prefix", Value: "goog-foobar", ExpectError: true}, + {TestName: "almost an ip", Value: "192.168.5.1", ExpectError: true}, + {TestName: "contains google", Value: "foobar-google", ExpectError: true}, + {TestName: "contains close misspelling of google", Value: "foo-go0gle-bar", ExpectError: true}, + } + + es := TestStringValidationCases(x, ValidateGCSName) + if len(es) > 0 { + t.Errorf("Failed to validate GCS names: %v", es) + } +} diff --git a/website/docs/d/active_folder.html.markdown b/website/docs/d/active_folder.html.markdown index 24b65818f0..14954f7009 100644 --- a/website/docs/d/active_folder.html.markdown +++ b/website/docs/d/active_folder.html.markdown @@ -25,6 +25,9 @@ The following arguments are supported: * `parent` - 
(Required) The resource name of the parent Folder or Organization. +* `api_method` - (Optional) The API method to use to search for the folder. Valid values are `LIST` and `SEARCH`. Default value is `LIST`. `LIST` is [strongly consistent](https://cloud.google.com/resource-manager/reference/rest/v3/folders/list#:~:text=list()%20provides%20a-,strongly%20consistent,-view%20of%20the) and requires the `resourcemanager.folders.list` permission on the parent folder, while `SEARCH` is [eventually consistent](https://cloud.google.com/resource-manager/reference/rest/v3/folders/search#:~:text=eventually%20consistent) and only returns folders that the user has the `resourcemanager.folders.get` permission on. + + ## Attributes Reference In addition to the arguments listed above, the following attributes are exported: diff --git a/website/docs/d/storage_bucket.html.markdown b/website/docs/d/storage_bucket.html.markdown index a6e88ab620..18fb11bde3 100644 --- a/website/docs/d/storage_bucket.html.markdown +++ b/website/docs/d/storage_bucket.html.markdown @@ -26,6 +26,8 @@ The following arguments are supported: * `name` - (Required) The name of the bucket. +* `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. If no value is supplied in the configuration or through provider defaults, the data source will use the Compute API to find the project ID that corresponds to the project number returned from the Storage API. Supplying a value for `project` doesn't influence retrieving data about the bucket, but it can be used to prevent use of the Compute API. If you do provide a `project` value, ensure that it is the correct value for that bucket; the data source will not check that the project ID and project number match. 
+ ## Attributes Reference See [google_storage_bucket](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/storage_bucket#argument-reference) resource for details of the available attributes. diff --git a/website/docs/d/storage_bucket_objects.html.markdown b/website/docs/d/storage_bucket_objects.html.markdown new file mode 100644 index 0000000000..9a0cf5991a --- /dev/null +++ b/website/docs/d/storage_bucket_objects.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Cloud Storage" +description: |- + Retrieve information about a set of GCS bucket objects in a GCS bucket. +--- + + +# google\_storage\_bucket\_objects + +Gets existing objects inside an existing bucket in Google Cloud Storage service (GCS). +See [the official documentation](https://cloud.google.com/storage/docs/key-terms#objects) +and [API](https://cloud.google.com/storage/docs/json_api/v1/objects/list). + +## Example Usage + +Example files stored within a bucket. + +```hcl +data "google_storage_bucket_objects" "files" { + bucket = "file-store" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `bucket` - (Required) The name of the containing bucket. +* `match_glob` - (Optional) A glob pattern used to filter results (for example, `foo*bar`). +* `prefix` - (Optional) Filter results to include only objects whose names begin with this prefix. + + +## Attributes Reference + +The following attributes are exported: + +* `bucket_objects` - A list of retrieved objects contained in the provided GCS bucket. Structure is [defined below](#nested_bucket_objects). + +The `bucket_objects` block supports: + +* `content_type` - [Content-Type](https://tools.ietf.org/html/rfc7231#section-3.1.1.5) of the object data. +* `media_link` - A url reference to download this object. +* `name` - The name of the object. +* `self_link` - A url reference to this object. 
+* `storage_class` - The [StorageClass](https://cloud.google.com/storage/docs/storage-classes) of the bucket object. diff --git a/website/docs/d/tags_tag_keys.html.markdown b/website/docs/d/tags_tag_keys.html.markdown new file mode 100644 index 0000000000..5eda33545e --- /dev/null +++ b/website/docs/d/tags_tag_keys.html.markdown @@ -0,0 +1,61 @@ +--- +subcategory: "Tags" +description: |- + Get tag keys within a GCP organization or project. +--- + +# google\_tags\_tag\_keys + +Get tag keys by org or project `parent`. + +## Example Usage + +```tf +data "google_tags_tag_keys" "environment_tag_key" { + parent = "organizations/12345" +} +``` +```tf +data "google_tags_tag_keys" "environment_tag_key" { + parent = "projects/abc" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `parent` - (Required) The resource name of the parent organization or project. It can be in the format `organizations/{org_id}` or `projects/{project_id_or_number}`. + +## Attributes Reference + +In addition to the arguments listed above, the following attributes are exported: + +* `name` - an identifier for the resource with format `tagKeys/{{name}}` + +* `namespaced_name` - + Namespaced name of the TagKey, which is in the format `{parentNamespace}/{shortName}`. + +* `create_time` - + Creation time. + A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". + +* `update_time` - + Update time. + A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". + +* `short_name` - + The user-friendly name for a TagKey. The short name should be unique for TagKeys within the same tag namespace. + +* `parent` - + The resource name of the TagKey's parent. A TagKey can be parented by an Organization or a Project. 
+ +* `description` - + User-assigned description of the TagKey. + +* `purpose` - + A purpose denotes that this Tag is intended for use in policies of a specific policy engine, and will involve that policy engine in management operations involving this Tag. A purpose does not grant a policy engine exclusive rights to the Tag, and it may be referenced by other policy engines. + +* `purpose_data` - + Purpose data corresponds to the policy system that the tag is intended for. See documentation for Purpose for formatting of this field. + diff --git a/website/docs/d/tags_tag_values.html.markdown b/website/docs/d/tags_tag_values.html.markdown new file mode 100644 index 0000000000..04f59ccdc6 --- /dev/null +++ b/website/docs/d/tags_tag_values.html.markdown @@ -0,0 +1,50 @@ +--- +subcategory: "Tags" +description: |- + Get tag values from the parent key. +--- + +# google\_tags\_tag\_values + +Get tag values from a `parent` key. + +## Example Usage + +```tf +data "google_tags_tag_values" "environment_tag_values" { + parent = "tagKeys/56789" +} +``` + +## Argument Reference + +The following arguments are supported: + + +* `parent` - (Required) The resource name of the parent TagKey in the format `tagKeys/{name}`. + +## Attributes Reference + +In addition to the arguments listed above, the following attributes are exported: + +* `name` - an identifier for the resource with format `tagValues/{{name}}` + +* `namespaced_name` - + Namespaced name of the TagValue. + +* `short_name` - + User-assigned short name for TagValue. The short name should be unique for TagValues within the same parent TagKey. + +* `parent` - + The resource name of the new TagValue's parent TagKey. Must be of the form tagKeys/{tag_key_id}. + +* `create_time` - + Creation time. + A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". + +* `update_time` - + Update time. 
+ A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". + +* `description` - + User-assigned description of the TagValue. diff --git a/website/docs/guides/google_project_service.html.markdown b/website/docs/guides/google_project_service.html.markdown index 7c0323e7ce..f3d37baac8 100644 --- a/website/docs/guides/google_project_service.html.markdown +++ b/website/docs/guides/google_project_service.html.markdown @@ -63,7 +63,7 @@ resource "google_project" "my_project" { billing_account = var.billing_account_id } -resource "time_resource" "wait_30_seconds" { +resource "time_sleep" "wait_30_seconds" { depends_on = [google_project.my_project] create_duration = "30s" @@ -74,7 +74,7 @@ resource "google_project_service" "my_service" { service = "firebase.googleapis.com" disable_dependent_services = true - depends_on = [time_resource.wait_30_seconds] + depends_on = [time_sleep.wait_30_seconds] } ``` @@ -106,4 +106,4 @@ resource "google_project_service" "my_service" { disable_dependent_services = true depends_on = [null_resource.delay] } -``` \ No newline at end of file +``` diff --git a/website/docs/guides/using_gke_with_terraform.html.markdown b/website/docs/guides/using_gke_with_terraform.html.markdown index b165da73b6..da5c7b3cd8 100644 --- a/website/docs/guides/using_gke_with_terraform.html.markdown +++ b/website/docs/guides/using_gke_with_terraform.html.markdown @@ -59,6 +59,29 @@ provider "kubernetes" { ) } ``` +However, the above can result in authentication errors over time, because the token recorded in the google_client_config data resource is short-lived (so it expires) and is stored in state. 
Fortunately, the [kubernetes provider can accept valid credentials from an exec-based plugin](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#exec-plugins) to fetch a new token before each Terraform operation (so long as you have the [gke-gcloud-auth-plugin for kubectl installed](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke)), like so: + +```hcl +# Retrieve an access token as the Terraform runner +data "google_client_config" "provider" {} + +data "google_container_cluster" "my_cluster" { + name = "my-cluster" + location = "us-central1" +} + +provider "kubernetes" { + host = "https://${data.google_container_cluster.my_cluster.endpoint}" + token = data.google_client_config.provider.access_token + cluster_ca_certificate = base64decode( + data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate, + ) + exec { + api_version = "client.authentication.k8s.io/v1beta1" + command = "gke-gcloud-auth-plugin" + } +} +``` Alternatively, you can authenticate as another service account on which your Terraform user has been granted the `roles/iam.serviceAccountTokenCreator` diff --git a/website/docs/r/access_context_manager_access_policy.html.markdown b/website/docs/r/access_context_manager_access_policy.html.markdown index 12669d8098..17951d3888 100644 --- a/website/docs/r/access_context_manager_access_policy.html.markdown +++ b/website/docs/r/access_context_manager_access_policy.html.markdown @@ -54,8 +54,8 @@ resource "google_access_context_manager_access_policy" "access-policy" { ```hcl resource "google_project" "project" { - project_id = "acm-test-proj-123" - name = "acm-test-proj-123" + project_id = "my-project-name" + name = "my-project-name" org_id = "123456789" } diff --git a/website/docs/r/alloydb_instance.html.markdown b/website/docs/r/alloydb_instance.html.markdown index 0042f0bcb6..9b435efe68 100644 --- a/website/docs/r/alloydb_instance.html.markdown +++ 
b/website/docs/r/alloydb_instance.html.markdown @@ -242,6 +242,11 @@ The following arguments are supported: Client connection specific configurations. Structure is [documented below](#nested_client_connection_config). +* `network_config` - + (Optional) + Instance-level network configuration. + Structure is [documented below](#nested_network_config). + The `query_insights_config` block supports: @@ -292,6 +297,28 @@ The following arguments are supported: SSL mode. Specifies client-server SSL/TLS connection behavior. Possible values are: `ENCRYPTED_ONLY`, `ALLOW_UNENCRYPTED_AND_ENCRYPTED`. +The `network_config` block supports: + +* `authorized_external_networks` - + (Optional) + A list of external networks authorized to access this instance. This + field is only allowed to be set when `enable_public_ip` is set to + true. + Structure is [documented below](#nested_authorized_external_networks). + +* `enable_public_ip` - + (Optional) + Enables a public IP for the instance. Before disabling this, + clear the list of authorized external networks set on + the same instance. + + +The `authorized_external_networks` block supports: + +* `cidr_range` - + (Optional) + CIDR range for one authorized network of the instance. + ## Attributes Reference In addition to the arguments listed above, the following computed attributes are exported: @@ -319,6 +346,11 @@ In addition to the arguments listed above, the following computed attributes are * `ip_address` - The IP address for the Instance. This is the connection endpoint for an end-user application. +* `public_ip_address` - + The public IP address for the Instance. This is available ONLY when + networkConfig.enablePublicIp is set to true. This is the connection + endpoint for an end-user application. + * `terraform_labels` - The combination of labels configured directly on the resource and default labels configured on the provider. 
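Taken together, the new `network_config` arguments read like the following sketch (a hypothetical configuration — the cluster reference, instance ID, and CIDR below are invented, not taken from this diff):

```hcl
# Hypothetical use of the new network_config block on google_alloydb_instance.
resource "google_alloydb_instance" "primary" {
  cluster       = google_alloydb_cluster.main.name
  instance_id   = "primary-instance"
  instance_type = "PRIMARY"

  network_config {
    # Exposes the instance on a public IP; clear authorized_external_networks
    # before setting this back to false.
    enable_public_ip = true

    # Only allowed while enable_public_ip is true.
    authorized_external_networks {
      cidr_range = "203.0.113.0/24"
    }
  }
}
```

The resulting endpoint is then reported in the computed `public_ip_address` attribute.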
diff --git a/website/docs/r/apigee_environment.html.markdown b/website/docs/r/apigee_environment.html.markdown index 402fd67e7c..4a7f614309 100644 --- a/website/docs/r/apigee_environment.html.markdown +++ b/website/docs/r/apigee_environment.html.markdown @@ -123,6 +123,10 @@ The following arguments are supported: An Apigee org can support heterogeneous Environments. Possible values are: `ENVIRONMENT_TYPE_UNSPECIFIED`, `BASE`, `INTERMEDIATE`, `COMPREHENSIVE`. +* `forward_proxy_uri` - + (Optional) + URI of the forward proxy to be applied to the runtime instances in this environment. Must be in the format of {scheme}://{hostname}:{port}. Note that the scheme must be one of "http" or "https", and the port must be supplied. + The `node_config` block supports: diff --git a/website/docs/r/apigee_organization.html.markdown b/website/docs/r/apigee_organization.html.markdown index 1bc68938b6..8ddea556be 100644 --- a/website/docs/r/apigee_organization.html.markdown +++ b/website/docs/r/apigee_organization.html.markdown @@ -209,6 +209,21 @@ The following arguments are supported: (Optional) Primary GCP region for analytics data storage. For valid values, see [Create an Apigee organization](https://cloud.google.com/apigee/docs/api-platform/get-started/create-org). +* `api_consumer_data_location` - + (Optional) + This field is needed only for customers using non-default data residency regions. + Apigee stores some control plane data only in a single region. + This field determines which single region Apigee should use. + +* `api_consumer_data_encryption_key_name` - + (Optional) + Cloud KMS key name used for encrypting API consumer data. + +* `control_plane_encryption_key_name` - + (Optional) + Cloud KMS key name used for encrypting control plane data that is stored in a multi-region. + Only used for the data residency regions "US" or "EU". + * `authorized_network` - (Optional) Compute Engine network used for Service Networking to be peered with Apigee runtime instances. 
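The three new data-residency arguments above would typically be set together; a hedged sketch (the project ID, regions, and KMS key references below are placeholders, not values from this diff):

```hcl
# Hypothetical Apigee org using non-default data residency with CMEK.
resource "google_apigee_organization" "org" {
  project_id                            = "my-project"
  analytics_region                      = "europe-west1"
  api_consumer_data_location            = "europe-west1"
  api_consumer_data_encryption_key_name = google_kms_crypto_key.consumer_data.id
  # Only used for the "US" or "EU" data residency multi-regions.
  control_plane_encryption_key_name     = google_kms_crypto_key.control_plane.id
  authorized_network                    = google_compute_network.apigee.id
}
```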
diff --git a/website/docs/r/apigee_sync_authorization.html.markdown b/website/docs/r/apigee_sync_authorization.html.markdown index 27869077eb..02478463d2 100644 --- a/website/docs/r/apigee_sync_authorization.html.markdown +++ b/website/docs/r/apigee_sync_authorization.html.markdown @@ -57,12 +57,10 @@ resource "google_service_account" "service_account" { display_name = "Service Account" } -resource "google_project_iam_binding" "synchronizer-iam" { +resource "google_project_iam_member" "synchronizer-iam" { project = google_project.project.project_id role = "roles/apigee.synchronizerManager" - members = [ - "serviceAccount:${google_service_account.service_account.email}", - ] + member = "serviceAccount:${google_service_account.service_account.email}" } resource "google_apigee_sync_authorization" "apigee_sync_authorization" { @@ -70,7 +68,7 @@ resource "google_apigee_sync_authorization" "apigee_sync_authorization" { identities = [ "serviceAccount:${google_service_account.service_account.email}", ] - depends_on = [google_project_iam_binding.synchronizer-iam] + depends_on = [google_project_iam_member.synchronizer-iam] } ``` diff --git a/website/docs/r/artifact_registry_repository.html.markdown b/website/docs/r/artifact_registry_repository.html.markdown index e048c0b9af..0896715635 100644 --- a/website/docs/r/artifact_registry_repository.html.markdown +++ b/website/docs/r/artifact_registry_repository.html.markdown @@ -255,49 +255,258 @@ resource "google_artifact_registry_repository" "my-repo" { } ``` -## Example Usage - Artifact Registry Repository Remote Custom +## Example Usage - Artifact Registry Repository Remote Dockerhub Auth ```hcl data "google_project" "project" {} -resource "google_secret_manager_secret" "example-custom-remote-secret" { +resource "google_secret_manager_secret" "example-remote-secret" { secret_id = "example-secret" replication { auto {} } } -resource "google_secret_manager_secret_version" "example-custom-remote-secret_version" { - secret = 
google_secret_manager_secret.example-custom-remote-secret.id +resource "google_secret_manager_secret_version" "example-remote-secret_version" { + secret = google_secret_manager_secret.example-remote-secret.id secret_data = "remote-password" } resource "google_secret_manager_secret_iam_member" "secret-access" { - secret_id = google_secret_manager_secret.example-custom-remote-secret.id + secret_id = google_secret_manager_secret.example-remote-secret.id role = "roles/secretmanager.secretAccessor" member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" } resource "google_artifact_registry_repository" "my-repo" { location = "us-central1" - repository_id = "example-custom-remote" - description = "example remote docker repository with credentials" + repository_id = "example-dockerhub-remote" + description = "example remote dockerhub repository with credentials" format = "DOCKER" mode = "REMOTE_REPOSITORY" remote_repository_config { description = "docker hub with custom credentials" + disable_upstream_validation = true docker_repository { public_repository = "DOCKER_HUB" } upstream_credentials { username_password_credentials { username = "remote-username" - password_secret_version = google_secret_manager_secret_version.example-custom-remote-secret_version.name + password_secret_version = google_secret_manager_secret_version.example-remote-secret_version.name + } + } + } +} +``` + +## Example Usage - Artifact Registry Repository Remote Docker Custom With Auth + + +```hcl +data "google_project" "project" {} + +resource "google_secret_manager_secret" "example-remote-secret" { + secret_id = "example-secret" + replication { + auto {} + } +} + +resource "google_secret_manager_secret_version" "example-remote-secret_version" { + secret = google_secret_manager_secret.example-remote-secret.id + secret_data = "remote-password" +} + +resource "google_secret_manager_secret_iam_member" "secret-access" { + secret_id = 
google_secret_manager_secret.example-remote-secret.id + role = "roles/secretmanager.secretAccessor" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" +} + +resource "google_artifact_registry_repository" "my-repo" { + location = "us-central1" + repository_id = "example-docker-custom-remote" + description = "example remote custom docker repository with credentials" + format = "DOCKER" + mode = "REMOTE_REPOSITORY" + remote_repository_config { + description = "custom docker remote with credentials" + disable_upstream_validation = true + docker_repository { + custom_repository { + uri = "https://registry-1.docker.io" + } + } + upstream_credentials { + username_password_credentials { + username = "remote-username" + password_secret_version = google_secret_manager_secret_version.example-remote-secret_version.name + } + } + } +} +``` + +## Example Usage - Artifact Registry Repository Remote Maven Custom With Auth + + +```hcl +data "google_project" "project" {} + +resource "google_secret_manager_secret" "example-remote-secret" { + secret_id = "example-secret" + replication { + auto {} + } +} + +resource "google_secret_manager_secret_version" "example-remote-secret_version" { + secret = google_secret_manager_secret.example-remote-secret.id + secret_data = "remote-password" +} + +resource "google_secret_manager_secret_iam_member" "secret-access" { + secret_id = google_secret_manager_secret.example-remote-secret.id + role = "roles/secretmanager.secretAccessor" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" +} + +resource "google_artifact_registry_repository" "my-repo" { + location = "us-central1" + repository_id = "example-maven-custom-remote" + description = "example remote custom maven repository with credentials" + format = "MAVEN" + mode = "REMOTE_REPOSITORY" + remote_repository_config { + description = "custom maven remote with 
credentials" + disable_upstream_validation = true + maven_repository { + custom_repository { + uri = "https://my.maven.registry" + } + } + upstream_credentials { + username_password_credentials { + username = "remote-username" + password_secret_version = google_secret_manager_secret_version.example-remote-secret_version.name + } + } + } +} +``` + +## Example Usage - Artifact Registry Repository Remote Npm Custom With Auth + + +```hcl +data "google_project" "project" {} + +resource "google_secret_manager_secret" "example-remote-secret" { + secret_id = "example-secret" + replication { + auto {} + } +} + +resource "google_secret_manager_secret_version" "example-remote-secret_version" { + secret = google_secret_manager_secret.example-remote-secret.id + secret_data = "remote-password" +} + +resource "google_secret_manager_secret_iam_member" "secret-access" { + secret_id = google_secret_manager_secret.example-remote-secret.id + role = "roles/secretmanager.secretAccessor" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" +} + +resource "google_artifact_registry_repository" "my-repo" { + location = "us-central1" + repository_id = "example-npm-custom-remote" + description = "example remote custom npm repository with credentials" + format = "NPM" + mode = "REMOTE_REPOSITORY" + remote_repository_config { + description = "custom npm with credentials" + disable_upstream_validation = true + npm_repository { + custom_repository { + uri = "https://my.npm.registry" + } + } + upstream_credentials { + username_password_credentials { + username = "remote-username" + password_secret_version = google_secret_manager_secret_version.example-remote-secret_version.name + } + } + } +} +``` + +## Example Usage - Artifact Registry Repository Remote Python Custom With Auth + + +```hcl +data "google_project" "project" {} + +resource "google_secret_manager_secret" "example-remote-secret" { + secret_id = "example-secret" + 
replication { + auto {} + } +} + +resource "google_secret_manager_secret_version" "example-remote-secret_version" { + secret = google_secret_manager_secret.example-remote-secret.id + secret_data = "remote-password" +} + +resource "google_secret_manager_secret_iam_member" "secret-access" { + secret_id = google_secret_manager_secret.example-remote-secret.id + role = "roles/secretmanager.secretAccessor" + member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-artifactregistry.iam.gserviceaccount.com" +} + +resource "google_artifact_registry_repository" "my-repo" { + location = "us-central1" + repository_id = "example-python-custom-remote" + description = "example remote custom python repository with credentials" + format = "PYTHON" + mode = "REMOTE_REPOSITORY" + remote_repository_config { + description = "custom python with credentials" + disable_upstream_validation = true + python_repository { + custom_repository { + uri = "https://my.python.registry" + } + } + upstream_credentials { + username_password_credentials { + username = "remote-username" + password_secret_version = google_secret_manager_secret_version.example-remote-secret_version.name } } } @@ -539,6 +748,11 @@ The following arguments are supported: The credentials used to access the remote repository. Structure is [documented below](#nested_upstream_credentials). +* `disable_upstream_validation` - + (Optional) + If true, the remote repository upstream and upstream credentials will + not be validated. + The `apt_repository` block supports: @@ -567,6 +781,18 @@ The following arguments are supported: Default value is `DOCKER_HUB`. Possible values are: `DOCKER_HUB`. +* `custom_repository` - + (Optional) + Settings for a remote repository with a custom uri. + Structure is [documented below](#nested_custom_repository). + + +The `custom_repository` block supports: + +* `uri` - + (Optional) + Specific uri to the registry, e.g. 
`"https://registry-1.docker.io"` + The `maven_repository` block supports: * `public_repository` - @@ -575,6 +801,18 @@ The following arguments are supported: Default value is `MAVEN_CENTRAL`. Possible values are: `MAVEN_CENTRAL`. +* `custom_repository` - + (Optional) + Settings for a remote repository with a custom uri. + Structure is [documented below](#nested_custom_repository). + + +The `custom_repository` block supports: + +* `uri` - + (Optional) + Specific uri to the registry, e.g. `"https://repo.maven.apache.org/maven2"` + The `npm_repository` block supports: * `public_repository` - @@ -583,6 +821,18 @@ The following arguments are supported: Default value is `NPMJS`. Possible values are: `NPMJS`. +* `custom_repository` - + (Optional) + Settings for a remote repository with a custom uri. + Structure is [documented below](#nested_custom_repository). + + +The `custom_repository` block supports: + +* `uri` - + (Optional) + Specific uri to the registry, e.g. `"https://registry.npmjs.org"` + The `python_repository` block supports: * `public_repository` - @@ -591,6 +841,18 @@ The following arguments are supported: Default value is `PYPI`. Possible values are: `PYPI`. +* `custom_repository` - + (Optional) + Settings for a remote repository with a custom uri. + Structure is [documented below](#nested_custom_repository). + + +The `custom_repository` block supports: + +* `uri` - + (Optional) + Specific uri to the registry, e.g. 
`"https://pypi.io"` + The `yum_repository` block supports: * `public_repository` - diff --git a/website/docs/r/bigquery_datapolicy_data_policy.html.markdown b/website/docs/r/bigquery_datapolicy_data_policy.html.markdown index 43eb94159d..2f251ccb34 100644 --- a/website/docs/r/bigquery_datapolicy_data_policy.html.markdown +++ b/website/docs/r/bigquery_datapolicy_data_policy.html.markdown @@ -38,24 +38,76 @@ To get more information about DataPolicy, see: ```hcl resource "google_bigquery_datapolicy_data_policy" "data_policy" { - location = "us-central1" - data_policy_id = "data_policy" - policy_tag = google_data_catalog_policy_tag.policy_tag.name - data_policy_type = "COLUMN_LEVEL_SECURITY_POLICY" - } + location = "us-central1" + data_policy_id = "data_policy" + policy_tag = google_data_catalog_policy_tag.policy_tag.name + data_policy_type = "COLUMN_LEVEL_SECURITY_POLICY" +} + +resource "google_data_catalog_policy_tag" "policy_tag" { + taxonomy = google_data_catalog_taxonomy.taxonomy.id + display_name = "Low security" + description = "A policy tag normally associated with low security items" +} + +resource "google_data_catalog_taxonomy" "taxonomy" { + region = "us-central1" + display_name = "taxonomy" + description = "A collection of policy tags" + activated_policy_types = ["FINE_GRAINED_ACCESS_CONTROL"] +} +``` + +## Example Usage - Bigquery Datapolicy Data Policy Routine - resource "google_data_catalog_policy_tag" "policy_tag" { - taxonomy = google_data_catalog_taxonomy.taxonomy.id - display_name = "Low security" - description = "A policy tag normally associated with low security items" + +```hcl +resource "google_bigquery_datapolicy_data_policy" "data_policy" { + location = "us-central1" + data_policy_id = "data_policy" + policy_tag = google_data_catalog_policy_tag.policy_tag.name + data_policy_type = "DATA_MASKING_POLICY" + data_masking_policy { + routine = google_bigquery_routine.custom_masking_routine.id } +} + +resource "google_data_catalog_policy_tag" 
"policy_tag" { + taxonomy = google_data_catalog_taxonomy.taxonomy.id + display_name = "Low security" + description = "A policy tag normally associated with low security items" +} - resource "google_data_catalog_taxonomy" "taxonomy" { - region = "us-central1" - display_name = "taxonomy" - description = "A collection of policy tags" - activated_policy_types = ["FINE_GRAINED_ACCESS_CONTROL"] - } +resource "google_data_catalog_taxonomy" "taxonomy" { + region = "us-central1" + display_name = "taxonomy" + description = "A collection of policy tags" + activated_policy_types = ["FINE_GRAINED_ACCESS_CONTROL"] +} + +resource "google_bigquery_dataset" "test" { + dataset_id = "dataset_id" + location = "us-central1" +} + +resource "google_bigquery_routine" "custom_masking_routine" { + dataset_id = google_bigquery_dataset.test.dataset_id + routine_id = "custom_masking_routine" + routine_type = "SCALAR_FUNCTION" + language = "SQL" + data_governance_type = "DATA_MASKING" + definition_body = "SAFE.REGEXP_REPLACE(ssn, '[0-9]', 'X')" + return_type = "{\"typeKind\" : \"STRING\"}" + + arguments { + name = "ssn" + data_type = "{\"typeKind\" : \"STRING\"}" + } +} ``` ## Argument Reference @@ -96,10 +148,14 @@ The following arguments are supported: The `data_masking_policy` block supports: * `predefined_expression` - - (Required) + (Optional) The available masking rules. Learn more here: https://cloud.google.com/bigquery/docs/column-data-masking-intro#masking_options. Possible values are: `SHA256`, `ALWAYS_NULL`, `DEFAULT_MASKING_VALUE`, `LAST_FOUR_CHARACTERS`, `FIRST_FOUR_CHARACTERS`, `EMAIL_MASK`, `DATE_YEAR_MASK`. +* `routine` - + (Optional) + The name of the BigQuery routine that contains the custom masking routine, in the format of projects/{projectNumber}/datasets/{dataset_id}/routines/{routine_id}. 
+ ## Attributes Reference In addition to the arguments listed above, the following computed attributes are exported: diff --git a/website/docs/r/billing_budget.html.markdown b/website/docs/r/billing_budget.html.markdown index d42b7516c7..10238947ae 100644 --- a/website/docs/r/billing_budget.html.markdown +++ b/website/docs/r/billing_budget.html.markdown @@ -307,6 +307,12 @@ The following arguments are supported: using threshold rules. Structure is [documented below](#nested_all_updates_rule). +* `ownership_scope` - + (Optional) + The ownership scope of the budget. The ownership scope and users' + IAM permissions determine who has full access to the budget's data. + Possible values are: `OWNERSHIP_SCOPE_UNSPECIFIED`, `ALL_USERS`, `BILLING_ACCOUNT`. + The `budget_filter` block supports: diff --git a/website/docs/r/cloud_run_v2_job.html.markdown b/website/docs/r/cloud_run_v2_job.html.markdown index 5e5ff42b77..2606624c9d 100644 --- a/website/docs/r/cloud_run_v2_job.html.markdown +++ b/website/docs/r/cloud_run_v2_job.html.markdown @@ -561,6 +561,11 @@ The following arguments are supported: Cloud Storage bucket mounted as a volume using GCSFuse. This feature requires the launch stage to be set to ALPHA or BETA. Structure is [documented below](#nested_gcs). +* `nfs` - + (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + NFS share mounted as a volume. This feature requires the launch stage to be set to ALPHA or BETA. + Structure is [documented below](#nested_nfs). + The `secret` block supports: @@ -620,6 +625,20 @@ The following arguments are supported: (Optional) If true, mount this volume as read-only in all mounts. If false, mount this volume as read-write. +The `nfs` block supports: + +* `server` - + (Required) + Hostname or IP address of the NFS server. + +* `path` - + (Optional) + Path that is exported by the NFS server. + +* `read_only` - + (Optional) + If true, mount this volume as read-only in all mounts. 
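+A minimal sketch of an `nfs` volume mount (beta). The server address, export path, job name, and image below are hypothetical placeholders:
+
+```hcl
+resource "google_cloud_run_v2_job" "nfs_job" {
+  provider     = google-beta
+  name         = "nfs-job"
+  location     = "us-central1"
+  launch_stage = "BETA" # NFS volumes require launch stage ALPHA or BETA
+
+  template {
+    template {
+      containers {
+        image = "us-docker.pkg.dev/cloudrun/container/job"
+        volume_mounts {
+          name       = "nfs-volume"
+          mount_path = "/mnt/nfs"
+        }
+      }
+      volumes {
+        name = "nfs-volume"
+        nfs {
+          server    = "10.0.0.5"  # hostname or IP of the NFS server
+          path      = "/exports"  # path exported by the server
+          read_only = true
+        }
+      }
+    }
+  }
+}
+```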
+ The `vpc_access` block supports: * `connector` - diff --git a/website/docs/r/cloudfunctions2_function.html.markdown b/website/docs/r/cloudfunctions2_function.html.markdown index cbac3ad893..64cc8e15d2 100644 --- a/website/docs/r/cloudfunctions2_function.html.markdown +++ b/website/docs/r/cloudfunctions2_function.html.markdown @@ -446,6 +446,90 @@ resource "google_cloudfunctions2_function" "function" { } } ``` +## Example Usage - Cloudfunctions2 Basic Builder + + +```hcl +locals { + project = "my-project-name" # Google Cloud Platform Project ID +} + +resource "google_service_account" "account" { + account_id = "gcf-sa" + display_name = "Test Service Account" +} + +resource "google_project_iam_member" "log_writer" { + project = google_service_account.account.project + role = "roles/logging.logWriter" + member = "serviceAccount:${google_service_account.account.email}" +} + +resource "google_project_iam_member" "artifact_registry_writer" { + project = google_service_account.account.project + role = "roles/artifactregistry.writer" + member = "serviceAccount:${google_service_account.account.email}" +} + +resource "google_project_iam_member" "storage_object_admin" { + project = google_service_account.account.project + role = "roles/storage.objectAdmin" + member = "serviceAccount:${google_service_account.account.email}" +} + +resource "google_storage_bucket" "bucket" { + name = "${local.project}-gcf-source" # Every bucket name must be globally unique + location = "US" + uniform_bucket_level_access = true +} + +resource "google_storage_bucket_object" "object" { + name = "function-source.zip" + bucket = google_storage_bucket.bucket.name + source = "function-source.zip" # Add path to the zipped function source code +} + +# builder permissions need to stabilize before the builder can pull the source zip +resource "time_sleep" "wait_60s" { + create_duration = "60s" + + depends_on = [ + google_project_iam_member.log_writer, + google_project_iam_member.artifact_registry_writer, +
google_project_iam_member.storage_object_admin, + ] +} + +resource "google_cloudfunctions2_function" "function" { + name = "function-v2" + location = "us-central1" + description = "a new function" + + build_config { + runtime = "nodejs16" + entry_point = "helloHttp" # Set the entry point + source { + storage_source { + bucket = google_storage_bucket.bucket.name + object = google_storage_bucket_object.object.name + } + } + service_account = google_service_account.account.id + } + + service_config { + max_instance_count = 1 + available_memory = "256M" + timeout_seconds = 60 + } + + depends_on = [time_sleep.wait_60s] +} + +output "function_uri" { + value = google_cloudfunctions2_function.function.service_config[0].uri +} +``` ## Example Usage - Cloudfunctions2 Secret Env @@ -850,6 +934,10 @@ The following arguments are supported: (Optional) User managed repository created in Artifact Registry optionally with a customer managed encryption key. +* `service_account` - + (Optional) + The fully-qualified name of the service account to be used for building the container. + The `source` block supports: diff --git a/website/docs/r/composer_user_workloads_secret.html.markdown b/website/docs/r/composer_user_workloads_secret.html.markdown new file mode 100644 index 0000000000..86428f41d6 --- /dev/null +++ b/website/docs/r/composer_user_workloads_secret.html.markdown @@ -0,0 +1,103 @@ +--- +subcategory: "Cloud Composer" +description: |- + User workloads Secret used by Airflow tasks that run with Kubernetes Executor or KubernetesPodOperator. +--- + +# google\_composer\_user\_workloads\_secret + +~> **Warning:** These resources are in beta, and should be used with the terraform-provider-google-beta provider. +See [Provider Versions](https://terraform.io/docs/providers/google/guides/provider_versions.html) for more details on beta resources. + +User workloads Secret used by Airflow tasks that run with Kubernetes Executor or KubernetesPodOperator. 
+Intended for Composer 3 Environments. + +## Example Usage + +```hcl +resource "google_composer_environment" "example" { + name = "example-environment" + project = "example-project" + region = "us-central1" + config { + software_config { + image_version = "example-image-version" + } + } +} + +resource "google_composer_user_workloads_secret" "example" { + name = "example-secret" + project = "example-project" + region = "us-central1" + environment = google_composer_environment.example.name + data = { + email: base64encode("example-email"), + password: base64encode("example-password"), + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - + (Required) + Name of the Kubernetes Secret. + +* `region` - + (Optional) + The location or Compute Engine region for the environment. + +* `project` - + (Optional) + The ID of the project in which the resource belongs. + If it is not provided, the provider project is used. + +* `environment` - + (Required) + Environment where the Kubernetes Secret will be stored and used. + +* `data` - + (Optional) + The "data" field of Kubernetes Secret, organized in key-value pairs, + which can contain sensitive values such as a password, a token, or a key. + Content of this field will not be displayed in CLI output, + but it will be stored in the Terraform state file. To protect sensitive data, + follow the best practices outlined in the HashiCorp documentation: + https://developer.hashicorp.com/terraform/language/state/sensitive-data. + The values for all keys must be base64-encoded strings.
+ For details see: https://kubernetes.io/docs/concepts/configuration/secret/ + + + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{region}}/environments/{{environment}}/userWorkloadsSecrets/{{name}}` + +## Import + +Secret can be imported using any of these accepted formats: + +* `projects/{{project}}/locations/{{region}}/environments/{{environment}}/userWorkloadsSecrets/{{name}}` +* `{{project}}/{{region}}/{{environment}}/{{name}}` +* `{{name}}` + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import User Workloads Secret using one of the formats above. For example: + +```tf +import { + id = "projects/{{project}}/locations/{{region}}/environments/{{environment}}/userWorkloadsSecrets/{{name}}" + to = google_composer_user_workloads_secret.example +} +``` + +When using the [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import), Environment can be imported using one of the formats above. For example: + +``` +$ terraform import google_composer_user_workloads_secret.example projects/{{project}}/locations/{{region}}/environments/{{environment}}/userWorkloadsSecrets/{{name}} +$ terraform import google_composer_user_workloads_secret.example {{project}}/{{region}}/{{environment}}/{{name}} +$ terraform import google_composer_user_workloads_secret.example {{name}} +``` diff --git a/website/docs/r/compute_instance_group_manager.html.markdown b/website/docs/r/compute_instance_group_manager.html.markdown index b76ff77395..bb9f4ee665 100644 --- a/website/docs/r/compute_instance_group_manager.html.markdown +++ b/website/docs/r/compute_instance_group_manager.html.markdown @@ -164,7 +164,9 @@ group. You can specify only one value. 
Structure is [documented below](#nested_a * `stateful_external_ip` - (Optional) External network IPs assigned to the instances that will be preserved on instance delete, update, etc. This map is keyed with the network interface name. Structure is [documented below](#nested_stateful_external_ip). -* `update_policy` - (Optional) The update policy for this managed instance group. Structure is [documented below](#nested_update_policy). For more information, see the [official documentation](https://cloud.google.com/compute/docs/instance-groups/updating-managed-instance-groups) and [API](https://cloud.google.com/compute/docs/reference/rest/v1/instanceGroupManagers/patch) +* `update_policy` - (Optional) The update policy for this managed instance group. Structure is [documented below](#nested_update_policy). For more information, see the [official documentation](https://cloud.google.com/compute/docs/instance-groups/updating-managed-instance-groups) and [API](https://cloud.google.com/compute/docs/reference/rest/v1/instanceGroupManagers/patch). + +* `params` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Input-only additional params for instance group manager creation. Structure is [documented below](#nested_params). For more information, see [API](https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroupManagers/insert). - - - @@ -306,6 +308,18 @@ one of which has a `target_size.percent` of `60` will create 2 instances of that * `delete_rule` - (Optional), A value that prescribes what should happen to the external ip when the VM instance is deleted. The available options are `NEVER` and `ON_PERMANENT_INSTANCE_DELETION`. `NEVER` - detach the ip when the VM is deleted, but do not delete the ip. `ON_PERMANENT_INSTANCE_DELETION` will delete the external ip when the VM is permanently deleted from the instance group.
+The `params` block supports: + +```hcl +params { + resource_manager_tags = { + "tagKeys/123": "tagValues/123" + } +} +``` + +* `resource_manager_tags` - (Optional) Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456. For more information, see [Manage tags for resources](https://cloud.google.com/compute/docs/tag-resources). + ## Attributes Reference In addition to the arguments listed above, the following computed attributes are diff --git a/website/docs/r/compute_instance_settings.html.markdown b/website/docs/r/compute_instance_settings.html.markdown index 0ad6016d28..ed10dbef70 100644 --- a/website/docs/r/compute_instance_settings.html.markdown +++ b/website/docs/r/compute_instance_settings.html.markdown @@ -21,8 +21,6 @@ description: |- Represents an Instance Settings resource. Instance settings are centralized configuration parameters that allow users to configure the default values for specific VM parameters that are normally set using GCE instance API methods. -~> **Warning:** This resource is in beta, and should be used with the terraform-provider-google-beta provider. -See [Provider Versions](https://terraform.io/docs/providers/google/guides/provider_versions.html) for more details on beta resources. To get more information about InstanceSettings, see: @@ -41,7 +39,6 @@ To get more information about InstanceSettings, see: ```hcl resource "google_compute_instance_settings" "gce_instance_settings" { - provider = google-beta zone = "us-east7-b" metadata { items = { diff --git a/website/docs/r/compute_region_instance_group_manager.html.markdown b/website/docs/r/compute_region_instance_group_manager.html.markdown index 052747cdee..4805d1fd50 100644 --- a/website/docs/r/compute_region_instance_group_manager.html.markdown +++ b/website/docs/r/compute_region_instance_group_manager.html.markdown @@ -173,6 +173,8 @@ group.
You can specify one or more values. For more information, see the [offici * `stateful_external_ip` - (Optional) External network IPs assigned to the instances that will be preserved on instance delete, update, etc. This map is keyed with the network interface name. Structure is [documented below](#nested_stateful_external_ip). +* `params` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Input-only additional params for instance group manager creation. Structure is [documented below](#nested_params). For more information, see [API](https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroupManagers/insert). + - - - The `update_policy` block supports: @@ -317,6 +319,18 @@ one of which has a `target_size.percent` of `60` will create 2 instances of that * `delete_rule` - (Optional), A value that prescribes what should happen to the external ip when the VM instance is deleted. The available options are `NEVER` and `ON_PERMANENT_INSTANCE_DELETION`. `NEVER` - detach the ip when the VM is deleted, but do not delete the ip. `ON_PERMANENT_INSTANCE_DELETION` will delete the external ip when the VM is permanently deleted from the instance group. +The `params` block supports: + +```hcl +params { + resource_manager_tags = { + "tagKeys/123": "tagValues/123" + } +} +``` + +* `resource_manager_tags` - (Optional) Resource manager tags to bind to the managed instance group. The tags are key-value pairs. Keys must be in the format tagKeys/123 and values in the format tagValues/456.
For more information, see [Manage tags for resources](https://cloud.google.com/compute/docs/tag-resources) + ## Attributes Reference In addition to the arguments listed above, the following computed attributes are diff --git a/website/docs/r/compute_region_target_https_proxy.html.markdown b/website/docs/r/compute_region_target_https_proxy.html.markdown index 9d766e9690..a5d8bd28db 100644 --- a/website/docs/r/compute_region_target_https_proxy.html.markdown +++ b/website/docs/r/compute_region_target_https_proxy.html.markdown @@ -94,6 +94,117 @@ resource "google_compute_region_health_check" "default" { } } ``` + +## Example Usage - Region Target Https Proxy Mtls + + +```hcl +data "google_project" "project" { + provider = google-beta +} + +resource "google_compute_region_target_https_proxy" "default" { + provider = google-beta + region = "us-central1" + name = "test-mtls-proxy" + url_map = google_compute_region_url_map.default.id + ssl_certificates = [google_compute_region_ssl_certificate.default.id] + server_tls_policy = google_network_security_server_tls_policy.default.id +} + +resource "google_certificate_manager_trust_config" "default" { + provider = google-beta + location = "us-central1" + name = "my-trust-config" + description = "sample description for trust config" + + trust_stores { + trust_anchors { + pem_certificate = file("test-fixtures/ca_cert.pem") + } + intermediate_cas { + pem_certificate = file("test-fixtures/ca_cert.pem") + } + } + + labels = { + foo = "bar" + } +} + +resource "google_network_security_server_tls_policy" "default" { + provider = google-beta + location = "us-central1" + name = "my-tls-policy" + description = "my description" + allow_open = "false" + mtls_policy { + client_validation_mode = "REJECT_INVALID" + client_validation_trust_config = "projects/${data.google_project.project.number}/locations/us-central1/trustConfigs/${google_certificate_manager_trust_config.default.name}" + } +} + +resource "google_compute_region_ssl_certificate" 
"default" { + provider = google-beta + region = "us-central1" + name = "my-certificate" + private_key = file("path/to/private.key") + certificate = file("path/to/certificate.crt") +} + +resource "google_compute_region_url_map" "default" { + provider = google-beta + region = "us-central1" + name = "url-map" + description = "a description" + + default_service = google_compute_region_backend_service.default.id + + host_rule { + hosts = ["mysite.com"] + path_matcher = "allpaths" + } + + path_matcher { + name = "allpaths" + default_service = google_compute_region_backend_service.default.id + + path_rule { + paths = ["/*"] + service = google_compute_region_backend_service.default.id + } + } +} + +resource "google_compute_region_backend_service" "default" { + provider = google-beta + region = "us-central1" + name = "backend-service" + port_name = "http" + protocol = "HTTP" + timeout_sec = 10 + + load_balancing_scheme = "INTERNAL_MANAGED" + + health_checks = [google_compute_region_health_check.default.id] +} + +resource "google_compute_region_health_check" "default" { + provider = google-beta + region = "us-central1" + name = "http-health-check" + check_interval_sec = 1 + timeout_sec = 1 + + http_health_check { + port = 80 + } +} +```
Open in Cloud Shell @@ -180,6 +291,18 @@ The following arguments are supported: the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource will not have any SSL policy configured. +* `server_tls_policy` - + (Optional) + A URL referring to a networksecurity.ServerTlsPolicy + resource that describes how the proxy should authenticate inbound + traffic. serverTlsPolicy only applies to a global TargetHttpsProxy + attached to globalForwardingRules with the loadBalancingScheme + set to INTERNAL_SELF_MANAGED or EXTERNAL or EXTERNAL_MANAGED. + For details about which ServerTlsPolicy resources are accepted with + INTERNAL_SELF_MANAGED and which with EXTERNAL or EXTERNAL_MANAGED + loadBalancingScheme, consult the ServerTlsPolicy documentation. + If left blank, communications are not encrypted. + * `region` - (Optional) The Region in which the created target https proxy should reside. diff --git a/website/docs/r/compute_router.html.markdown b/website/docs/r/compute_router.html.markdown index d44cff491c..2f4c423662 100644 --- a/website/docs/r/compute_router.html.markdown +++ b/website/docs/r/compute_router.html.markdown @@ -171,6 +171,14 @@ The following arguments are supported: between the two peers. If set, this value must be between 20 and 60. The default is 20. +* `identifier_range` - + (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + Explicitly specifies a range of valid BGP Identifiers for this Router. + It is provided as a link-local IPv4 range (from 169.254.0.0/16), of + size at least /30, even if the BGP sessions are over IPv6. It must + not overlap with any IPv4 BGP session ranges. Other vendors commonly + call this router ID.
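+A minimal sketch of `identifier_range` inside the `bgp` block (beta); the router name, network, and range below are hypothetical:
+
+```hcl
+resource "google_compute_router" "example" {
+  provider = google-beta
+  name     = "router-with-id-range"
+  network  = "my-network"
+  region   = "us-central1"
+
+  bgp {
+    asn = 64514
+    # link-local IPv4 range of size at least /30
+    identifier_range = "169.254.114.0/30"
+  }
+}
+```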
+ The `advertised_ip_ranges` block supports: diff --git a/website/docs/r/compute_router_interface.html.markdown b/website/docs/r/compute_router_interface.html.markdown index 736f9a10fc..4c884d7808 100644 --- a/website/docs/r/compute_router_interface.html.markdown +++ b/website/docs/r/compute_router_interface.html.markdown @@ -40,6 +40,9 @@ In addition to the above required fields, a router interface must have specified * `ip_range` - (Optional) IP address and range of the interface. The IP range must be in the RFC3927 link-local IP space. Changing this forces a new interface to be created. +* `ip_version` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + IP version of this interface. Can be either IPV4 or IPV6. + * `vpn_tunnel` - (Optional) The name or resource link to the VPN tunnel this interface will be linked to. Changing this forces a new interface to be created. Only one of `vpn_tunnel`, `interconnect_attachment` or `subnetwork` can be specified. diff --git a/website/docs/r/compute_router_nat.html.markdown b/website/docs/r/compute_router_nat.html.markdown index 7a549f1e1c..ee7bb3b0cf 100644 --- a/website/docs/r/compute_router_nat.html.markdown +++ b/website/docs/r/compute_router_nat.html.markdown @@ -351,6 +351,13 @@ The following arguments are supported: Configuration for logging on NAT Structure is [documented below](#nested_log_config). +* `endpoint_types` - + (Optional) + Specifies the endpoint Types supported by the NAT Gateway. + Supported values include: + `ENDPOINT_TYPE_VM`, `ENDPOINT_TYPE_SWG`, + `ENDPOINT_TYPE_MANAGED_PROXY_LB`. + * `rules` - (Optional) A list of rules associated with this NAT. 
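+A minimal sketch of `endpoint_types`, restricting a NAT gateway to Secure Web Gateway endpoints; the router name below is a hypothetical placeholder:
+
+```hcl
+resource "google_compute_router_nat" "swg_nat" {
+  name                               = "swg-nat"
+  router                             = "my-router"
+  region                             = "us-central1"
+  nat_ip_allocate_option             = "AUTO_ONLY"
+  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
+
+  # Only one endpoint type family is typically served per gateway
+  endpoint_types = ["ENDPOINT_TYPE_SWG"]
+}
+```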
diff --git a/website/docs/r/compute_router_peer.html.markdown b/website/docs/r/compute_router_peer.html.markdown index b5d755cb82..4158ff29de 100644 --- a/website/docs/r/compute_router_peer.html.markdown +++ b/website/docs/r/compute_router_peer.html.markdown @@ -282,6 +282,10 @@ The following arguments are supported: (Optional) Enable IPv6 traffic over BGP Peer. If not specified, it is disabled by default. +* `enable_ipv4` - + (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + Enable IPv4 traffic over BGP Peer. It is enabled by default if the peerIpAddress is version 4. + * `ipv6_nexthop_address` - (Optional) IPv6 address of the interface inside Google Cloud Platform. @@ -289,6 +293,10 @@ The following arguments are supported: If you do not specify the next hop addresses, Google Cloud automatically assigns unused addresses from the 2600:2d00:0:2::/64 or 2600:2d00:0:3::/64 range for you. +* `ipv4_nexthop_address` - + (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + IPv4 address of the interface inside Google Cloud Platform. + * `peer_ipv6_nexthop_address` - (Optional) IPv6 address of the BGP interface outside Google Cloud Platform. @@ -296,6 +304,10 @@ The following arguments are supported: If you do not specify the next hop addresses, Google Cloud automatically assigns unused addresses from the 2600:2d00:0:2::/64 or 2600:2d00:0:3::/64 range for you. +* `peer_ipv4_nexthop_address` - + (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + IPv4 address of the BGP interface outside Google Cloud Platform. + * `region` - (Optional) Region where the router and BgpPeer reside. 
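+A minimal sketch combining the beta IPv4 fields above; the router, interface, ASN, and addresses below are hypothetical placeholders:
+
+```hcl
+resource "google_compute_router_peer" "ipv4_peer" {
+  provider  = google-beta
+  name      = "peer-with-ipv4"
+  router    = "my-router"
+  region    = "us-central1"
+  interface = "interface-1"
+  peer_asn  = 65513
+
+  # Explicitly enable IPv4 traffic and pin the next-hop addresses
+  enable_ipv4               = true
+  ipv4_nexthop_address      = "169.254.1.1"
+  peer_ipv4_nexthop_address = "169.254.1.2"
+}
+```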
diff --git a/website/docs/r/compute_security_policy_rule.html.markdown b/website/docs/r/compute_security_policy_rule.html.markdown new file mode 100644 index 0000000000..d91d85d8a2 --- /dev/null +++ b/website/docs/r/compute_security_policy_rule.html.markdown @@ -0,0 +1,385 @@ +--- +# ---------------------------------------------------------------------------- +# +# *** AUTO GENERATED CODE *** Type: MMv1 *** +# +# ---------------------------------------------------------------------------- +# +# This file is automatically generated by Magic Modules and manual +# changes will be clobbered when the file is regenerated. +# +# Please read more about how to change this file in +# .github/CONTRIBUTING.md. +# +# ---------------------------------------------------------------------------- +subcategory: "Compute Engine" +description: |- + A rule for the SecurityPolicy. +--- + +# google\_compute\_security\_policy\_rule + +A rule for the SecurityPolicy. + + +To get more information about SecurityPolicyRule, see: + +* [API documentation](https://cloud.google.com/compute/docs/reference/rest/v1/securityPolicies/addRule) +* How-to Guides + * [Creating global security policy rules](https://cloud.google.com/armor/docs/configure-security-policies) + + +## Example Usage - Security Policy Rule Basic + + +```hcl +resource "google_compute_security_policy" "default" { + name = "policyruletest" + description = "basic global security policy" + type = "CLOUD_ARMOR" +} + +resource "google_compute_security_policy_rule" "policy_rule" { + security_policy = google_compute_security_policy.default.name + description = "new rule" + priority = 100 + match { + versioned_expr = "SRC_IPS_V1" + config { + src_ip_ranges = ["10.10.0.0/16"] + } + } + action = "allow" + preview = true +} +``` +## Example Usage - Security Policy Rule Default Rule + + +```hcl +resource "google_compute_security_policy" "default" { + name = "policyruletest" + description = "basic global security policy" + type = "CLOUD_ARMOR" +} 
+ +# A default rule is generated when creating the security_policy resource; an import is needed to patch it +# import { +# id = "projects//global/securityPolicies/policyruletest/priority/2147483647" +# to = google_compute_security_policy_rule.default_rule +# } +resource "google_compute_security_policy_rule" "default_rule" { + security_policy = google_compute_security_policy.default.name + description = "default rule" + action = "allow" + priority = "2147483647" + match { + versioned_expr = "SRC_IPS_V1" + config { + src_ip_ranges = ["*"] + } + } +} + +resource "google_compute_security_policy_rule" "policy_rule" { + security_policy = google_compute_security_policy.default.name + description = "new rule" + priority = 100 + match { + versioned_expr = "SRC_IPS_V1" + config { + src_ip_ranges = ["10.10.0.0/16"] + } + } + action = "allow" + preview = true +} +``` + +## Example Usage - Security Policy Rule Multiple Rules + + +```hcl +resource "google_compute_security_policy" "default" { + name = "policywithmultiplerules" + description = "basic global security policy" + type = "CLOUD_ARMOR" +} + +resource "google_compute_security_policy_rule" "policy_rule_one" { + security_policy = google_compute_security_policy.default.name + description = "new rule one" + priority = 100 + match { + versioned_expr = "SRC_IPS_V1" + config { + src_ip_ranges = ["10.10.0.0/16"] + } + } + action = "allow" + preview = true +} + +resource "google_compute_security_policy_rule" "policy_rule_two" { + security_policy = google_compute_security_policy.default.name + description = "new rule two" + priority = 101 + match { + versioned_expr = "SRC_IPS_V1" + config { + src_ip_ranges = ["192.168.0.0/16", "10.0.0.0/8"] + } + } + action = "allow" + preview = true +} +``` + +## Argument Reference + +The following arguments are supported: + + +* `priority` - + (Required) + An integer indicating the priority of a rule in the list. + The priority must be a positive value between 0 and 2147483647.
+ Rules are evaluated from highest to lowest priority where 0 is the highest priority and 2147483647 is the lowest priority. + +* `action` - + (Required) + The Action to perform when the rule is matched. The following are the valid actions: + * allow: allow access to target. + * deny(STATUS): deny access to target, returns the HTTP response code specified. Valid values for STATUS are 403, 404, and 502. + * rate_based_ban: limit client traffic to the configured threshold and ban the client if the traffic exceeds the threshold. Configure parameters for this action in RateLimitOptions. Requires rateLimitOptions to be set. + * redirect: redirect to a different target. This can either be an internal reCAPTCHA redirect, or an external URL-based redirect via a 302 response. Parameters for this action can be configured via redirectOptions. This action is only supported in Global Security Policies of type CLOUD_ARMOR. + * throttle: limit client traffic to the configured threshold. Configure parameters for this action in rateLimitOptions. Requires rateLimitOptions to be set for this. + +* `security_policy` - + (Required) + The name of the security policy this rule belongs to. + + +- - - + + +* `description` - + (Optional) + An optional description of this resource. Provide this property when you create the resource. + +* `match` - + (Optional) + A match condition that incoming traffic is evaluated against. + If it evaluates to true, the corresponding 'action' is enforced. + Structure is [documented below](#nested_match). + +* `preconfigured_waf_config` - + (Optional) + Preconfigured WAF configuration to be applied for the rule. + If the rule does not evaluate preconfigured WAF rules, i.e., if evaluatePreconfiguredWaf() is not used, this field will have no effect. + Structure is [documented below](#nested_preconfigured_waf_config). + +* `preview` - + (Optional) + If set to true, the specified action is not enforced. 
+ +* `project` - (Optional) The ID of the project in which the resource belongs. + If it is not provided, the provider project is used. + + +The `match` block supports: + +* `versioned_expr` - + (Optional) + Preconfigured versioned expression. If this field is specified, config must also be specified. + Available preconfigured expressions along with their requirements are: SRC_IPS_V1 - must specify the corresponding srcIpRange field in config. + Possible values are: `SRC_IPS_V1`. + +* `expr` - + (Optional) + User defined CEVAL expression. A CEVAL expression is used to specify match criteria such as origin.ip, source.region_code and contents in the request header. + Structure is [documented below](#nested_expr). + +* `config` - + (Optional) + The configuration options available when specifying versionedExpr. + This field must be specified if versionedExpr is specified and cannot be specified if versionedExpr is not specified. + Structure is [documented below](#nested_config). + + +The `expr` block supports: + +* `expression` - + (Required) + Textual representation of an expression in Common Expression Language syntax. The application context of the containing message determines which well-known feature set of CEL is supported. + +The `config` block supports: + +* `src_ip_ranges` - + (Optional) + CIDR IP address range. Maximum number of srcIpRanges allowed is 10. + +The `preconfigured_waf_config` block supports: + +* `exclusion` - + (Optional) + An exclusion to apply during preconfigured WAF evaluation. + Structure is [documented below](#nested_exclusion). + + +The `exclusion` block supports: + +* `request_header` - + (Optional) + Request header whose value will be excluded from inspection during preconfigured WAF evaluation. + Structure is [documented below](#nested_request_header). + +* `request_cookie` - + (Optional) + Request cookie whose value will be excluded from inspection during preconfigured WAF evaluation. 
+ Structure is [documented below](#nested_request_cookie). + +* `request_uri` - + (Optional) + Request URI from the request line to be excluded from inspection during preconfigured WAF evaluation. + When specifying this field, the query or fragment part should be excluded. + Structure is [documented below](#nested_request_uri). + +* `request_query_param` - + (Optional) + Request query parameter whose value will be excluded from inspection during preconfigured WAF evaluation. + Note that the parameter can be in the query string or in the POST body. + Structure is [documented below](#nested_request_query_param). + +* `target_rule_set` - + (Required) + Target WAF rule set to apply the preconfigured WAF exclusion. + +* `target_rule_ids` - + (Optional) + A list of target rule IDs under the WAF rule set to apply the preconfigured WAF exclusion. + If omitted, it refers to all the rule IDs under the WAF rule set. + + +The `request_header` block supports: + +* `operator` - + (Required) + You can specify an exact match or a partial match by using a field operator and a field value. + Available options: + EQUALS: The operator matches if the field value equals the specified value. + STARTS_WITH: The operator matches if the field value starts with the specified value. + ENDS_WITH: The operator matches if the field value ends with the specified value. + CONTAINS: The operator matches if the field value contains the specified value. + EQUALS_ANY: The operator matches if the field value is any value. + +* `value` - + (Optional) + A request field matching the specified value will be excluded from inspection during preconfigured WAF evaluation. + The field value must be given if the field operator is not EQUALS_ANY, and cannot be given if the field operator is EQUALS_ANY. + +The `request_cookie` block supports: + +* `operator` - + (Required) + You can specify an exact match or a partial match by using a field operator and a field value. 
+ Available options: + EQUALS: The operator matches if the field value equals the specified value. + STARTS_WITH: The operator matches if the field value starts with the specified value. + ENDS_WITH: The operator matches if the field value ends with the specified value. + CONTAINS: The operator matches if the field value contains the specified value. + EQUALS_ANY: The operator matches if the field value is any value. + +* `value` - + (Optional) + A request field matching the specified value will be excluded from inspection during preconfigured WAF evaluation. + The field value must be given if the field operator is not EQUALS_ANY, and cannot be given if the field operator is EQUALS_ANY. + +The `request_uri` block supports: + +* `operator` - + (Required) + You can specify an exact match or a partial match by using a field operator and a field value. + Available options: + EQUALS: The operator matches if the field value equals the specified value. + STARTS_WITH: The operator matches if the field value starts with the specified value. + ENDS_WITH: The operator matches if the field value ends with the specified value. + CONTAINS: The operator matches if the field value contains the specified value. + EQUALS_ANY: The operator matches if the field value is any value. + +* `value` - + (Optional) + A request field matching the specified value will be excluded from inspection during preconfigured WAF evaluation. + The field value must be given if the field operator is not EQUALS_ANY, and cannot be given if the field operator is EQUALS_ANY. + +The `request_query_param` block supports: + +* `operator` - + (Required) + You can specify an exact match or a partial match by using a field operator and a field value. + Available options: + EQUALS: The operator matches if the field value equals the specified value. + STARTS_WITH: The operator matches if the field value starts with the specified value. 
+ ENDS_WITH: The operator matches if the field value ends with the specified value. + CONTAINS: The operator matches if the field value contains the specified value. + EQUALS_ANY: The operator matches if the field value is any value. + +* `value` - + (Optional) + A request field matching the specified value will be excluded from inspection during preconfigured WAF evaluation. + The field value must be given if the field operator is not EQUALS_ANY, and cannot be given if the field operator is EQUALS_ANY. + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `projects/{{project}}/global/securityPolicies/{{security_policy}}/priority/{{priority}}` + + +## Timeouts + +This resource provides the following +[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: + +- `create` - Default is 20 minutes. +- `update` - Default is 20 minutes. +- `delete` - Default is 20 minutes. + +## Import + + +SecurityPolicyRule can be imported using any of these accepted formats: + +* `projects/{{project}}/global/securityPolicies/{{security_policy}}/priority/{{priority}}` +* `{{project}}/{{security_policy}}/{{priority}}` +* `{{security_policy}}/{{priority}}` + + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import SecurityPolicyRule using one of the formats above. For example: + +```tf +import { + id = "projects/{{project}}/global/securityPolicies/{{security_policy}}/priority/{{priority}}" + to = google_compute_security_policy_rule.default +} +``` + +When using the [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import), SecurityPolicyRule can be imported using one of the formats above. 
For example: + +``` +$ terraform import google_compute_security_policy_rule.default projects/{{project}}/global/securityPolicies/{{security_policy}}/priority/{{priority}} +$ terraform import google_compute_security_policy_rule.default {{project}}/{{security_policy}}/{{priority}} +$ terraform import google_compute_security_policy_rule.default {{security_policy}}/{{priority}} +``` + +## User Project Overrides + +This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). diff --git a/website/docs/r/container_cluster.html.markdown b/website/docs/r/container_cluster.html.markdown index 54837be292..9502e01f4b 100644 --- a/website/docs/r/container_cluster.html.markdown +++ b/website/docs/r/container_cluster.html.markdown @@ -15,7 +15,7 @@ To get more information about GKE clusters, see: * [About cluster configuration choices](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters) * Terraform guidance * [Using GKE with Terraform](/docs/providers/google/guides/using_gke_with_terraform.html) - * [Provision a GKE Cluster (Google Cloud) Learn tutorial](https://learn.hashicorp.com/tutorials/terraform/gke?in=terraform/kubernetes&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) + * [Provision a GKE Cluster (Google Cloud) Learn tutorial](https://learn.hashicorp.com/tutorials/terraform/gke?in=terraform/kubernetes&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) -> On version 5.0.0+ of the provider, you must explicitly set `deletion_protection = false` and run `terraform apply` to write the field to state in order to destroy a cluster. @@ -349,7 +349,7 @@ subnetwork in which the cluster's instances are launched. * `enable_intranode_visibility` - (Optional) Whether Intra-node visibility is enabled for this cluster. This makes same node pod to pod traffic visible for VPC network. 
-* `enable_l4_ilb_subsetting` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) +* `enable_l4_ilb_subsetting` - (Optional) Whether L4ILB Subsetting is enabled for this cluster. * `enable_multi_networking` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) @@ -364,6 +364,9 @@ subnetwork in which the cluster's instances are launched. * `datapath_provider` - (Optional) The desired datapath provider for this cluster. This is set to `LEGACY_DATAPATH` by default, which uses the IPTables-based kube-proxy implementation. Set to `ADVANCED_DATAPATH` to enable Dataplane v2. +* `enable_cilium_clusterwide_network_policy` - (Optional) + Whether CiliumClusterWideNetworkPolicy is enabled on this cluster. Defaults to false. + * `default_snat_status` - (Optional) [GKE SNAT](https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#how_ipmasq_works) DefaultSnatStatus contains the desired state of whether default sNAT should be disabled on the cluster, [API doc](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters#networkconfig). Structure is [documented below](#nested_default_snat_status) @@ -448,6 +451,9 @@ Fleet configuration for the cluster. Structure is [documented below](#nested_fle * `config_connector_config` - (Optional). The status of the ConfigConnector addon. It is disabled by default; Set `enabled = true` to enable. +* `stateful_ha_config` - (Optional). + The status of the Stateful HA addon, which provides automatic configurable failover for stateful applications. + It is disabled by default for Standard clusters. Set `enabled = true` to enable. 
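As a non-authoritative sketch of the two new fields described above (cluster name and location are placeholders, and the rest of the cluster configuration is elided), Cilium cluster-wide network policy is paired with `datapath_provider = "ADVANCED_DATAPATH"` because Cilium network policies are part of Dataplane V2:

```hcl
resource "google_container_cluster" "example" {
  name     = "example-cluster" # placeholder
  location = "us-central1"     # placeholder

  initial_node_count = 1

  # CiliumClusterWideNetworkPolicy is enforced by the Dataplane V2 datapath.
  datapath_provider                        = "ADVANCED_DATAPATH"
  enable_cilium_clusterwide_network_policy = true

  addons_config {
    # Stateful HA is disabled by default for Standard clusters.
    stateful_ha_config {
      enabled = true
    }
  }

  deletion_protection = false
}
```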
This example `addons_config` disables two addons: @@ -1033,6 +1039,8 @@ workload_identity_config { The `node_pool_auto_config` block supports: +* `resource_manager_tags` - (Optional) A map of resource manager tag keys and values to be attached to the nodes for managing Compute Engine firewalls using Network Firewall Policies. Tags must be according to specifications found [here](https://cloud.google.com/vpc/docs/tags-firewalls-overview#specifications). A maximum of 5 tag key-value pairs can be specified. Existing tags will be replaced with new values. Tags must be in one of the following formats ([KEY]=[VALUE]) 1. `tagKeys/{tag_key_id}=tagValues/{tag_value_id}` 2. `{org_id}/{tag_key_name}={tag_value_name}` 3. `{project_id}/{tag_key_name}={tag_value_name}`. + * `network_tags` (Optional) - The network tag config for the cluster's automatically provisioned node pools. The `network_tags` block supports: diff --git a/website/docs/r/data_loss_prevention_discovery_config.html.markdown b/website/docs/r/data_loss_prevention_discovery_config.html.markdown new file mode 100644 index 0000000000..9440b74b32 --- /dev/null +++ b/website/docs/r/data_loss_prevention_discovery_config.html.markdown @@ -0,0 +1,685 @@ +--- +# ---------------------------------------------------------------------------- +# +# *** AUTO GENERATED CODE *** Type: MMv1 *** +# +# ---------------------------------------------------------------------------- +# +# This file is automatically generated by Magic Modules and manual +# changes will be clobbered when the file is regenerated. +# +# Please read more about how to change this file in +# .github/CONTRIBUTING.md. +# +# ---------------------------------------------------------------------------- +subcategory: "Data loss prevention" +description: |- + Configuration for discovery to scan resources for profile generation. +--- + +# google\_data\_loss\_prevention\_discovery\_config + +Configuration for discovery to scan resources for profile generation. 
Only one discovery configuration may exist per organization, folder, or project. + + +To get more information about DiscoveryConfig, see: + +* [API documentation](https://cloud.google.com/dlp/docs/reference/rest/v2/projects.locations.discoveryConfigs) +* How-to Guides + * [Schedule inspection scan](https://cloud.google.com/dlp/docs/schedule-inspection-scan) + +## Example Usage - Dlp Discovery Config Basic + + +```hcl +resource "google_data_loss_prevention_discovery_config" "basic" { + parent = "projects/my-project-name/locations/us" + location = "us" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/my-project-name" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +``` +## Example Usage - Dlp Discovery Config Actions + + +```hcl +resource "google_data_loss_prevention_discovery_config" "actions" { + parent = "projects/my-project-name/locations/us" + location = "us" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + actions { + export_data { + profile_table { + project_id = "project" + dataset_id = "dataset" + table_id = "table" + } + } + } + actions { + pub_sub_notification { + topic = "projects/%{project}/topics/${google_pubsub_topic.actions.name}" + event = "NEW_PROFILE" + pubsub_condition { + expressions { + logical_operator = "OR" + conditions { + minimum_sensitivity_score = "HIGH" + } + } + } + detail_of_message = "TABLE_PROFILE" + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} + +resource "google_pubsub_topic" "actions" { + name = "fake-topic" +} + +resource 
"google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/my-project-name" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +``` +## Example Usage - Dlp Discovery Config Org Running + + +```hcl +resource "google_data_loss_prevention_discovery_config" "org_running" { + parent = "organizations/123456789/locations/us" + location = "us" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + org_config { + project_id = "my-project-name" + location { + organization_id = "123456789" + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] + status = "RUNNING" +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/my-project-name" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +``` +## Example Usage - Dlp Discovery Config Org Folder Paused + + +```hcl +resource "google_data_loss_prevention_discovery_config" "org_folder_paused" { + parent = "organizations/123456789/locations/us" + location = "us" + + targets { + big_query_target { + filter { + other_tables {} + } + } + } + org_config { + project_id = "my-project-name" + location { + folder_id = 123 + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] + status = "PAUSED" +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/my-project-name" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +``` +## Example Usage - Dlp Discovery Config Conditions Cadence + + +```hcl +resource "google_data_loss_prevention_discovery_config" "conditions_cadence" { + parent = "projects/my-project-name/locations/us" + 
location = "us" + status = "RUNNING" + + targets { + big_query_target { + filter { + other_tables {} + } + conditions { + type_collection = "BIG_QUERY_COLLECTION_ALL_TYPES" + } + cadence { + schema_modified_cadence { + types = ["SCHEMA_NEW_COLUMNS"] + frequency = "UPDATE_FREQUENCY_DAILY" + } + table_modified_cadence { + types = ["TABLE_MODIFIED_TIMESTAMP"] + frequency = "UPDATE_FREQUENCY_DAILY" + } + } + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/my-project-name" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +``` +## Example Usage - Dlp Discovery Config Filter Regexes And Conditions + + +```hcl +resource "google_data_loss_prevention_discovery_config" "filter_regexes_and_conditions" { + parent = "projects/my-project-name/locations/us" + location = "us" + status = "RUNNING" + + targets { + big_query_target { + filter { + tables { + include_regexes { + patterns { + project_id_regex = ".*" + dataset_id_regex = ".*" + table_id_regex = ".*" + } + } + } + } + conditions { + created_after = "2023-10-02T15:01:23Z" + types { + types = ["BIG_QUERY_TABLE_TYPE_TABLE", "BIG_QUERY_TABLE_TYPE_EXTERNAL_BIG_LAKE"] + } + or_conditions { + min_row_count = 10 + min_age = "10800s" + } + } + } + } + targets { + big_query_target { + filter { + other_tables {} + } + } + } + inspect_templates = ["projects/%{project}/inspectTemplates/${google_data_loss_prevention_inspect_template.basic.name}"] +} + +resource "google_data_loss_prevention_inspect_template" "basic" { + parent = "projects/my-project-name" + description = "My description" + display_name = "display_name" + + inspect_config { + info_types { + name = "EMAIL_ADDRESS" + } + } +} +``` + +## Argument Reference + +The following arguments are supported: + + +* `parent` - + 
(Required) + The parent of the discovery config in any of the following formats: + * `projects/{{project}}/locations/{{location}}` + * `organizations/{{organization_id}}/locations/{{location}}` + +* `location` - + (Required) + Location to create the discovery config in. + + +- - - + + +* `display_name` - + (Optional) + Display name (max 1000 chars) + +* `org_config` - + (Optional) + A nested object resource + Structure is [documented below](#nested_org_config). + +* `inspect_templates` - + (Optional) + Detection logic for profile generation + +* `actions` - + (Optional) + Actions to execute at the completion of scanning + Structure is [documented below](#nested_actions). + +* `targets` - + (Optional) + Target to match against for determining what to scan and how frequently + Structure is [documented below](#nested_targets). + +* `status` - + (Optional) + Required. A status for this configuration + Possible values are: `RUNNING`, `PAUSED`. + + +The `org_config` block supports: + +* `project_id` - + (Optional) + The project that will run the scan. The DLP service account that exists within this project must have access to all resources that are profiled, and the cloud DLP API must be enabled. + +* `location` - + (Optional) + The data to scan: folder, org, or project + Structure is [documented below](#nested_location). + + +The `location` block supports: + +* `organization_id` - + (Optional) + The ID of an organization to scan + +* `folder_id` - + (Optional) + The ID for the folder within an organization to scan + +The `actions` block supports: + +* `export_data` - + (Optional) + Export data profiles into a provided location + Structure is [documented below](#nested_export_data). + +* `pub_sub_notification` - + (Optional) + Publish a message into the Pub/Sub topic. + Structure is [documented below](#nested_pub_sub_notification). 
+ + +The `export_data` block supports: + +* `profile_table` - + (Optional) + Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery + Structure is [documented below](#nested_profile_table). + + +The `profile_table` block supports: + +* `project_id` - + (Optional) + The Google Cloud Platform project ID of the project containing the table. If omitted, the project ID is inferred from the API call. + +* `dataset_id` - + (Optional) + Dataset Id of the table + +* `table_id` - + (Optional) + Name of the table + +The `pub_sub_notification` block supports: + +* `topic` - + (Optional) + Cloud Pub/Sub topic to send notifications to. Format is projects/{project}/topics/{topic}. + +* `event` - + (Optional) + The type of event that triggers a Pub/Sub. At most one PubSubNotification per EventType is permitted. + Possible values are: `NEW_PROFILE`, `CHANGED_PROFILE`, `SCORE_INCREASED`, `ERROR_CHANGED`. + +* `pubsub_condition` - + (Optional) + Conditions for triggering pubsub + Structure is [documented below](#nested_pubsub_condition). + +* `detail_of_message` - + (Optional) + How much data to include in the pub/sub message. + Possible values are: `TABLE_PROFILE`, `RESOURCE_NAME`. + + +The `pubsub_condition` block supports: + +* `expressions` - + (Optional) + An expression + Structure is [documented below](#nested_expressions). + + +The `expressions` block supports: + +* `logical_operator` - + (Optional) + The operator to apply to the collection of conditions + Possible values are: `OR`, `AND`. + +* `conditions` - + (Optional) + Conditions to apply to the expression + Structure is [documented below](#nested_conditions). + + +The `conditions` block supports: + +* `minimum_risk_score` - + (Optional) + The minimum data risk score that triggers the condition. + Possible values are: `HIGH`, `MEDIUM_OR_HIGH`. 
+ +* `minimum_sensitivity_score` - + (Optional) + The minimum sensitivity level that triggers the condition. + Possible values are: `HIGH`, `MEDIUM_OR_HIGH`. + +The `targets` block supports: + +* `big_query_target` - + (Optional) + BigQuery target for Discovery. The first target to match a table will be the one applied. + Structure is [documented below](#nested_big_query_target). + + +The `big_query_target` block supports: + +* `filter` - + (Optional) + Required. The tables the discovery cadence applies to. The first target with a matching filter will be the one to apply to a table. + Structure is [documented below](#nested_filter). + +* `conditions` - + (Optional) + In addition to matching the filter, these conditions must be true before a profile is generated. + Structure is [documented below](#nested_conditions). + +* `cadence` - + (Optional) + How often and when to update profiles. New tables that match both the filter and conditions are scanned as quickly as possible depending on system capacity. + Structure is [documented below](#nested_cadence). + +* `disabled` - + (Optional) + Tables that match this filter will not have profiles created. + + +The `filter` block supports: + +* `tables` - + (Optional) + A specific set of tables for this filter to apply to. A table collection must be specified in only one filter per config. + Structure is [documented below](#nested_tables). + +* `other_tables` - + (Optional) + Catch-all. This should always be the last filter in the list because anything above it will apply first. + + +The `tables` block supports: + +* `include_regexes` - + (Optional) + A collection of regular expressions to match a BQ table against. + Structure is [documented below](#nested_include_regexes). + + +The `include_regexes` block supports: + +* `patterns` - + (Optional) + A single BigQuery regular expression pattern to match against one or more tables, datasets, or projects that contain BigQuery tables. 
+ Structure is [documented below](#nested_patterns). + + +The `patterns` block supports: + +* `project_id_regex` - + (Optional) + For organizations, if unset, will match all projects. Has no effect for data profile configurations created within a project. + +* `dataset_id_regex` - + (Optional) + If unset, this property matches all datasets. + +* `table_id_regex` - + (Optional) + If unset, this property matches all tables. + +The `conditions` block supports: + +* `created_after` - + (Optional) + A timestamp in RFC3339 UTC "Zulu" format with nanosecond resolution and up to nine fractional digits. + +* `or_conditions` - + (Optional) + At least one of the conditions must be true for a table to be scanned. + Structure is [documented below](#nested_or_conditions). + +* `types` - + (Optional) + Restrict discovery to specific table types. + Structure is [documented below](#nested_types). + +* `type_collection` - + (Optional) + Restrict discovery to categories of table types. Currently view, materialized view, snapshot and non-biglake external tables are supported. + Possible values are: `BIG_QUERY_COLLECTION_ALL_TYPES`, `BIG_QUERY_COLLECTION_ONLY_SUPPORTED_TYPES`. + + +The `or_conditions` block supports: + +* `min_age` - + (Optional) + Duration format. The minimum age a table must have before Cloud DLP can profile it. Value greater than 1. + +* `min_row_count` - + (Optional) + Minimum number of rows that should be present before Cloud DLP profiles a table. + +The `types` block supports: + +* `types` - + (Optional) + A set of BigQuery table types. + Each value may be one of: `BIG_QUERY_TABLE_TYPE_TABLE`, `BIG_QUERY_TABLE_TYPE_EXTERNAL_BIG_LAKE`. + +The `cadence` block supports: + +* `schema_modified_cadence` - + (Optional) + Governs when to update data profiles when a schema is modified + Structure is [documented below](#nested_schema_modified_cadence). + +* `table_modified_cadence` - + (Optional) + Governs when to update the profile when a table is modified. 
+ Structure is [documented below](#nested_table_modified_cadence). + + +The `schema_modified_cadence` block supports: + +* `types` - + (Optional) + The type of events to consider when deciding if the table's schema has been modified and should have the profile updated. Defaults to NEW_COLUMN. + Each value may be one of: `SCHEMA_NEW_COLUMNS`, `SCHEMA_REMOVED_COLUMNS`. + +* `frequency` - + (Optional) + How frequently profiles may be updated when schemas are modified. Defaults to monthly. + Possible values are: `UPDATE_FREQUENCY_NEVER`, `UPDATE_FREQUENCY_DAILY`, `UPDATE_FREQUENCY_MONTHLY`. + +The `table_modified_cadence` block supports: + +* `types` - + (Optional) + The type of events to consider when deciding if the table has been modified and should have the profile updated. Defaults to MODIFIED_TIMESTAMP. + Each value may be one of: `TABLE_MODIFIED_TIMESTAMP`. + +* `frequency` - + (Optional) + How frequently data profiles can be updated when tables are modified. Defaults to never. + Possible values are: `UPDATE_FREQUENCY_NEVER`, `UPDATE_FREQUENCY_DAILY`, `UPDATE_FREQUENCY_MONTHLY`. + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{parent}}/discoveryConfigs/{{name}}` + +* `name` - + Unique resource name for the DiscoveryConfig, assigned by the service when the DiscoveryConfig is created. + +* `errors` - + Output only. A stream of errors encountered when the config was activated. Repeated errors may result in the config automatically being paused. Returns the last 100 errors; whenever the config is modified this list will be cleared. + Structure is [documented below](#nested_errors). + +* `create_time` - + Output only. The creation timestamp of a DiscoveryConfig. + +* `update_time` - + Output only. The last update timestamp of a DiscoveryConfig. + +* `last_run_time` - + Output only. 
The timestamp of the last time this config was executed + + +The `errors` block contains: + +* `details` - + (Optional) + Detailed error codes and messages. + Structure is [documented below](#nested_details). + +* `timestamp` - + (Optional) + The times the error occurred. List includes the oldest timestamp and the last 9 timestamps. + + +The `details` block supports: + +* `code` - + (Optional) + The status code, which should be an enum value of google.rpc.Code. + +* `message` - + (Optional) + A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. + +* `details` - + (Optional) + A list of messages that carry the error details. + +## Timeouts + +This resource provides the following +[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: + +- `create` - Default is 20 minutes. +- `update` - Default is 20 minutes. +- `delete` - Default is 20 minutes. + +## Import + + +DiscoveryConfig can be imported using any of these accepted formats: + +* `{{parent}}/discoveryConfigs/{{name}}` +* `{{parent}}/{{name}}` + + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import DiscoveryConfig using one of the formats above. For example: + +```tf +import { + id = "{{parent}}/discoveryConfigs/{{name}}" + to = google_data_loss_prevention_discovery_config.default +} +``` + +When using the [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import), DiscoveryConfig can be imported using one of the formats above. 
For example: + +``` +$ terraform import google_data_loss_prevention_discovery_config.default {{parent}}/discoveryConfigs/{{name}} +$ terraform import google_data_loss_prevention_discovery_config.default {{parent}}/{{name}} +``` diff --git a/website/docs/r/datastore_index.html.markdown b/website/docs/r/datastore_index.html.markdown index 4a96bfd552..3e7be6877a 100644 --- a/website/docs/r/datastore_index.html.markdown +++ b/website/docs/r/datastore_index.html.markdown @@ -34,15 +34,23 @@ one, you can create a `google_app_engine_application` resource with `database_type` set to `"CLOUD_DATASTORE_COMPATIBILITY"` to do so. Your Datastore location will be the same as the App Engine location specified. - ## Example Usage - Datastore Index ```hcl +resource "google_firestore_database" "database" { + project = "my-project-name" + # google_datastore_index resources only support the (default) database. + # However, google_firestore_index can express any Datastore Mode index + # and should be preferred in all cases. + name = "(default)" + location_id = "nam5" + type = "DATASTORE_MODE" + + delete_protection_state = "DELETE_PROTECTION_DISABLED" + deletion_policy = "DELETE" +} + resource "google_datastore_index" "default" { kind = "foo" properties { @@ -53,6 +61,8 @@ resource "google_datastore_index" "default" { name = "property_b" direction = "ASCENDING" } + + depends_on = [google_firestore_database.database] } ``` diff --git a/website/docs/r/dns_record_set.html.markdown b/website/docs/r/dns_record_set.html.markdown index d4823cb788..d2612690ab 100644 --- a/website/docs/r/dns_record_set.html.markdown +++ b/website/docs/r/dns_record_set.html.markdown @@ -177,7 +177,7 @@ resource "google_dns_record_set" "geo" { } ``` -#### Primary-Backup +#### Failover ```hcl resource "google_dns_record_set" "a" { @@ -269,15 +269,15 @@ The following arguments are supported: The `routing_policy` block supports: * `wrr` - (Optional) The configuration for Weighted Round Robin based routing policy. 
- Structure is [document below](#nested_wrr). + Structure is [documented below](#nested_wrr). * `geo` - (Optional) The configuration for Geolocation based routing policy. - Structure is [document below](#nested_geo). + Structure is [documented below](#nested_geo). * `enable_geo_fencing` - (Optional) Specifies whether to enable fencing for geo queries. -* `primary_backup` - (Optional) The configuration for a primary-backup policy with global to regional failover. Queries are responded to with the global primary targets, but if none of the primary targets are healthy, then we fallback to a regional failover policy. - Structure is [document below](#nested_primary_backup). +* `primary_backup` - (Optional) The configuration for a failover policy with global to regional failover. Queries are responded to with the global primary targets, but if none of the primary targets are healthy, then we fallback to a regional failover policy. + Structure is [documented below](#nested_primary_backup). The `wrr` block supports: @@ -286,7 +286,7 @@ The following arguments are supported: * `rrdatas` - (Optional) Same as `rrdatas` above. * `health_checked_targets` - (Optional) The list of targets to be health checked. Note that if DNSSEC is enabled for this zone, only one of `rrdatas` or `health_checked_targets` can be set. - Structure is [document below](#nested_health_checked_targets). + Structure is [documented below](#nested_health_checked_targets). The `geo` block supports: @@ -295,12 +295,12 @@ The following arguments are supported: * `rrdatas` - (Optional) Same as `rrdatas` above. * `health_checked_targets` - (Optional) For A and AAAA types only. The list of targets to be health checked. These can be specified along with `rrdatas` within this item. - Structure is [document below](#nested_health_checked_targets). + Structure is [documented below](#nested_health_checked_targets). 
The `primary_backup` block supports: * `primary` - (Required) The list of global primary targets to be health checked. - Structure is [document below](#nested_health_checked_targets). + Structure is [documented below](#nested_health_checked_targets). * `backup_geo` - (Required) The backup geo targets, which provide a regional failover policy for the otherwise global primary targets. Structure is [document above](#nested_geo). @@ -312,7 +312,7 @@ The following arguments are supported: The `health_checked_targets` block supports: * `internal_load_balancers` - (Required) The list of internal load balancers to health check. - Structure is [document below](#nested_internal_load_balancers). + Structure is [documented below](#nested_internal_load_balancers). The `internal_load_balancers` block supports: diff --git a/website/docs/r/filestore_instance.html.markdown b/website/docs/r/filestore_instance.html.markdown index 27c3efb032..0ec196fa91 100644 --- a/website/docs/r/filestore_instance.html.markdown +++ b/website/docs/r/filestore_instance.html.markdown @@ -95,6 +95,34 @@ resource "google_filestore_instance" "instance" { } } ``` + +## Example Usage - Filestore Instance Protocol + + +```hcl +resource "google_filestore_instance" "instance" { + provider = google-beta + name = "test-instance" + location = "us-central1" + tier = "ENTERPRISE" + protocol = "NFS_V4_1" + + file_shares { + capacity_gb = 1024 + name = "share1" + } + + networks { + network = "default" + modes = ["MODE_IPV4"] + } + +} +``` ## Example Usage - Filestore Instance Enterprise @@ -249,6 +277,15 @@ The following arguments are supported: (Optional) A description of the instance. +* `protocol` - + (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + Either NFSv3, for using NFS version 3 as file sharing protocol, + or NFSv4.1, for using NFS version 4.1 as file sharing protocol. + NFSv4.1 can be used with HIGH_SCALE_SSD, ZONAL, REGIONAL and ENTERPRISE. 
+ The default is NFSv3. + Default value is `NFS_V3`. + Possible values are: `NFS_V3`, `NFS_V4_1`. + * `labels` - (Optional) Resource labels to represent user-provided metadata. diff --git a/website/docs/r/firebase_app_check_play_integrity_config.html.markdown b/website/docs/r/firebase_app_check_play_integrity_config.html.markdown index 1398b8422d..d932d07306 100644 --- a/website/docs/r/firebase_app_check_play_integrity_config.html.markdown +++ b/website/docs/r/firebase_app_check_play_integrity_config.html.markdown @@ -33,6 +33,17 @@ To get more information about PlayIntegrityConfig, see: ```hcl +# Enables the Play Integrity API +resource "google_project_service" "play_integrity" { + provider = google-beta + + project = "my-project-name" + service = "playintegrity.googleapis.com" + + # Don't disable the service if the resource block is removed by accident. + disable_on_destroy = false +} + resource "google_firebase_android_app" "default" { provider = google-beta @@ -70,6 +81,17 @@ resource "google_firebase_app_check_play_integrity_config" "default" { ```hcl +# Enables the Play Integrity API +resource "google_project_service" "play_integrity" { + provider = google-beta + + project = "my-project-name" + service = "playintegrity.googleapis.com" + + # Don't disable the service if the resource block is removed by accident. 
+ disable_on_destroy = false +} + resource "google_firebase_android_app" "default" { provider = google-beta diff --git a/website/docs/r/firebase_app_check_recaptcha_enterprise_config.html.markdown b/website/docs/r/firebase_app_check_recaptcha_enterprise_config.html.markdown index f604704c79..3b2f52090b 100644 --- a/website/docs/r/firebase_app_check_recaptcha_enterprise_config.html.markdown +++ b/website/docs/r/firebase_app_check_recaptcha_enterprise_config.html.markdown @@ -32,6 +32,17 @@ To get more information about RecaptchaEnterpriseConfig, see: ```hcl +# Enables the reCAPTCHA Enterprise API +resource "google_project_service" "recaptcha_enterprise" { + provider = google-beta + + project = "my-project-name" + service = "recaptchaenterprise.googleapis.com" + + # Don't disable the service if the resource block is removed by accident. + disable_on_destroy = false +} + resource "google_firebase_web_app" "default" { provider = google-beta diff --git a/website/docs/r/firebase_hosting_version.html.markdown b/website/docs/r/firebase_hosting_version.html.markdown index 33c064f836..e25b6a957d 100644 --- a/website/docs/r/firebase_hosting_version.html.markdown +++ b/website/docs/r/firebase_hosting_version.html.markdown @@ -59,6 +59,34 @@ resource "google_firebase_hosting_release" "default" { message = "Redirect to Google" } ``` +## Example Usage - Firebasehosting Version Path + + +```hcl +resource "google_firebase_hosting_site" "default" { + provider = google-beta + project = "my-project-name" + site_id = "site-id" +} + +resource "google_firebase_hosting_version" "default" { + provider = google-beta + site_id = google_firebase_hosting_site.default.site_id + config { + rewrites { + glob = "**" + path = "/index.html" + } + } +} + +resource "google_firebase_hosting_release" "default" { + provider = google-beta + site_id = google_firebase_hosting_site.default.site_id + version_name = google_firebase_hosting_version.default.name + message = "Path Rewrite" +} +``` ## Example 
Usage - Firebasehosting Version Cloud Run @@ -209,6 +237,10 @@ The following arguments are supported: (Optional) The user-supplied RE2 regular expression to match against the request URL path. +* `path` - + (Optional) + The URL path to rewrite the request to. + * `function` - (Optional) The function to proxy requests to. Must match the exported function name exactly. diff --git a/website/docs/r/firestore_backup_schedule.html.markdown b/website/docs/r/firestore_backup_schedule.html.markdown index a74f522c7c..29decd0278 100644 --- a/website/docs/r/firestore_backup_schedule.html.markdown +++ b/website/docs/r/firestore_backup_schedule.html.markdown @@ -53,7 +53,7 @@ resource "google_firestore_backup_schedule" "daily-backup" { project = "my-project-name" database = google_firestore_database.database.name - retention = "604800s" // 7 days (maximum possible value for daily backups) + retention = "8467200s" // 14 weeks (maximum possible retention) daily_recurrence {} } @@ -76,7 +76,7 @@ resource "google_firestore_backup_schedule" "weekly-backup" { project = "my-project-name" database = google_firestore_database.database.name - retention = "8467200s" // 14 weeks (maximum possible value for weekly backups) + retention = "8467200s" // 14 weeks (maximum possible retention) weekly_recurrence { day = "SUNDAY" @@ -93,7 +93,7 @@ The following arguments are supported: (Required) At what relative time in the future, compared to its creation time, the backup should be deleted, e.g. keep backups for 7 days. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". - For a daily backup recurrence, set this to a value up to 7 days. If you set a weekly backup recurrence, set this to a value up to 14 weeks. + You can set this to a value up to 14 weeks. 
- - - diff --git a/website/docs/r/firestore_document.html.markdown b/website/docs/r/firestore_document.html.markdown index 465fa056ae..5bf91a7d1b 100644 --- a/website/docs/r/firestore_document.html.markdown +++ b/website/docs/r/firestore_document.html.markdown @@ -37,8 +37,6 @@ If you wish to use App Engine, you may instead create a `google_app_engine_application` resource with `database_type` set to `"CLOUD_FIRESTORE"`. Your Firestore location will be the same as the App Engine location specified. -Note: The surface does not support configurable database id. Only `(default)` -is allowed for the database parameter. ## Example Usage - Firestore Document Basic @@ -175,7 +173,7 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `{{name}}` * `name` - - A server defined name for this index. Format: + A server defined name for this document. Format: `projects/{{project_id}}/databases/{{database_id}}/documents/{{path}}/{{document_id}}` * `path` - diff --git a/website/docs/r/firestore_index.html.markdown b/website/docs/r/firestore_index.html.markdown index 90cafe36a0..4e6e9b6f4a 100644 --- a/website/docs/r/firestore_index.html.markdown +++ b/website/docs/r/firestore_index.html.markdown @@ -101,6 +101,44 @@ resource "google_firestore_index" "my-index" { } } ``` +## Example Usage - Firestore Index Vector + + +```hcl +resource "google_firestore_database" "database" { + project = "my-project-name" + name = "database-id-vector" + location_id = "nam5" + type = "FIRESTORE_NATIVE" + + delete_protection_state = "DELETE_PROTECTION_DISABLED" + deletion_policy = "DELETE" +} + +resource "google_firestore_index" "my-index" { + project = "my-project-name" + database = google_firestore_database.database.name + collection = "atestcollection" + + fields { + field_path = "field_name" + order = "ASCENDING" + } + + fields { + field_path = "__name__" + order = "ASCENDING" + } + + fields { + field_path = 
"description" + vector_config { + dimension = 128 + flat {} + } + } +} +``` ## Argument Reference @@ -113,12 +151,12 @@ The following arguments are supported: * `fields` - (Required) - The fields supported by this index. The last field entry is always for - the field path `__name__`. If, on creation, `__name__` was not - specified as the last field, it will be added automatically with the - same direction as that of the last field defined. If the final field - in a composite index is not directional, the `__name__` will be - ordered `"ASCENDING"` (unless explicitly specified otherwise). + The fields supported by this index. The last non-stored field entry is + always for the field path `__name__`. If, on creation, `__name__` was not + specified as the last field, it will be added automatically with the same + direction as that of the last field defined. If the final field in a + composite index is not directional, the `__name__` will be ordered + `"ASCENDING"` (unless explicitly specified otherwise). Structure is [documented below](#nested_fields). @@ -131,15 +169,33 @@ The following arguments are supported: * `order` - (Optional) Indicates that this field supports ordering by the specified order or comparing using =, <, <=, >, >=. - Only one of `order` and `arrayConfig` can be specified. + Only one of `order`, `arrayConfig`, and `vectorConfig` can be specified. Possible values are: `ASCENDING`, `DESCENDING`. * `array_config` - (Optional) - Indicates that this field supports operations on arrayValues. Only one of `order` and `arrayConfig` can - be specified. + Indicates that this field supports operations on arrayValues. Only one of `order`, `arrayConfig`, and + `vectorConfig` can be specified. Possible values are: `CONTAINS`. +* `vector_config` - + (Optional) + Indicates that this field supports vector search operations. Only one of `order`, `arrayConfig`, and + `vectorConfig` can be specified. Vector Fields should come after the field path `__name__`. 
+ Structure is [documented below](#nested_vector_config). + + +The `vector_config` block supports: + +* `dimension` - + (Optional) + The resulting index will only include vectors of this dimension, and can be used for vector search + with the same dimension. + +* `flat` - + (Optional) + Indicates the vector index is a flat index. + - - - diff --git a/website/docs/r/gke_backup_backup_plan.html.markdown b/website/docs/r/gke_backup_backup_plan.html.markdown index 959f54e8b8..4ebd747e09 100644 --- a/website/docs/r/gke_backup_backup_plan.html.markdown +++ b/website/docs/r/gke_backup_backup_plan.html.markdown @@ -194,6 +194,144 @@ resource "google_gke_backup_backup_plan" "full" { } } ``` +## Example Usage - Gkebackup Backupplan Rpo Daily Window + + +```hcl +resource "google_container_cluster" "primary" { + name = "rpo-daily-cluster" + location = "us-central1" + initial_node_count = 1 + workload_identity_config { + workload_pool = "my-project-name.svc.id.goog" + } + addons_config { + gke_backup_agent_config { + enabled = true + } + } + deletion_protection = "true" + network = "default" + subnetwork = "default" +} + +resource "google_gke_backup_backup_plan" "rpo_daily_window" { + name = "rpo-daily-window" + cluster = google_container_cluster.primary.id + location = "us-central1" + retention_policy { + backup_delete_lock_days = 30 + backup_retain_days = 180 + } + backup_schedule { + paused = true + rpo_config { + target_rpo_minutes=1440 + exclusion_windows { + start_time { + hours = 12 + } + duration = "7200s" + daily = true + } + exclusion_windows { + start_time { + hours = 8 + minutes = 40 + seconds = 1 + nanos = 100 + } + duration = "3600s" + single_occurrence_date { + year = 2024 + month = 3 + day = 16 + } + } + } + } + backup_config { + include_volume_data = true + include_secrets = true + all_namespaces = true + } +} +``` +## Example Usage - Gkebackup Backupplan Rpo Weekly Window + + +```hcl +resource "google_container_cluster" "primary" { + name = 
"rpo-weekly-cluster" + location = "us-central1" + initial_node_count = 1 + workload_identity_config { + workload_pool = "my-project-name.svc.id.goog" + } + addons_config { + gke_backup_agent_config { + enabled = true + } + } + deletion_protection = "true" + network = "default" + subnetwork = "default" +} + +resource "google_gke_backup_backup_plan" "rpo_weekly_window" { + name = "rpo-weekly-window" + cluster = google_container_cluster.primary.id + location = "us-central1" + retention_policy { + backup_delete_lock_days = 30 + backup_retain_days = 180 + } + backup_schedule { + paused = true + rpo_config { + target_rpo_minutes=1440 + exclusion_windows { + start_time { + hours = 1 + minutes = 23 + } + duration = "1800s" + days_of_week { + days_of_week = ["MONDAY", "THURSDAY"] + } + } + exclusion_windows { + start_time { + hours = 12 + } + duration = "3600s" + single_occurrence_date { + year = 2024 + month = 3 + day = 17 + } + } + exclusion_windows { + start_time { + hours = 8 + minutes = 40 + } + duration = "600s" + single_occurrence_date { + year = 2024 + month = 3 + day = 18 + } + } + } + } + backup_config { + include_volume_data = true + include_secrets = true + all_namespaces = true + } +} +``` ## Argument Reference @@ -277,7 +415,9 @@ The following arguments are supported: existing Backups under it. Backups created AFTER a successful update will automatically pick up the new value. NOTE: backupRetainDays must be >= backupDeleteLockDays. - If cronSchedule is defined, then this must be <= 360 * the creation interval.] + If cronSchedule is defined, then this must be <= 360 * the creation interval. + If rpo_config is defined, then this must be + <= 360 * targetRpoMinutes/(1440minutes/day) * `locked` - (Optional) @@ -291,12 +431,118 @@ The following arguments are supported: (Optional) A standard cron string that defines a repeating schedule for creating Backups via this BackupPlan. 
+ This is mutually exclusive with the rpoConfig field since at most one + schedule can be defined for a BackupPlan. If this is defined, then backupRetainDays must also be defined. * `paused` - (Optional) This flag denotes whether automatic Backup creation is paused for this BackupPlan. +* `rpo_config` - + (Optional) + Defines the RPO schedule configuration for this BackupPlan. This is mutually + exclusive with the cronSchedule field since at most one schedule can be defined + for a BackupPlan. If this is defined, then backupRetainDays must also be defined. + Structure is [documented below](#nested_rpo_config). + + +The `rpo_config` block supports: + +* `target_rpo_minutes` - + (Required) + Defines the target RPO for the BackupPlan in minutes, which means the target + maximum data loss in time that is acceptable for this BackupPlan. This must be + at least 60, i.e., 1 hour, and at most 86400, i.e., 60 days. + +* `exclusion_windows` - + (Optional) + User specified time windows during which backup can NOT happen for this BackupPlan. + Backups should start and finish outside of any given exclusion window. Note: backup + jobs will be scheduled to start and finish outside the duration of the window as + much as possible, but running jobs will not get canceled when it runs into the window. + All the time and date values in exclusionWindows entry in the API are in UTC. We + only allow <=1 recurrence (daily or weekly) exclusion window for a BackupPlan, while there is no + restriction on the number of single occurrence windows. + Structure is [documented below](#nested_exclusion_windows). + + +The `exclusion_windows` block supports: + +* `start_time` - + (Required) + Specifies the start time of the window using time of the day in UTC. + Structure is [documented below](#nested_start_time). + +* `duration` - + (Required) + Specifies duration of the window in seconds with up to nine fractional digits, + terminated by 's'. Example: "3.5s".
Restrictions for duration based on the + recurrence type to allow some time for backup to happen: + - single_occurrence_date: no restriction + - daily window: duration < 24 hours + - weekly window: + - days of week includes all seven days of a week: duration < 24 hours + - all other weekly window: duration < 168 hours (i.e., 24 * 7 hours) + +* `single_occurrence_date` - + (Optional) + No recurrence. The exclusion window occurs only once and on this date in UTC. + Only one of singleOccurrenceDate, daily and daysOfWeek may be set. + Structure is [documented below](#nested_single_occurrence_date). + +* `daily` - + (Optional) + The exclusion window occurs every day if set to "True". + Specifying this field to "False" is an error. + Only one of singleOccurrenceDate, daily and daysOfWeek may be set. + +* `days_of_week` - + (Optional) + The exclusion window occurs on these days of each week in UTC. + Only one of singleOccurrenceDate, daily and daysOfWeek may be set. + Structure is [documented below](#nested_days_of_week). + + +The `start_time` block supports: + +* `hours` - + (Optional) + Hours of day in 24 hour format. + +* `minutes` - + (Optional) + Minutes of hour of day. + +* `seconds` - + (Optional) + Seconds of minutes of the time. + +* `nanos` - + (Optional) + Fractions of seconds in nanoseconds. + +The `single_occurrence_date` block supports: + +* `year` - + (Optional) + Year of the date. + +* `month` - + (Optional) + Month of a year. + +* `day` - + (Optional) + Day of a month. + +The `days_of_week` block supports: + +* `days_of_week` - + (Optional) + A list of days of week. + Each value may be one of: `MONDAY`, `TUESDAY`, `WEDNESDAY`, `THURSDAY`, `FRIDAY`, `SATURDAY`, `SUNDAY`. 
+ The `backup_config` block supports: * `include_volume_data` - diff --git a/website/docs/r/gkeonprem_vmware_cluster.html.markdown b/website/docs/r/gkeonprem_vmware_cluster.html.markdown index 0422848034..db267fbd77 100644 --- a/website/docs/r/gkeonprem_vmware_cluster.html.markdown +++ b/website/docs/r/gkeonprem_vmware_cluster.html.markdown @@ -123,6 +123,7 @@ resource "google_gkeonprem_vmware_cluster" "cluster-f5lb" { } vm_tracking_enabled = true enable_control_plane_v2 = true + disable_bundled_ingress = true authorization { admin_users { username = "testuser@gmail.com" @@ -374,6 +375,10 @@ The following arguments are supported: (Optional) Enable control plane V2. Default to false. +* `disable_bundled_ingress` - + (Optional) + Disable bundled ingress. + * `upgrade_policy` - (Optional) Specifies upgrade policy for the cluster. diff --git a/website/docs/r/google_project_iam_member_remove.html.markdown b/website/docs/r/google_project_iam_member_remove.html.markdown new file mode 100644 index 0000000000..74d5f9bb5b --- /dev/null +++ b/website/docs/r/google_project_iam_member_remove.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "Cloud Platform" +description: |- + Ensures that a member:role pairing does not exist in a project's IAM policy. +--- + +# google\_project\_iam\_member\_remove + +Ensures that a member:role pairing does not exist in a project's IAM policy. + +On create, this resource will modify the policy to remove the `member` from the +`role`. If the membership is ever re-added, the next refresh will clear this +resource from state, proposing re-adding it to correct the membership. Import is +not supported; this resource will acquire the current policy and modify it as +part of creating the resource. + +This resource will conflict with `google_project_iam_policy` and +`google_project_iam_binding` resources that share a role, as well as +`google_project_iam_member` resources that target the same membership.
When +multiple resources conflict the final state is not guaranteed to include or omit +the membership. Subsequent `terraform apply` calls will always show a diff +until the configuration is corrected. + +For more information see +[the official documentation](https://cloud.google.com/iam/docs/granting-changing-revoking-access) +and +[API reference](https://cloud.google.com/resource-manager/reference/rest/v1/projects/setIamPolicy). + +## Example Usage + +```hcl +data "google_project" "target_project" {} + +resource "google_project_iam_member_remove" "foo" { + role = "roles/editor" + project = data.google_project.target_project.project_id + member = "serviceAccount:${data.google_project.target_project.number}-compute@developer.gserviceaccount.com" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `project` - (Required) The project id of the target project. + +* `role` - (Required) The target role that should be removed. + +* `member` - (Required) The IAM principal that should not have the target role. + Each entry can have one of the following values: + * **user:{emailid}**: An email address that represents a specific Google account. For example, alice@gmail.com or joe@example.com. + * **serviceAccount:{emailid}**: An email address that represents a service account. For example, my-other-app@appspot.gserviceaccount.com. + * **group:{emailid}**: An email address that represents a Google group. For example, admins@example.com. + * **domain:{domain}**: A G Suite domain (primary, instead of alias) name that represents all the users of that domain. For example, google.com or example.com.
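For contrast with the service account example above, a minimal sketch removing a role from a single user account; the project ID and email address below are placeholders chosen for illustration, not values from this documentation:

```hcl
# Illustrative sketch only: "my-project-id" and the email address are placeholders.
# Ensures alice@example.com does not hold roles/editor on the project.
resource "google_project_iam_member_remove" "no_editor_for_alice" {
  project = "my-project-id"
  role    = "roles/editor"
  member  = "user:alice@example.com"
}
```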
+ diff --git a/website/docs/r/integrations_auth_config.html.markdown b/website/docs/r/integrations_auth_config.html.markdown new file mode 100644 index 0000000000..fa7193ae6e --- /dev/null +++ b/website/docs/r/integrations_auth_config.html.markdown @@ -0,0 +1,408 @@ +--- +# ---------------------------------------------------------------------------- +# +# *** AUTO GENERATED CODE *** Type: MMv1 *** +# +# ---------------------------------------------------------------------------- +# +# This file is automatically generated by Magic Modules and manual +# changes will be clobbered when the file is regenerated. +# +# Please read more about how to change this file in +# .github/CONTRIBUTING.md. +# +# ---------------------------------------------------------------------------- +subcategory: "Application Integration" +description: |- + The AuthConfig resource is used to hold channels and connection config data. +--- + +# google\_integrations\_auth\_config + +The AuthConfig resource is used to hold channels and connection config data.
+ + +To get more information about AuthConfig, see: + +* [API documentation](https://cloud.google.com/application-integration/docs/reference/rest/v1/projects.locations.authConfigs) +* How-to Guides + * [Official Documentation](https://cloud.google.com/application-integration/docs/overview) + * [Manage authentication profiles](https://cloud.google.com/application-integration/docs/configure-authentication-profiles) + +## Example Usage - Integrations Auth Config Basic + + +```hcl +resource "google_integrations_client" "client" { + location = "us-west1" + provision_gmek = true +} + +resource "google_integrations_auth_config" "basic_example" { + location = "us-west1" + display_name = "test-authconfig" + description = "Test auth config created via terraform" + decrypted_credential { + credential_type = "USERNAME_AND_PASSWORD" + username_and_password { + username = "test-username" + password = "test-password" + } + } + depends_on = [google_integrations_client.client] +} +``` + +## Argument Reference + +The following arguments are supported: + + +* `display_name` - + (Required) + The name of the auth config. + +* `location` - + (Required) + Location in which the client needs to be provisioned. + + +- - - + + +* `description` - + (Optional) + A description of the auth config. + +* `visibility` - + (Optional) + The visibility of the auth config. + Possible values are: `PRIVATE`, `CLIENT_VISIBLE`. + +* `expiry_notification_duration` - + (Optional) + User can define the time to receive notification after which the auth config becomes invalid. Supports up to 30 days. Supports granularity in hours. + A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". + +* `override_valid_time` - + (Optional) + User provided expiry time to override. For the example of Salesforce, username/password credentials can be valid for 6 months depending on the instance settings.
+ A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". + +* `decrypted_credential` - + (Optional) + Raw auth credentials. + Structure is [documented below](#nested_decrypted_credential). + +* `client_certificate` - + (Optional) + Raw client certificate + Structure is [documented below](#nested_client_certificate). + +* `project` - (Optional) The ID of the project in which the resource belongs. + If it is not provided, the provider project is used. + + +The `decrypted_credential` block supports: + +* `credential_type` - + (Required) + Credential type associated with auth configs. + +* `username_and_password` - + (Optional) + Username and password credential. + Structure is [documented below](#nested_username_and_password). + +* `oauth2_authorization_code` - + (Optional) + OAuth2 authorization code credential. + Structure is [documented below](#nested_oauth2_authorization_code). + +* `oauth2_client_credentials` - + (Optional) + OAuth2 client credentials. + Structure is [documented below](#nested_oauth2_client_credentials). + +* `jwt` - + (Optional) + JWT credential. + Structure is [documented below](#nested_jwt). + +* `auth_token` - + (Optional) + Auth token credential. + Structure is [documented below](#nested_auth_token). + +* `service_account_credentials` - + (Optional) + Service account credential. + Structure is [documented below](#nested_service_account_credentials). + +* `oidc_token` - + (Optional) + Google OIDC ID Token. + Structure is [documented below](#nested_oidc_token). + + +The `username_and_password` block supports: + +* `username` - + (Optional) + Username to be used. + +* `password` - + (Optional) + Password to be used. + +The `oauth2_authorization_code` block supports: + +* `client_id` - + (Optional) + The client's id. + +* `client_secret` - + (Optional) + The client's secret. 
+ +* `scope` - + (Optional) + A space-delimited list of requested scope permissions. + +* `auth_endpoint` - + (Optional) + The auth url endpoint to send the auth code request to. + +* `token_endpoint` - + (Optional) + The token url endpoint to send the token request to. + +The `oauth2_client_credentials` block supports: + +* `client_id` - + (Optional) + The client's ID. + +* `client_secret` - + (Optional) + The client's secret. + +* `token_endpoint` - + (Optional) + The token endpoint is used by the client to obtain an access token by presenting its authorization grant or refresh token. + +* `scope` - + (Optional) + A space-delimited list of requested scope permissions. + +* `token_params` - + (Optional) + Token parameters for the auth request. + Structure is [documented below](#nested_token_params). + +* `request_type` - + (Optional) + Represent how to pass parameters to fetch access token + Possible values are: `REQUEST_TYPE_UNSPECIFIED`, `REQUEST_BODY`, `QUERY_PARAMETERS`, `ENCODED_HEADER`. + + +The `token_params` block supports: + +* `entries` - + (Optional) + A list of parameter map entries. + Structure is [documented below](#nested_entries). + + +The `entries` block supports: + +* `key` - + (Optional) + Key of the map entry. + Structure is [documented below](#nested_key). + +* `value` - + (Optional) + Value of the map entry. + Structure is [documented below](#nested_value). + + +The `key` block supports: + +* `literal_value` - + (Optional) + Passing a literal value + Structure is [documented below](#nested_literal_value). + + +The `literal_value` block supports: + +* `string_value` - + (Optional) + String. + +The `value` block supports: + +* `literal_value` - + (Optional) + Passing a literal value + Structure is [documented below](#nested_literal_value). + + +The `literal_value` block supports: + +* `string_value` - + (Optional) + String. 
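As a sketch of how the `oauth2_client_credentials` and `token_params` blocks above compose inside a `decrypted_credential`; every endpoint, ID, secret, and the `OAUTH2_CLIENT_CREDENTIALS` enum value here are assumptions for illustration, not values taken from this page:

```hcl
# Illustrative sketch only: all IDs, secrets, and endpoints are placeholders.
resource "google_integrations_auth_config" "oauth2_example" {
  location     = "us-west1"
  display_name = "oauth2-client-credentials-example"

  decrypted_credential {
    # Assumed enum value for the OAuth2 client-credentials flow.
    credential_type = "OAUTH2_CLIENT_CREDENTIALS"

    oauth2_client_credentials {
      client_id      = "demo-client-id"
      client_secret  = "demo-client-secret"
      scope          = "read write"
      token_endpoint = "https://example.com/oauth2/token"
      request_type   = "REQUEST_BODY"

      # Extra parameters sent with the token request, as key/value literals.
      token_params {
        entries {
          key {
            literal_value {
              string_value = "audience"
            }
          }
          value {
            literal_value {
              string_value = "https://example.com/api"
            }
          }
        }
      }
    }
  }
}
```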
+ +The `jwt` block supports: + +* `jwt_header` - + (Optional) + Identifies which algorithm is used to generate the signature. + +* `jwt_payload` - + (Optional) + Contains a set of claims. The JWT specification defines seven Registered Claim Names which are the standard fields commonly included in tokens. Custom claims are usually also included, depending on the purpose of the token. + +* `secret` - + (Optional) + User's pre-shared secret to sign the token. + +* `jwt` - + (Output) + The token calculated by the header, payload and signature. + +The `auth_token` block supports: + +* `type` - + (Optional) + Authentication type, e.g. "Basic", "Bearer", etc. + +* `token` - + (Optional) + The token for the auth type. + +The `service_account_credentials` block supports: + +* `service_account` - + (Optional) + Name of the service account that has the permission to make the request. + +* `scope` - + (Optional) + A space-delimited list of requested scope permissions. + +The `oidc_token` block supports: + +* `service_account_email` - + (Optional) + The service account email to be used as the identity for the token. + +* `audience` - + (Optional) + Audience to be used when generating OIDC token. The audience claim identifies the recipients that the JWT is intended for. + +* `token` - + (Output) + ID token obtained for the service account. + +* `token_expire_time` - + (Output) + The approximate time until the token retrieved is valid. + A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". + +The `client_certificate` block supports: + +* `ssl_certificate` - + (Required) + The ssl certificate encoded in PEM format. This string must include the begin header and end footer lines. + +* `encrypted_private_key` - + (Required) + The ssl private key encoded in PEM format. This string must include the begin header and end footer lines.
+ +* `passphrase` - + (Optional) + 'passphrase' should be left unset if the private key is not encrypted. + Note that 'passphrase' is not the password for the web server, but an extra layer of security to protect the private key. + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `{{name}}` + +* `name` - + Resource name of the auth config. + +* `certificate_id` - + Certificate id for client certificate. + +* `credential_type` - + Credential type of the encrypted credential. + +* `creator_email` - + The creator's email address. Generated based on the End User Credentials/LOAS role of the user making the call. + +* `create_time` - + The timestamp when the auth config is created. + A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". + +* `last_modifier_email` - + The last modifier's email address. Generated based on the End User Credentials/LOAS role of the user making the call. + +* `update_time` - + The timestamp when the auth config is modified. + A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". + +* `state` - + The status of the auth config. + +* `reason` - + The reason / details of the current status. + +* `valid_time` - + The time until the auth config is valid. If empty or set to the max value, the auth config won't expire. + A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". + +* `encrypted_credential` - + Auth credential encrypted by Cloud KMS. Can be decrypted as Credential with proper KMS key. + A base64-encoded string.
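A hedged sketch of the `client_certificate` block described above; the file paths, passphrase, and the `CLIENT_CERTIFICATE_ONLY` enum value are assumptions for illustration, and real PEM values must include the full begin header and end footer lines:

```hcl
# Illustrative sketch only: paths, passphrase, and the credential_type enum are placeholders.
resource "google_integrations_auth_config" "cert_example" {
  location     = "us-west1"
  display_name = "client-certificate-example"

  decrypted_credential {
    # Assumed enum value for certificate-only authentication.
    credential_type = "CLIENT_CERTIFICATE_ONLY"
  }

  client_certificate {
    # Both values must be complete PEM strings, including BEGIN/END lines.
    ssl_certificate       = file("certs/client-cert.pem")
    encrypted_private_key = file("certs/client-key.pem")
    # Leave passphrase unset if the private key is not encrypted.
    passphrase            = "placeholder-passphrase"
  }
}
```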
+ + +## Timeouts + +This resource provides the following +[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: + +- `create` - Default is 20 minutes. +- `update` - Default is 20 minutes. +- `delete` - Default is 20 minutes. + +## Import + + +AuthConfig can be imported using any of these accepted formats: + +* `{{name}}` + + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import AuthConfig using one of the formats above. For example: + +```tf +import { + id = "{{name}}" + to = google_integrations_auth_config.default +} +``` + +When using the [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import), AuthConfig can be imported using one of the formats above. For example: + +``` +$ terraform import google_integrations_auth_config.default {{name}} +``` + +## User Project Overrides + +This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). diff --git a/website/docs/r/integrations_client.html.markdown b/website/docs/r/integrations_client.html.markdown index 52efc44032..bae1bb2860 100644 --- a/website/docs/r/integrations_client.html.markdown +++ b/website/docs/r/integrations_client.html.markdown @@ -40,6 +40,7 @@ To get more information about Client, see: ```hcl resource "google_integrations_client" "example" { location = "us-central1" + provision_gmek = true } ```
@@ -56,7 +57,7 @@ data "google_project" "test_project" { resource "google_kms_key_ring" "keyring" { name = "my-keyring" - location = "us-central1" + location = "us-east1" } resource "google_kms_crypto_key" "cryptokey" { @@ -71,19 +72,23 @@ resource "google_kms_crypto_key_version" "test_key" { depends_on = [google_kms_crypto_key.cryptokey] } +resource "google_service_account" "service_account" { + account_id = "service-account-id" + display_name = "Service Account" +} + resource "google_integrations_client" "example" { - location = "us-central1" + location = "us-east1" create_sample_workflows = true - provision_gmek = true - run_as_service_account = "radndom-service-account" + run_as_service_account = google_service_account.service_account.email cloud_kms_config { - kms_location = "us-central1" + kms_location = "us-east1" kms_ring = google_kms_key_ring.keyring.id key = google_kms_crypto_key.cryptokey.id key_version = google_kms_crypto_key_version.test_key.id - kms_project_id = data.google_project.test_project.id + kms_project_id = data.google_project.test_project.project_id } - depends_on = [google_kms_crypto_key_version.test_key] + depends_on = [google_kms_crypto_key_version.test_key, google_service_account.service_account] } ``` diff --git a/website/docs/r/looker_instance.html.markdown b/website/docs/r/looker_instance.html.markdown index cd93ff1906..ae78296208 100644 --- a/website/docs/r/looker_instance.html.markdown +++ b/website/docs/r/looker_instance.html.markdown @@ -40,7 +40,7 @@ To get more information about Instance, see: ```hcl resource "google_looker_instance" "looker-instance" { name = "my-instance" - platform_edition = "LOOKER_CORE_STANDARD" + platform_edition = "LOOKER_CORE_STANDARD_ANNUAL" region = "us-central1" oauth_config { client_id = "my-client-id" @@ -59,18 +59,12 @@ resource "google_looker_instance" "looker-instance" { ```hcl resource "google_looker_instance" "looker-instance" { name = "my-instance" - platform_edition = "LOOKER_CORE_STANDARD" + 
platform_edition = "LOOKER_CORE_STANDARD_ANNUAL" region = "us-central1" public_ip_enabled = true admin_settings { allowed_email_domains = ["google.com"] } - // User metadata config is only available when platform edition is LOOKER_CORE_STANDARD. - user_metadata { - additional_developer_user_count = 10 - additional_standard_user_count = 10 - additional_viewer_user_count = 10 - } maintenance_window { day_of_week = "THURSDAY" start_time { @@ -195,12 +189,14 @@ resource "google_kms_crypto_key_iam_member" "crypto_key" { ```hcl resource "google_looker_instance" "looker-instance" { name = "my-instance" - platform_edition = "LOOKER_CORE_STANDARD" + platform_edition = "LOOKER_CORE_STANDARD_ANNUAL" region = "us-central1" oauth_config { client_id = "my-client-id" client_secret = "my-client-secret" } + // After your Looker (Google Cloud core) instance has been created, you can set up, view information about, or delete a custom domain for your instance. + // This requires two terraform applies: one to create the instance, and another to set up the custom domain. custom_domain { domain = "my-custom-domain.com" } @@ -259,8 +255,8 @@ The following arguments are supported: * `platform_edition` - (Optional) Platform editions for a Looker instance. Each edition maps to a set of instance features, like its size.
Must be one of these values: - - LOOKER_CORE_TRIAL: trial instance - - LOOKER_CORE_STANDARD: pay as you go standard instance + - LOOKER_CORE_TRIAL: trial instance (Currently Unavailable) + - LOOKER_CORE_STANDARD: pay as you go standard instance (Currently Unavailable) - LOOKER_CORE_STANDARD_ANNUAL: subscription standard instance - LOOKER_CORE_ENTERPRISE_ANNUAL: subscription enterprise instance - LOOKER_CORE_EMBED_ANNUAL: subscription embed instance diff --git a/website/docs/r/monitoring_uptime_check_config.html.markdown b/website/docs/r/monitoring_uptime_check_config.html.markdown index eb593594c1..92248532c1 100644 --- a/website/docs/r/monitoring_uptime_check_config.html.markdown +++ b/website/docs/r/monitoring_uptime_check_config.html.markdown @@ -134,6 +134,9 @@ resource "google_monitoring_uptime_check_config" "https" { port = "443" use_ssl = true validate_ssl = true + service_agent_authentication { + type = "OIDC_TOKEN" + } } monitored_resource { @@ -362,9 +365,14 @@ The following arguments are supported: * `auth_info` - (Optional) - The authentication information. Optional when creating an HTTP check; defaults to empty. + The authentication information using username and password. Optional when creating an HTTP check; defaults to empty. Do not use with other authentication fields. Structure is [documented below](#nested_auth_info). +* `service_agent_authentication` - + (Optional) + The authentication information using the Monitoring Service Agent. Optional when creating an HTTPS check; defaults to empty. Do not use with other authentication fields. + Structure is [documented below](#nested_service_agent_authentication). + * `port` - (Optional) The port to the page to run the check against. Will be combined with `host` (specified within the [`monitored_resource`](#nested_monitored_resource)) and path to construct the full URL. Optional (defaults to 80 without SSL, or 443 with SSL). 
@@ -415,6 +423,13 @@ The following arguments are supported: (Required) The username to authenticate. +The `service_agent_authentication` block supports: + +* `type` - + (Optional) + The type of authentication to use. + Possible values are: `SERVICE_AGENT_AUTHENTICATION_TYPE_UNSPECIFIED`, `OIDC_TOKEN`. + The `accepted_response_status_codes` block supports: * `status_value` - diff --git a/website/docs/r/network_connectivity_internal_range.html.markdown b/website/docs/r/network_connectivity_internal_range.html.markdown new file mode 100644 index 0000000000..13d12963cf --- /dev/null +++ b/website/docs/r/network_connectivity_internal_range.html.markdown @@ -0,0 +1,266 @@ +--- +# ---------------------------------------------------------------------------- +# +# *** AUTO GENERATED CODE *** Type: MMv1 *** +# +# ---------------------------------------------------------------------------- +# +# This file is automatically generated by Magic Modules and manual +# changes will be clobbered when the file is regenerated. +# +# Please read more about how to change this file in +# .github/CONTRIBUTING.md. +# +# ---------------------------------------------------------------------------- +subcategory: "Network Connectivity" +description: |- + The internal range resource for IPAM operations within a VPC network. +--- + +# google\_network\_connectivity\_internal\_range + +The internal range resource for IPAM operations within a VPC network. Used to represent a private address range along with behavioral characteristics of that range (its usage and peering behavior). Networking resources can link to this range if they are created as belonging to it.
+ + +To get more information about InternalRange, see: + +* [API documentation](https://cloud.google.com/network-connectivity/docs/reference/networkconnectivity/rest/v1/projects.locations.internalRanges) +* How-to Guides + * [Use internal ranges](https://cloud.google.com/vpc/docs/create-use-internal-ranges) + + +## Example Usage - Network Connectivity Internal Ranges Basic + + +```hcl +resource "google_network_connectivity_internal_range" "default" { + name = "basic" + description = "Test internal range" + network = google_compute_network.default.self_link + usage = "FOR_VPC" + peering = "FOR_SELF" + ip_cidr_range = "10.0.0.0/24" + + labels = { + label-a: "b" + } +} + +resource "google_compute_network" "default" { + name = "internal-ranges" + auto_create_subnetworks = false +} +``` + +## Example Usage - Network Connectivity Internal Ranges Automatic Reservation + + +```hcl +resource "google_network_connectivity_internal_range" "default" { + name = "automatic-reservation" + network = google_compute_network.default.id + usage = "FOR_VPC" + peering = "FOR_SELF" + prefix_length = 24 + target_cidr_range = [ + "192.16.0.0/16" + ] +} + +resource "google_compute_network" "default" { + name = "internal-ranges" + auto_create_subnetworks = false +} +``` + +## Example Usage - Network Connectivity Internal Ranges External Ranges + + +```hcl +resource "google_network_connectivity_internal_range" "default" { + name = "external-ranges" + network = google_compute_network.default.id + usage = "EXTERNAL_TO_VPC" + peering = "FOR_SELF" + ip_cidr_range = "172.16.0.0/24" + + labels = { + external-reserved-range: "on-premises" + } +} + +resource "google_compute_network" "default" { + name = "internal-ranges" + auto_create_subnetworks = false +} +``` + +## Example Usage - Network Connectivity Internal Ranges Reserve With Overlap + + +```hcl +resource "google_network_connectivity_internal_range" "default" { + name = "overlap-range" + description = "Test internal range" + network = 
google_compute_network.default.id + usage = "FOR_VPC" + peering = "FOR_SELF" + ip_cidr_range = "10.0.0.0/30" + + overlaps = [ + "OVERLAP_EXISTING_SUBNET_RANGE" + ] + + depends_on = [ + google_compute_subnetwork.default + ] +} + +resource "google_compute_network" "default" { + name = "internal-ranges" + auto_create_subnetworks = false +} + +resource "google_compute_subnetwork" "default" { + name = "overlapping-subnet" + ip_cidr_range = "10.0.0.0/24" + region = "us-central1" + network = google_compute_network.default.id +} +``` + +## Argument Reference + +The following arguments are supported: + + +* `name` - + (Required) + The name of the internal range. + +* `network` - + (Required) + Fully-qualified URL of the network that this internal range applies to, for example: projects/my-project/global/networks/my-network. + +* `usage` - + (Required) + The type of usage set for this InternalRange. + Possible values are: `FOR_VPC`, `EXTERNAL_TO_VPC`. + +* `peering` - + (Required) + The type of peering set for this internal range. + Possible values are: `FOR_SELF`, `FOR_PEER`, `NOT_SHARED`. + + +- - - + + +* `labels` - + (Optional) + User-defined labels. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + +* `description` - + (Optional) + An optional description of this resource. + +* `ip_cidr_range` - + (Optional) + The IP range that this internal range defines. + +* `prefix_length` - + (Optional) + An alternative to ipCidrRange. Can be set when trying to create a reservation that automatically finds a free range of the given size. + If both ipCidrRange and prefixLength are set, there is an error if the range sizes do not match. Can also be used during updates to change the range size. + +* `target_cidr_range` - + (Optional) + Optional. Can be set to narrow down or pick a different address space while searching for a free range.
+ If not set, defaults to the "10.0.0.0/8" address space. This can be used to search in other rfc-1918 address spaces like "172.16.0.0/12" and "192.168.0.0/16" or non-rfc-1918 address spaces used in the VPC. + +* `overlaps` - + (Optional) + Optional. Types of resources that are allowed to overlap with the current internal range. + Each value may be one of: `OVERLAP_ROUTE_RANGE`, `OVERLAP_EXISTING_SUBNET_RANGE`. + +* `project` - (Optional) The ID of the project in which the resource belongs. + If it is not provided, the provider project is used. + + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `projects/{{project}}/locations/global/internalRanges/{{name}}` + +* `users` - + Output only. The list of resources that refer to this internal range. + Resources that use the internal range for their range allocation are referred to as users of the range. + Other resources mark themselves as users by creating a reference to this internal range. Having a user, based on this reference, prevents deletion of the internal range referred to. Can be empty. + +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + + +## Timeouts + +This resource provides the following +[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: + +- `create` - Default is 30 minutes. +- `update` - Default is 30 minutes. +- `delete` - Default is 30 minutes.
+ +## Import + + +InternalRange can be imported using any of these accepted formats: + +* `projects/{{project}}/locations/global/internalRanges/{{name}}` +* `{{project}}/{{name}}` +* `{{name}}` + + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import InternalRange using one of the formats above. For example: + +```tf +import { + id = "projects/{{project}}/locations/global/internalRanges/{{name}}" + to = google_network_connectivity_internal_range.default +} +``` + +When using the [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import), InternalRange can be imported using one of the formats above. For example: + +``` +$ terraform import google_network_connectivity_internal_range.default projects/{{project}}/locations/global/internalRanges/{{name}} +$ terraform import google_network_connectivity_internal_range.default {{project}}/{{name}} +$ terraform import google_network_connectivity_internal_range.default {{name}} +``` + +## User Project Overrides + +This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). 
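As a sketch of how other networking resources can be created as belonging to a reserved range (an illustrative addition, not part of the generated documentation; the resource names assume the basic example above), a subnetwork can reuse the CIDR held by an internal range:

```hcl
# Hedged sketch: carve a subnet out of the CIDR reserved by the
# internal range, so subnet allocation stays consistent with IPAM.
# Assumes google_network_connectivity_internal_range.default and
# google_compute_network.default from the basic example above.
resource "google_compute_subnetwork" "from_internal_range" {
  name          = "subnet-from-internal-range"
  region        = "us-central1"
  network       = google_compute_network.default.id
  # Reference the range's CIDR rather than hard-coding it; with an
  # automatic reservation this value is computed by the service.
  ip_cidr_range = google_network_connectivity_internal_range.default.ip_cidr_range
}
```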
diff --git a/website/docs/r/network_security_firewall_endpoint.html.markdown b/website/docs/r/network_security_firewall_endpoint.html.markdown index 2e6f150405..bea7e4a163 100644 --- a/website/docs/r/network_security_firewall_endpoint.html.markdown +++ b/website/docs/r/network_security_firewall_endpoint.html.markdown @@ -36,20 +36,21 @@ To get more information about FirewallEndpoint, see: * [Create and associate firewall endpoints](https://cloud.google.com/firewall/docs/configure-firewall-endpoints) ~> **Warning:** If you are using User ADCs (Application Default Credentials) with this resource, -you must specify a `billing_project` and set `user_project_override` to true +you must specify a `billing_project_id` and set `user_project_override` to true in the provider configuration. Otherwise the ACM API will return a 403 error. Your account must have the `serviceusage.services.use` permission on the -`billing_project` you defined. +`billing_project_id` you defined. ## Example Usage - Network Security Firewall Endpoint Basic ```hcl resource "google_network_security_firewall_endpoint" "default" { - provider = google-beta - name = "my-firewall-endpoint" - parent = "organizations/123456789" - location = "us-central1-a" + provider = google-beta + name = "my-firewall-endpoint" + parent = "organizations/123456789" + location = "us-central1-a" + billing_project_id = "my-project-name" labels = { foo = "bar" diff --git a/website/docs/r/network_security_firewall_endpoint_association.html.markdown b/website/docs/r/network_security_firewall_endpoint_association.html.markdown index 40e057c740..b4228f39ae 100644 --- a/website/docs/r/network_security_firewall_endpoint_association.html.markdown +++ b/website/docs/r/network_security_firewall_endpoint_association.html.markdown @@ -98,6 +98,12 @@ The following arguments are supported: **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource. +* `disabled` - + (Optional) + Whether the association is disabled. True indicates that traffic will not be intercepted. + ~> **Note:** The API rejects the request if this value is set to true when creating the resource; + the association can only be disabled on an update. + * `parent` - (Optional) The name of the parent this firewall endpoint association belongs to. diff --git a/website/docs/r/parallelstore_instance.html.markdown b/website/docs/r/parallelstore_instance.html.markdown new file mode 100644 index 0000000000..c48e39df7a --- /dev/null +++ b/website/docs/r/parallelstore_instance.html.markdown @@ -0,0 +1,233 @@ +--- +# ---------------------------------------------------------------------------- +# +# *** AUTO GENERATED CODE *** Type: MMv1 *** +# +# ---------------------------------------------------------------------------- +# +# This file is automatically generated by Magic Modules and manual +# changes will be clobbered when the file is regenerated. +# +# Please read more about how to change this file in +# .github/CONTRIBUTING.md. +# +# ---------------------------------------------------------------------------- +subcategory: "Parallelstore" +description: |- + A Parallelstore Instance. +--- + +# google\_parallelstore\_instance + +A Parallelstore Instance. + +~> **Warning:** This resource is in beta, and should be used with the terraform-provider-google-beta provider. +See [Provider Versions](https://terraform.io/docs/providers/google/guides/provider_versions.html) for more details on beta resources.
+ + + +## Example Usage - Parallelstore Instance Basic + + +```hcl +resource "google_parallelstore_instance" "instance" { + instance_id = "instance" + location = "us-central1-a" + description = "test instance" + capacity_gib = 12000 + network = google_compute_network.network.name + + labels = { + test = "value" + } + provider = google-beta + depends_on = [google_service_networking_connection.default] +} + +resource "google_compute_network" "network" { + name = "network" + auto_create_subnetworks = true + mtu = 8896 + provider = google-beta +} + + + +# Create an IP address +resource "google_compute_global_address" "private_ip_alloc" { + name = "address" + purpose = "VPC_PEERING" + address_type = "INTERNAL" + prefix_length = 24 + network = google_compute_network.network.id + provider = google-beta +} + +# Create a private connection +resource "google_service_networking_connection" "default" { + network = google_compute_network.network.id + service = "servicenetworking.googleapis.com" + reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] + provider = google-beta +} +``` + +## Argument Reference + +The following arguments are supported: + + +* `capacity_gib` - + (Required) + Immutable. Storage capacity of Parallelstore instance in Gibibytes (GiB). + +* `location` - + (Required) + Part of `parent`. See documentation of `projectsId`. + +* `instance_id` - + (Required) + The logical name of the Parallelstore instance in the user project with the following restrictions: + * Must contain only lowercase letters, numbers, and hyphens. + * Must start with a letter. + * Must be between 1-63 characters. + * Must end with a number or a letter. + * Must be unique within the customer project/ location + + +- - - + + +* `description` - + (Optional) + The description of the instance. 2048 characters or less. 
+ +* `labels` - + (Optional) + Cloud Labels are a flexible and lightweight mechanism for organizing cloud + resources into groups that reflect a customer's organizational needs and + deployment strategies. Cloud Labels can be used to filter collections of + resources. They can be used to control how resource metrics are aggregated. + And they can be used as arguments to policy management rules (e.g. route, + firewall, load balancing, etc.). + * Label keys must be between 1 and 63 characters long and must conform to + the following regular expression: `a-z{0,62}`. + * Label values must be between 0 and 63 characters long and must conform + to the regular expression `[a-z0-9_-]{0,63}`. + * No more than 64 labels can be associated with a given resource. + See https://goo.gl/xmQnxf for more information on and examples of labels. + If you plan to use labels in your own code, please note that additional + characters may be allowed in the future. Therefore, you are advised to use + an internal label representation, such as JSON, which doesn't rely upon + specific characters being disallowed. For example, representing labels + as the string: name + "_" + value would prove problematic if we were to + allow "_" in a future release. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + +* `network` - + (Optional) + Immutable. The name of the Google Compute Engine + [VPC network](https://cloud.google.com/vpc/docs/vpc) to which the + instance is connected. + +* `reserved_ip_range` - + (Optional) + Immutable. Contains the id of the allocated IP address range associated with the + private service access connection for example, "test-default" associated + with IP range 10.0.0.0/29. If no range id is provided all ranges will be + considered. + +* `project` - (Optional) The ID of the project in which the resource belongs. 
+ If it is not provided, the provider project is used. + + +## Attributes Reference + +In addition to the arguments listed above, the following computed attributes are exported: + +* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{location}}/instances/{{instance_id}}` + +* `name` - + The resource name of the instance, in the format + `projects/{project}/locations/{location}/instances/{instance_id}` + +* `state` - + The instance state. + Possible values: + STATE_UNSPECIFIED + CREATING + ACTIVE + DELETING + FAILED + +* `create_time` - + The time when the instance was created. + +* `update_time` - + The time when the instance was updated. + +* `daos_version` - + The version of DAOS software running in the instance. + +* `access_points` - + List of access_points. + Contains a list of IPv4 addresses used for client side configuration. + +* `effective_reserved_ip_range` - + Immutable. Contains the id of the allocated IP address range associated with the + private service access connection for example, "test-default" associated + with IP range 10.0.0.0/29. This field is populated by the service + and contains the value currently used by the service. + +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + + +## Timeouts + +This resource provides the following +[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: + +- `create` - Default is 20 minutes. +- `update` - Default is 20 minutes. +- `delete` - Default is 20 minutes.
+ +## Import + + +Instance can be imported using any of these accepted formats: + +* `projects/{{project}}/locations/{{location}}/instances/{{instance_id}}` +* `{{project}}/{{location}}/{{instance_id}}` +* `{{location}}/{{instance_id}}` + + +In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import) to import Instance using one of the formats above. For example: + +```tf +import { + id = "projects/{{project}}/locations/{{location}}/instances/{{instance_id}}" + to = google_parallelstore_instance.default +} +``` + +When using the [`terraform import` command](https://developer.hashicorp.com/terraform/cli/commands/import), Instance can be imported using one of the formats above. For example: + +``` +$ terraform import google_parallelstore_instance.default projects/{{project}}/locations/{{location}}/instances/{{instance_id}} +$ terraform import google_parallelstore_instance.default {{project}}/{{location}}/{{instance_id}} +$ terraform import google_parallelstore_instance.default {{location}}/{{instance_id}} +``` + +## User Project Overrides + +This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). 
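Since `access_points` is what clients consume, a minimal sketch (an illustrative addition, not part of the generated documentation; assumes the `instance` resource from the basic example above) exporting them:

```hcl
# Expose the instance's access points (IPv4 addresses used for
# client-side configuration) so they can be fed to client setup tooling.
output "parallelstore_access_points" {
  value = google_parallelstore_instance.instance.access_points
}
```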
diff --git a/website/docs/r/privateca_certificate.html.markdown b/website/docs/r/privateca_certificate.html.markdown index f5cadefcb8..6f5ed9a91d 100644 --- a/website/docs/r/privateca_certificate.html.markdown +++ b/website/docs/r/privateca_certificate.html.markdown @@ -426,6 +426,102 @@ resource "google_privateca_certificate" "default" { depends_on = [google_privateca_certificate_authority.default] } ``` +## Example Usage - Privateca Certificate Custom Ski + + +```hcl +resource "google_privateca_ca_pool" "default" { + location = "us-central1" + name = "my-pool" + tier = "ENTERPRISE" +} + +resource "google_privateca_certificate_authority" "default" { + location = "us-central1" + pool = google_privateca_ca_pool.default.name + certificate_authority_id = "my-authority" + config { + subject_config { + subject { + organization = "HashiCorp" + common_name = "my-certificate-authority" + } + subject_alt_name { + dns_names = ["hashicorp.com"] + } + } + x509_config { + ca_options { + is_ca = true + } + key_usage { + base_key_usage { + digital_signature = true + cert_sign = true + crl_sign = true + } + extended_key_usage { + server_auth = true + } + } + } + } + lifetime = "86400s" + key_spec { + algorithm = "RSA_PKCS1_4096_SHA256" + } + + // Disable CA deletion related safe checks for easier cleanup. 
+ deletion_protection = false + skip_grace_period = true + ignore_active_certificates_on_deletion = true +} + + +resource "google_privateca_certificate" "default" { + location = "us-central1" + pool = google_privateca_ca_pool.default.name + name = "my-certificate" + lifetime = "860s" + config { + subject_config { + subject { + common_name = "san1.example.com" + country_code = "us" + organization = "google" + organizational_unit = "enterprise" + locality = "mountain view" + province = "california" + street_address = "1600 amphitheatre parkway" + postal_code = "94109" + } + } + subject_key_id { + key_id = "4cf3372289b1d411b999dbb9ebcd44744b6b2fca" + } + x509_config { + ca_options { + is_ca = false + } + key_usage { + base_key_usage { + crl_sign = true + } + extended_key_usage { + server_auth = true + } + } + } + public_key { + format = "PEM" + key = filebase64("test-fixtures/rsa_public.pem") + } + } + // Certificates require an authority to exist in the pool, though they don't + // need to be explicitly connected to it + depends_on = [google_privateca_certificate_authority.default] +} +``` ## Argument Reference @@ -502,6 +598,11 @@ The following arguments are supported: Specifies some of the values in a certificate that are related to the subject. Structure is [documented below](#nested_subject_config). +* `subject_key_id` - + (Optional) + When specified, this provides a custom SKI to be used in the certificate. This should only be used to maintain a SKI of an existing CA originally created outside CA service, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. + Structure is [documented below](#nested_subject_key_id). + * `public_key` - (Required) A PublicKey describes a public key. @@ -807,6 +908,12 @@ The following arguments are supported: (Optional) Contains only valid 32-bit IPv4 addresses or RFC 4291 IPv6 addresses. +The `subject_key_id` block supports: + +* `key_id` - + (Optional) + The value of the KeyId in lowercase hexadecimal.
+ The `public_key` block supports: * `key` - diff --git a/website/docs/r/privateca_certificate_authority.html.markdown b/website/docs/r/privateca_certificate_authority.html.markdown index b680cdbdfc..c345208dc6 100644 --- a/website/docs/r/privateca_certificate_authority.html.markdown +++ b/website/docs/r/privateca_certificate_authority.html.markdown @@ -270,6 +270,67 @@ resource "google_privateca_certificate_authority" "default" { ] } ``` + +## Example Usage - Privateca Certificate Authority Custom Ski + + +```hcl +resource "google_privateca_certificate_authority" "default" { + // This example assumes this pool already exists. + // Pools cannot be deleted in normal test circumstances, so we depend on static pools + pool = "ca-pool" + certificate_authority_id = "my-certificate-authority" + location = "us-central1" + deletion_protection = "true" + config { + subject_config { + subject { + organization = "HashiCorp" + common_name = "my-certificate-authority" + } + subject_alt_name { + dns_names = ["hashicorp.com"] + } + } + subject_key_id { + key_id = "4cf3372289b1d411b999dbb9ebcd44744b6b2fca" + } + x509_config { + ca_options { + is_ca = true + max_issuer_path_length = 10 + } + key_usage { + base_key_usage { + digital_signature = true + content_commitment = true + key_encipherment = false + data_encipherment = true + key_agreement = true + cert_sign = true + crl_sign = true + decipher_only = true + } + extended_key_usage { + server_auth = true + client_auth = false + email_protection = true + code_signing = true + time_stamping = true + } + } + } + } + lifetime = "86400s" + key_spec { + cloud_kms_key_version = "projects/keys-project/locations/us-central1/keyRings/key-ring/cryptoKeys/crypto-key/cryptoKeyVersions/1" + } +} +``` ## Argument Reference @@ -304,6 +365,11 @@ The following arguments are supported: The `config` block supports: +* `subject_key_id` - + (Optional) + When specified this provides a custom SKI to be used in the certificate. 
This should only be used to maintain a SKI of an existing CA originally created outside CA service, which was not generated using method (1) described in RFC 5280 section 4.2.1.2. + Structure is [documented below](#nested_subject_key_id). + +* `x509_config` - + (Required) + Describes how some of the technical X.509 fields in a certificate should be populated. @@ -315,6 +381,12 @@ The following arguments are supported: Structure is [documented below](#nested_subject_config). + +The `subject_key_id` block supports: + +* `key_id` - + (Optional) + The value of the KeyId in lowercase hexadecimal. + The `x509_config` block supports: * `additional_extensions` - diff --git a/website/docs/r/redis_cluster.html.markdown b/website/docs/r/redis_cluster.html.markdown index 1596c0c7f7..ebd3025dd7 100644 --- a/website/docs/r/redis_cluster.html.markdown +++ b/website/docs/r/redis_cluster.html.markdown @@ -45,6 +45,7 @@ resource "google_redis_cluster" "cluster-ha" { } region = "us-central1" replica_count = 1 + node_type = "REDIS_SHARED_CORE_NANO" transit_encryption_mode = "TRANSIT_ENCRYPTION_MODE_DISABLED" authorization_mode = "AUTH_MODE_DISABLED" depends_on = [ @@ -126,6 +127,12 @@ The following arguments are supported: Default value is `TRANSIT_ENCRYPTION_MODE_DISABLED`. Possible values are: `TRANSIT_ENCRYPTION_MODE_UNSPECIFIED`, `TRANSIT_ENCRYPTION_MODE_DISABLED`, `TRANSIT_ENCRYPTION_MODE_SERVER_AUTHENTICATION`. +* `node_type` - + (Optional) + The nodeType for the Redis cluster. + If not provided, REDIS_HIGHMEM_MEDIUM will be used as the default. + Possible values are: `REDIS_SHARED_CORE_NANO`, `REDIS_HIGHMEM_MEDIUM`, `REDIS_HIGHMEM_XLARGE`, `REDIS_STANDARD_SMALL`. + * `replica_count` - (Optional) Optional. The number of replica nodes per shard. @@ -172,6 +179,9 @@ In addition to the arguments listed above, the following computed attributes are * `size_gb` - Output only. Redis memory size in GB for the entire cluster. +* `precise_size_gb` - + Output only.
Redis memory precise size in GB for the entire cluster. + The `discovery_endpoints` block contains: diff --git a/website/docs/r/secret_manager_secret.html.markdown b/website/docs/r/secret_manager_secret.html.markdown index 00f2b3ac40..748d38db6d 100644 --- a/website/docs/r/secret_manager_secret.html.markdown +++ b/website/docs/r/secret_manager_secret.html.markdown @@ -83,6 +83,25 @@ resource "google_secret_manager_secret" "secret-with-annotations" { } } ``` + +## Example Usage - Secret With Version Destroy Ttl + + +```hcl +resource "google_secret_manager_secret" "secret-with-version-destroy-ttl" { + secret_id = "secret" + + version_destroy_ttl = "2592000s" + + replication { + auto {} + } +} +```
@@ -229,6 +248,14 @@ The following arguments are supported: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. +* `version_destroy_ttl` - + (Optional) + Secret Version TTL after destruction request. + This is part of the delayed delete feature on Secret Version. + For secrets with `versionDestroyTtl` > 0, version destruction doesn't happen immediately + on calling destroy; instead, the version goes to a disabled state and + the actual destruction happens after this TTL expires. + * `topics` - (Optional) A list of up to 10 Pub/Sub topics to which messages are published when control plane operations are called on the secret or its versions. diff --git a/website/docs/r/securityposture_posture.html.markdown b/website/docs/r/securityposture_posture.html.markdown index fc17c038d7..3e016d0a6e 100644 --- a/website/docs/r/securityposture_posture.html.markdown +++ b/website/docs/r/securityposture_posture.html.markdown @@ -53,8 +53,8 @@ resource "google_securityposture_posture" "posture1"{ enforce = true condition { description = "condition description" - expression = "resource.matchTag('org_id/tag_key_short_name,'tag_value_short_name')" - title = "a CEL condition" + expression = "resource.matchTag('org_id/tag_key_short_name', 'tag_value_short_name')" + title = "a CEL condition" } } } @@ -65,9 +65,9 @@ resource "google_securityposture_posture" "posture1"{ constraint { org_policy_constraint_custom { custom_constraint { - name = "organizations/123456789/customConstraints/custom.disableGkeAutoUpgrade" - display_name = "Disable GKE auto upgrade" - description = "Only allow GKE NodePool resource to be created or updated if AutoUpgrade is not enabled where this custom constraint is enforced."
+ name = "organizations/123456789/customConstraints/custom.disableGkeAutoUpgrade" + display_name = "Disable GKE auto upgrade" + description = "Only allow GKE NodePool resource to be created or updated if AutoUpgrade is not enabled where this custom constraint is enforced." action_type = "ALLOW" condition = "resource.management.autoUpgrade == false" method_types = ["CREATE", "UPDATE"] diff --git a/website/docs/r/securityposture_posture_deployment.html.markdown b/website/docs/r/securityposture_posture_deployment.html.markdown index 6e05e66369..656ffc911b 100644 --- a/website/docs/r/securityposture_posture_deployment.html.markdown +++ b/website/docs/r/securityposture_posture_deployment.html.markdown @@ -36,14 +36,14 @@ To get more information about PostureDeployment, see: ```hcl resource "google_securityposture_posture" "posture_1" { - posture_id = "posture_1" - parent = "organizations/123456789" - location = "global" - state = "ACTIVE" + posture_id = "posture_1" + parent = "organizations/123456789" + location = "global" + state = "ACTIVE" description = "a new posture" policy_sets { policy_set_id = "org_policy_set" - description = "set of org policies" + description = "set of org policies" policies { policy_id = "policy_1" constraint { @@ -59,13 +59,13 @@ resource "google_securityposture_posture" "posture_1" { } resource "google_securityposture_posture_deployment" "postureDeployment" { - posture_deployment_id = "posture_deployment_1" - parent = "organizations/123456789" - location = "global" - description = "a new posture deployment" - target_resource = "projects/1111111111111" - posture_id = google_securityposture_posture.posture_1.name - posture_revision_id = google_securityposture_posture.posture_1.revision_id + posture_deployment_id = "posture_deployment_1" + parent = "organizations/123456789" + location = "global" + description = "a new posture deployment" + target_resource = "projects/1111111111111" + posture_id = google_securityposture_posture.posture_1.name + 
posture_revision_id = google_securityposture_posture.posture_1.revision_id } ``` diff --git a/website/docs/r/sql_database_instance.html.markdown b/website/docs/r/sql_database_instance.html.markdown index fc057586ed..3bdf7ad727 100644 --- a/website/docs/r/sql_database_instance.html.markdown +++ b/website/docs/r/sql_database_instance.html.markdown @@ -288,6 +288,8 @@ The `settings` block supports: * `deletion_protection_enabled` - (Optional) Enables deletion protection of an instance at the GCP level. Enabling this protection will guard against accidental deletion across all surfaces (API, gcloud, Cloud Console and Terraform) by enabling the [GCP Cloud SQL instance deletion protection](https://cloud.google.com/sql/docs/postgres/deletion-protection). Terraform provider support was introduced in version 4.48.0. Defaults to `false`. +* `enable_google_ml_integration` - (Optional) Enables [Cloud SQL instances to connect to Vertex AI](https://cloud.google.com/sql/docs/postgres/integrate-cloud-sql-with-vertex-ai) and pass requests for real-time predictions and insights. Defaults to `false`. + * `disk_autoresize` - (Optional) Enables auto-resizing of the storage size. Defaults to `true`. * `disk_autoresize_limit` - (Optional) The maximum size to which storage capacity can be automatically increased. The default value is 0, which specifies that there is no limit. diff --git a/website/docs/r/vmwareengine_private_cloud.html.markdown b/website/docs/r/vmwareengine_private_cloud.html.markdown index b8265deb63..0612a15eee 100644 --- a/website/docs/r/vmwareengine_private_cloud.html.markdown +++ b/website/docs/r/vmwareengine_private_cloud.html.markdown @@ -158,6 +158,11 @@ The following arguments are supported: where the key is canonical identifier of the node type (corresponds to the NodeType). Structure is [documented below](#nested_node_type_configs). +* `stretched_cluster_config` - + (Optional) + The stretched cluster configuration for the private cloud. 
+ Structure is [documented below](#nested_stretched_cluster_config). + The `node_type_configs` block supports: @@ -174,6 +179,16 @@ The following arguments are supported: If zero is provided max value from `nodeType.availableCustomCoreCounts` will be used. This cannot be changed once the PrivateCloud is created. +The `stretched_cluster_config` block supports: + +* `preferred_location` - + (Optional) + Zone that will remain operational when connection between the two zones is lost. + +* `secondary_location` - + (Optional) + Additional zone for a higher level of availability and load balancing. + - - - @@ -184,7 +199,7 @@ The following arguments are supported: * `type` - (Optional) Initial type of the private cloud. - Possible values are: `STANDARD`, `TIME_LIMITED`. + Possible values are: `STANDARD`, `TIME_LIMITED`, `STRETCHED`. * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. diff --git a/website/docs/r/workbench_instance.html.markdown b/website/docs/r/workbench_instance.html.markdown index 6817b67a95..852001f859 100644 --- a/website/docs/r/workbench_instance.html.markdown +++ b/website/docs/r/workbench_instance.html.markdown @@ -77,8 +77,8 @@ resource "google_workbench_instance" "instance" { core_count = 1 } vm_image { - project = "deeplearning-platform-release" - family = "tf-latest-gpu" + project = "cloud-notebooks-managed" + family = "workbench-instances" } } } diff --git a/website/docs/r/workstations_workstation_cluster.html.markdown b/website/docs/r/workstations_workstation_cluster.html.markdown index 965fdbf2cc..50aba518e2 100644 --- a/website/docs/r/workstations_workstation_cluster.html.markdown +++ b/website/docs/r/workstations_workstation_cluster.html.markdown @@ -270,6 +270,10 @@ In addition to the arguments listed above, the following computed attributes are * `uid` - The system-generated UID of the resource. 
+* `control_plane_ip` - + The private IP address of the control plane for this workstation cluster. + Workstation VMs need access to this IP address to work with the service, so make sure that your firewall rules allow egress from the workstation VMs to this address. + * `degraded` - Whether this resource is in degraded mode, in which case it may require user action to restore full functionality. Details can be found in the conditions field.
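The `control_plane_ip` note above implies an egress firewall rule from the workstation VMs. A minimal sketch, assuming a hypothetical network name and resource names (the workstation cluster resource is referenced here, not defined, and TCP/443 is an assumption about the service port):

```hcl
# Hypothetical sketch: permit egress from workstation VMs to the cluster's
# control plane IP. The firewall name and network are illustrative assumptions.
resource "google_compute_firewall" "allow-control-plane-egress" {
  name      = "allow-workstations-control-plane"
  network   = "my-workstations-network"
  direction = "EGRESS"

  # control_plane_ip is the computed attribute documented above.
  destination_ranges = [
    "${google_workstations_workstation_cluster.default.control_plane_ip}/32",
  ]

  allow {
    protocol = "tcp"
    ports    = ["443"] # HTTPS; the exact port is an assumption.
  }
}
```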