DevOps, Terraform

SOLVED: V1_BETA2 is not supported in Dataproc Image Version 2.1 and above

Terraform: Error 400 Creating Dataproc Cluster?

This error occurs because the v1beta2 Dataproc API, which older versions of the Google provider call, is not supported with Dataproc image version 2.1 and above. There are two possible approaches to resolve it:

Option 1: Upgrade Terraform and Google Provider Versions

  • Terraform Upgrade: Upgrading Terraform from 0.11.15 to a more recent version (preferably 1.x.x) will allow you to use a newer version of the Google provider.
  • Google Provider Upgrade: The Google provider version I was on (2.20.3) is severely outdated. Upgrading to the latest version adds support for recent Dataproc image versions, ensuring compatibility with Dataproc 2.2.x.
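As a sketch of what Option 1 looks like once the codebase is on Terraform 0.13 or later, you would pin the provider in a required_providers block and re-run terraform init -upgrade. The exact version constraints below are illustrative assumptions; any recent provider major version supports Dataproc image 2.1+:

```hcl
terraform {
  # Assumed constraint: any 0.13+/1.x Terraform can use this syntax.
  required_version = ">= 1.0"

  required_providers {
    google = {
      source = "hashicorp/google"
      # Illustrative constraint; pick the latest version your code supports.
      version = ">= 5.0"
    }
  }
}
```

After updating the constraint, run terraform init -upgrade so Terraform downloads the newer provider before you plan or apply.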

Option 2: Using a Null Resource to Create Dataproc Cluster using gcloud

This approach lets you define a null_resource that runs a gcloud command to create a Dataproc cluster dynamically using inputs from your existing Terraform module.

Here’s an example of how you can modify your current module to use a null_resource for cluster creation:

resource "null_resource" "create_dataproc_cluster" {
  provisioner "local-exec" {
    command = <<EOT
gcloud dataproc clusters create ${var.cluster_name} \
  --region=${var.gcp_region} \
  --project=${var.gcp_project_id} \
  --image-version=${var.gce_image_version} \
  --master-machine-type=${var.gce_instance_type_master} \
  --master-boot-disk-size=${var.gce_boot_disk_size_gb_master} \
  --num-workers=${var.gce_instance_count_worker} \
  --worker-machine-type=${var.gce_instance_type_worker} \
  --worker-boot-disk-size=${var.gce_boot_disk_size_gb_worker} \
  --subnet=${var.vpc_subnetwork_name} \
  --no-address \
  --service-account=${var.dpr_service_account} \
  --optional-components=${join(",", var.dataproc_cluster_optional_components)} \
  --properties=${join(",", var.dataproc_cluster_override_properties)}
EOT
  }

  depends_on = ["google_storage_bucket.bucket_name"]
}

Key Points:

  • The gcloud command is used to create the Dataproc cluster.
  • You are using dynamic inputs (from your current Terraform variables) such as var.cluster_name, var.gce_image_version, etc., for flexibility.
  • The local-exec provisioner runs the shell command on the machine executing terraform apply.
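For completeness, the variable declarations the snippet assumes could look like the following in Terraform 0.11 syntax. The names come from the module above; the types and the decision to leave defaults empty are assumptions:

```hcl
variable "cluster_name" {}
variable "gcp_region" {}
variable "gcp_project_id" {}
variable "gce_image_version" {}
variable "gce_instance_type_master" {}
variable "gce_boot_disk_size_gb_master" {}
variable "gce_instance_count_worker" {}
variable "gce_instance_type_worker" {}
variable "gce_boot_disk_size_gb_worker" {}
variable "vpc_subnetwork_name" {}
variable "dpr_service_account" {}

# Lists joined with join(",", ...) inside the gcloud command.
variable "dataproc_cluster_optional_components" {
  type = "list"
}

variable "dataproc_cluster_override_properties" {
  type = "list"
}
```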

This solution lets you bypass the limitations of your current Google provider version, but it shouldn’t be used in production environments. You should also plan to upgrade your Terraform and provider versions as soon as possible to avoid these kinds of problems.
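Once the provider is upgraded, the same cluster can be expressed declaratively with the provider’s native google_dataproc_cluster resource instead of a null_resource. The sketch below maps the gcloud flags onto that resource’s schema; verify the argument names against the provider documentation for the version you upgrade to:

```hcl
resource "google_dataproc_cluster" "cluster" {
  name    = var.cluster_name
  region  = var.gcp_region
  project = var.gcp_project_id

  cluster_config {
    gce_cluster_config {
      subnetwork       = var.vpc_subnetwork_name
      internal_ip_only = true # equivalent of --no-address
      service_account  = var.dpr_service_account
    }

    master_config {
      num_instances = 1
      machine_type  = var.gce_instance_type_master
      disk_config {
        boot_disk_size_gb = var.gce_boot_disk_size_gb_master
      }
    }

    worker_config {
      num_instances = var.gce_instance_count_worker
      machine_type  = var.gce_instance_type_worker
      disk_config {
        boot_disk_size_gb = var.gce_boot_disk_size_gb_worker
      }
    }

    software_config {
      image_version       = var.gce_image_version
      optional_components = var.dataproc_cluster_optional_components
      # Assumes the properties variable is reshaped into a map here,
      # rather than the list of "key=value" strings used by gcloud.
      override_properties = var.dataproc_cluster_override_properties
    }
  }
}
```

Unlike the null_resource, this resource is tracked in state, so Terraform can detect drift, plan in-place updates, and destroy the cluster without custom provisioners.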

Incorporate Deleting Dataproc Cluster in Null Resource

Add the following provisioner to the same null_resource block:

resource "null_resource" "create_dataproc_cluster" {
  # ... creation provisioner from above ...

  provisioner "local-exec" {
    when    = "destroy"
    command = <<EOT
gcloud dataproc clusters delete ${var.cluster_name} \
  --region=${var.gcp_region} \
  --quiet
EOT
  }

  depends_on = ["google_storage_bucket.bucket_name"]
}

Option 2 (using a null_resource) is not a recommended approach in the long run or for production environments due to several reasons:

  • Terraform’s Declarative Nature Is Compromised
  • Lack of State Awareness
  • Error Handling and Reusability
  • Maintainability and Scaling Issues
  • Security Concerns

Why I Had to Choose This Approach

I had to opt for this approach due to a time constraint in upgrading my entire code base from Terraform 0.11.15 to 0.13.7, which is required to use a newer Google provider version. The older Terraform version I was using does not support the newer Google APIs necessary to manage Dataproc clusters with image versions 2.1.x and above.

In summary: While this solution works as a temporary workaround for the error (creating Dataproc cluster: googleapi: Error 400: V1_BETA2 is not supported in Dataproc Image Version 2.1 and above. Please use V1), it sacrifices the long-term benefits of using Terraform in a fully declarative and maintainable way. Upgrading your Terraform and Google provider versions will eventually be essential to ensure maintainability, state tracking, and security.
