
Terraform learning summary (6) -- Terraform practice on the Alibaba Cloud platform


What is Terraform?

Terraform (https://www.terraform.io/) is one of HashiCorp's open-source DevOps tools (written in Go) for managing and operating infrastructure resources, and it sits alongside the rest of the HashiCorp DevOps tool chain.

Terraform can safely and efficiently build, change, and combine the service resources of multiple cloud vendors. It currently supports resource creation for Alibaba Cloud, AWS, Microsoft Azure, VMware, Google Cloud Platform, and many other cloud providers.

Write, Plan, and Create Infrastructure as Code

Terraform defines every resource type (hosts, OS, storage type, middleware, network VPC, SLB, DB, Cache, and so on), along with resource counts, specifications, and creation dependencies, in template configuration files. Based on the resource vendors' OpenAPI, it can create the defined list of resources with one click, and it also supports one-click destruction of those resources.
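For example, a fragment of such a template, trimmed down from the complete Alibaba Cloud example later in this article, looks roughly like this (the count and the exact arguments are only illustrative):

resource "alicloud_instance" "web" {
  count         = 2                                          # number of resources
  instance_type = "ecs.n2.small"                             # specification / instance type
  image_id      = "ubuntu_18_04_64_20G_alibase_20190624.vhd" # OS image
  vswitch_id    = alicloud_vswitch.vsw.id                    # dependency on a network resource
}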

By the way, HashiCorp's other products include:

  1. Vagrant Vagrant by HashiCorp

  2. Consul HashiCorp Consul - Connect and Secure Any Service

  3. Vault HashiCorp Vault - Manage Secrets & Protect Sensitive Data

  4. Nomad HashiCorp Nomad Enterprise

  5. Packer Packer by HashiCorp

First experience with Terraform

Installation

Install on CentOS 7 as follows:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
sudo yum -y install terraform

Verify the version information:

[root@host ~]# terraform version
Terraform v1.0.2
on linux_amd64

Get command line help

# Get help: see which subcommands and options Terraform supports
terraform -help

# View the help for a specific subcommand
terraform -help plan

# Enable command-line autocompletion
terraform -install-autocomplete

Create an Alibaba Cloud ECS instance

Prepare a sub-account: create a RAM sub-account that can only access cloud resources through the OpenAPI, and do not grant it full permissions; grant only the specific permissions needed, such as ECS, RDS, and SLB. It is recommended to store the authentication information in environment variables:

export ALICLOUD_ACCESS_KEY="********"
export ALICLOUD_SECRET_KEY="*************"
export ALICLOUD_REGION="cn-shanghai"
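
The provider itself can also be declared explicitly in the configuration; credentials are then still read from the environment variables above. A minimal sketch (the version constraint is only an example, matching the provider version that appears in the logs later in this article):

terraform {
  required_providers {
    alicloud = {
      source  = "hashicorp/alicloud"
      version = "~> 1.157"
    }
  }
}

provider "alicloud" {
  region = "cn-shanghai"
}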

Below is a test file, main.tf (it only has to be a .tf file). Its main job is to create a VPC, a vSwitch, a security group, and an ECS instance on Alibaba Cloud, and finally output the ECS public IP. The code is as follows:

resource "alicloud_vpc" "vpc" {
    name       = "tf_test_foo"
    cidr_block = "172.16.0.0/12"
}

resource "alicloud_vswitch" "vsw" {
    vpc_id            = alicloud_vpc.vpc.id
    cidr_block        = "172.16.0.0/21"
    availability_zone = "cn-shanghai-b"
}

resource "alicloud_security_group" "default" {
    name = "default"
    vpc_id = alicloud_vpc.vpc.id
}

resource "alicloud_security_group_rule" "allow_all_tcp" {
    type              = "ingress"
    ip_protocol       = "tcp"
    nic_type          = "intranet"
    policy            = "accept"
    port_range        = "1/65535"
    priority          = 1
    security_group_id = alicloud_security_group.default.id
    cidr_ip           = "0.0.0.0/0"
}

resource "alicloud_instance" "instance" {
    availability_zone = "cn-shanghai-b"
    security_groups = alicloud_security_group.default.*.id
    instance_type        = "ecs.n2.small"
    system_disk_category = "cloud_efficiency"
    image_id             = "ubuntu_18_04_64_20G_alibase_20190624.vhd"
    instance_name        = "test_foo"
    vswitch_id = alicloud_vswitch.vsw.id
    internet_max_bandwidth_out = 1
    password = "yourPassword"
}

output "public_ip" {
    value = alicloud_instance.instance.public_ip
}

Through a main.tf file (it only has to be a .tf file) we defined the ECS instance (image, instance type), the VPC (CIDR, VPC name), the security group, and so on. After running terraform init, terraform plan, and terraform apply, Terraform parses the resource configuration parameters, calls the Alibaba Cloud OpenAPI to validate and create the resources, and records the state of the whole creation in a terraform.tfstate file. From this file Terraform knows everything about the created resources; adjusting resource counts, changing specifications, and modifying instances all rely on this very important file. View the result:

$ terraform show

Infrastructure-as-code principles

We have now created infrastructure through code, and the created resources match our declaration file. This is a concrete implementation of infrastructure as code. Infrastructure as code is a way of building and managing dynamic infrastructure with new technology: it treats the infrastructure, the tools and services, and the management of the infrastructure itself as a software system, and adopts software engineering practices to manage system changes in a structured, safe way. There are four key principles of infrastructure as code:

  • Reproducibility: any element of the environment can easily be reproduced.

  • Uniformity: no matter when an environment is created, the configuration of each of its elements is exactly the same.

  • Quick feedback: changes can be made frequently and easily, and you quickly learn whether a change is correct.

  • Visibility: all changes to the environment should be easy to understand, auditable, and version controlled.

Programming with Terraform

If you run into problems while writing code, you can find the answers on the official website; it is by far the best learning resource. The documentation is at https://www.terraform.io/language. Let's look at an example of variable definitions:

#  Examples of lists 
variable "list_example" {
  description = "An example of a list in Terraform"
  type = "list"
  default = [1, 2, 3]
}

#  An example of a dictionary 
variable "map_example" {
  description = "An example of a map in Terraform"
  type = "map"
  default = {
      key1 = "value1"
      key2 = "value2"
      key3 = "value3"
  }
}

# If the type is not specified, the default is string
variable "server_port" {
  description = "The port the server will use for HTTP requests"
}

Next, let's look at a more complete example: a "Mad Libs" text generator. The project's directory layout is as follows:

$ tree -L 1 .
.
├── madlibs
├── madlibs.tf
├── madlibs.zip
├── templates
├── terraform.tfstate
├── terraform.tfstate.backup
└── terraform.tfvars

2 directories, 5 files

Let's look at the contents of the terraform.tfvars file:

[root@host ch03]# cat terraform.tfvars

words = {
nouns      = ["army", "panther", "walnuts", "sandwich", "Zeus", "banana", "cat", "jellyfish", "jigsaw", "violin", "milk", "sun"]
adjectives = ["bitter", "sticky", "thundering", "abundant", "chubby", "grumpy"]
verbs      = ["run", "dance", "love", "respect", "kicked", "baked"]
adverbs    = ["delicately", "beautifully", "quickly", "truthfully", "wearily"]
numbers    = [42, 27, 101, 73, -5, 0]
}

And the contents of madlibs.tf:

[root@host ch03]# cat madlibs.tf

terraform {
    required_version = ">= 0.15"
    required_providers {
      random = {
         source  = "hashicorp/random"
        version = "~> 3.0"
      }

      local = {
         source  = "hashicorp/local"
        version = "~> 2.0"
      }

      archive = {
         source  = "hashicorp/archive"
        version = "~> 2.0"
      }
    }
}

variable "words" {
    description = "A word pool to use for Mad Libs"
    type = object({
      nouns      = list(string),
      adjectives = list(string),
      verbs      = list(string),
      adverbs    = list(string),
      numbers    = list(number),
    })

    validation {
      condition     = length(var.words["nouns"]) >= 10
      error_message = "At least 10 nouns must be supplied."
    }
}

variable "num_files" {
    type        = number
    description = "(optional) describe your variable"
    default     = 100
}

locals {
   uppercase_words = { for k, v in var.words : k => [for s in v : upper(s)] }
}

resource "random_shuffle" "random_nouns" {
    count = var.num_files
    input = local.uppercase_words["nouns"]
}

resource "random_shuffle" "random_adjectives" {
    count = var.num_files
    input = local.uppercase_words["adjectives"]
}

resource "random_shuffle" "random_verbs" {
    count = var.num_files
    input = local.uppercase_words["verbs"]
}

resource "random_shuffle" "random_adverbs" {
    count = var.num_files
    input = local.uppercase_words["adverbs"]
}

resource "random_shuffle" "random_numbers" {
    count = var.num_files
    input = local.uppercase_words["numbers"]
}

locals {
    templates = tolist(fileset(path.module, "templates/*.txt"))
}

resource "local_file" "mad_libs" {
    count    = var.num_files
    filename = "madlibs/madlibs-${count.index}.txt"
    content = templatefile(element(local.templates, count.index),
      {
        nouns      = random_shuffle.random_nouns[count.index].result
        adjectives = random_shuffle.random_adjectives[count.index].result
        verbs      = random_shuffle.random_verbs[count.index].result
        adverbs    = random_shuffle.random_adverbs[count.index].result
        numbers    = random_shuffle.random_numbers[count.index].result
    })
}

data "archive_file" "mad_libs" {
    depends_on = [
      local_file.mad_libs
    ]
    type        = "zip"
    source_dir  = "${path.module}/madlibs"
    output_path = "${path.cwd}/madlibs.zip"

}
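
The contents of the templates/*.txt files consumed by templatefile are not shown in the original article. A hypothetical one-line template, just to illustrate how the lists passed to templatefile are interpolated (nouns, adjectives, verbs, adverbs, and numbers are the keys of the map passed in above):

The ${adjectives[0]} ${nouns[0]} ${verbs[0]} ${adverbs[0]} past ${numbers[0]} ${nouns[1]}s.

Each generated madlibs/madlibs-N.txt is simply this template rendered with one shuffled set of words.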

How do you reference the value of a variable? With var.<VARIABLE_NAME>. Note that Terraform does not support custom functions; we can only program with the roughly 100 built-in functions.
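
A small, self-contained sketch of variable references combined with a few of those built-in functions (the variable and its values are made up; drop it into an empty directory and run terraform apply, or explore the same expressions in terraform console):

variable "tags" {
  type    = list(string)
  default = ["web", "db", "cache"]
}

output "function_examples" {
  value = {
    upper_tags = [for t in var.tags : upper(t)]           # upper()
    tag_count  = length(var.tags)                         # length()
    joined     = join(",", var.tags)                      # join()
    first_tag  = format("first tag is %s", var.tags[0])   # format() plus list indexing
  }
}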

Repeat Yourself vs. Don't Repeat Yourself (DRY)

In software engineering, repeating yourself is discouraged: DRY is the norm. In reality, though, the Ctrl-C/Ctrl-V programming paradigm is everywhere (:P). Of the following two ways to structure Terraform code, which is better?

The first structure, a single shared code base, is optimal and follows the DRY principle; the second duplicates the code per environment in Ctrl-C/Ctrl-V fashion. For these two scenarios we give the following suggestions (see the sketch after the list):

  • When the environments have no differences, or only small differences, the shared (DRY) structure is recommended;

  • When the environments differ significantly, the duplicated structure may be the only realistic choice.
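
A sketch of the DRY structure: each environment shrinks to a thin module call with its own variables, while the shared logic is written once (the module path, variables, and instance types below are illustrative):

# environments/dev/main.tf -- hypothetical layout
module "app" {
  source        = "../../modules/app"   # shared code, written once
  environment   = "dev"
  instance_type = "ecs.n2.small"
}

# environments/prod/main.tf
module "app" {
  source        = "../../modules/app"
  environment   = "prod"
  instance_type = "ecs.n4.large"
}

The copy-paste structure, by contrast, duplicates the full resource definitions in every environment directory, which is only worth the cost when the environments genuinely diverge.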

Using workspaces to reuse code

For the same configuration files, the workspace feature lets us keep multiple state files. That means we do not need to copy and paste code to deploy to multiple environments. Each workspace has its own variables and environment information.

We have actually been using workspaces all along, even if we did not realize it. When terraform init is executed, Terraform creates a workspace named default and switches to it. You can verify this:

[root@host ch03]# terraform workspace list
* default

Multi-environment deployment: let's use Terraform's workspace feature for a multi-environment deployment. An example:

[root@host ch06]# tree .
.
├── environments
│   ├── dev.tfvars
│   └── prod.tfvars
├── main.tf
└── terraform.tfstate.d
  ├── dev
  │   ├── terraform.tfstate
  │   └── terraform.tfstate.backup
  └── prod
      └── terraform.tfstate

4 directories, 6 files

Look at the code (it also shows how to debug and verify code when no real resources are involved):

[root@host ch06]# cat main.tf

variable "region" {
description = "My Test Region"
type        = string
}

output "myworkspace" {
value = {
  region = var.region
  workspace = terraform.workspace
}
}

[root@host ch06]# cat environments/dev.tfvars
region = "cn-shanghai-dev"

[root@host ch06]# cat environments/prod.tfvars
region = "cn-shanghai-prod"

Let's switch to the dev workspace and then run the code:

[root@host ch06]# terraform workspace select dev
Switched to workspace "dev".

[root@host ch06]# terraform apply -var-file=./environments/dev.tfvars -auto-approve
No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

myworkspace = {
 "region" = "cn-shanghai-dev"
 "workspace" = "dev"
}

Switch to the prod workspace and verify the code:

[root@host ch06]# terraform workspace select prod

Switched to workspace "prod".

[root@host ch06]# terraform apply -var-file=./environments/prod.tfvars -auto-approve

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

myworkspace = {
 "region" = "cn-shanghai-prod"
 "workspace" = "prod"
}

What about terraform destroy? The same thing: you need to specify the variable file:

[root@host ch06]# terraform destroy -var-file=./environments/prod.tfvars -auto-approve

Changes to Outputs:
 - myworkspace = {
     - region    = "cn-shanghai-prod"
     - workspace = "prod"
  } -> null

You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.

Destroy complete! Resources: 0 destroyed.

Use output more to debug your code. Terraform's output is much like printf, print, or echo in other programming languages: it prints out the values we are interested in so we can verify them promptly.

# Commands for working with workspaces
## List the workspaces and show which one is current
[root@host ch06]# terraform workspace list
default
dev
* prod

## Create a workspace named uat
[root@host ch06]# terraform workspace new uat
Created and switched to workspace "uat"!

## Switch to the workspace named dev
[root@host ch06]# terraform workspace select dev
Switched to workspace "dev".

## Delete the workspace named uat
[root@host ch06]# terraform workspace delete uat
Deleted workspace "uat"!
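
Besides separate .tfvars files, the workspace name itself can drive the configuration through terraform.workspace. A minimal, self-contained sketch (no real resources involved; the settings are made up):

locals {
  instance_count = terraform.workspace == "prod" ? 3 : 1
  name_prefix    = "app-${terraform.workspace}"
}

output "workspace_settings" {
  value = {
    workspace      = terraform.workspace
    instance_count = local.instance_count
    name_prefix    = local.name_prefix
  }
}

Running terraform apply in the dev and prod workspaces then produces different settings from the same code.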

Multi-cloud deployment

The main idea is:

  1. In the providers.tf file, declare a provider for each cloud vendor;

  2. Operate each cloud vendor's resources through modules.

Next, let's look directly at the code structure :

[root@host part1_hybridcloud-lb]# tree .
.
├── bootstrap.sh
├── main.tf
├── outputs.tf
├── providers.tf
└── versions.tf

0 directories, 5 files

Let's look at the providers.tf file (more than one provider is declared in the code):

provider "aws" {
 profile = "<profile>"
 region  = "us-west-2"
}

provider "azurerm" {
 features {}
}

provider "google" {
 project = "<project_id>"
 region  = "us-east1"
}

provider "docker" {} #A

Now look at the main.tf file:

module "aws" {
 source = "terraform-in-action/vm/cloud/modules/aws" #A
 environment = {
   name             = "AWS" #B
   background_color = "orange" #B
}
}

module "azure" {
 source = "terraform-in-action/vm/cloud/modules/azure" #A
 environment = {
   name             = "Azure"
   background_color = "blue"
}
}

module "gcp" {
 source     = "terraform-in-action/vm/cloud/modules/gcp" #A
 environment = {
   name             = "GCP"
   background_color = "red"
}
}

module "loadbalancer" {
 source = "terraform-in-action/vm/cloud/modules/loadbalancer" #A
 addresses = [
   module.aws.network_address, #C
   module.azure.network_address, #C
   module.gcp.network_address, #C
]
}

Zero-downtime deployment (ZDD)

This section describes three approaches to achieving zero-downtime deployment:

  1. Terraform's create_before_destroy meta-attribute

  2. Blue/green deployment

  3. Pairing Terraform with Ansible

Setting the lifecycle

Without any lifecycle settings, Terraform behaves as follows by default: when certain attributes are modified (in particular force-new attributes such as the instance type, image ID, or user data) and apply is run again, the existing resource is destroyed before the replacement is created.

resource "aws_instance" "instance" {
ami = var.ami

instance_type = var.instance_type

user_data = <<-EOF
    #!/bin/bash
    mkdir -p /var/www && cd /var/www
    echo "App v${var.version}" >> index.html
    python3 -m http.server 80
    EOF
}

From the moment the old instance is destroyed until the new instance is fully available, the service cannot be used externally.

To avoid this, the lifecycle meta-argument lets us customize the resource lifecycle. A lifecycle nested block can appear on any resource. We can set the following three flags:

  1. create_before_destroy (bool): when set to true, the replacement resource is created before the old object is destroyed.

  2. prevent_destroy (bool): when set to true, Terraform rejects, with an explicit error, any plan that would destroy the infrastructure object associated with the resource.

  3. ignore_changes (list of attribute names): a list of attributes whose changes Terraform should ignore when planning updates (a short sketch of the last two flags follows this list).
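
A short sketch of the last two flags, reusing the same example resource (which attributes you protect or ignore is situation-specific; user_data here is only illustrative):

resource "aws_instance" "instance" {
    ami           = var.ami
    instance_type = var.instance_type

    lifecycle {
        prevent_destroy = true          # any plan that would destroy this resource is rejected
        ignore_changes  = [user_data]   # changes to user_data no longer trigger an update plan
    }
}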

create_before_destroy

In the following code, create_before_destroy = true is set:

resource "aws_instance" "instance" {
   ami = var.ami
   instance_type = "t3.micro"

   lifecycle {
       create_before_destroy = true
  }

   user_data = <<-EOF
       #!/bin/bash
       mkdir -p /var/www && cd /var/www
       echo "App v${var.version}" >> index.html
       python3 -m http.server 80
   EOF
}

When the above code is executed, the replacement instance is created first and the old instance is destroyed only afterwards.

create_before_destroy only takes effect on managed resources; it has no effect on data sources, for example. The author of 《Terraform Up and Running》 says of this option: "I do not use create_before_destroy as I have found it to be more trouble than it is worth."

Blue/green deployment

In a blue/green deployment we alternate between two production environments: one called blue and the other green. At any given time only one of them is live. A router, which can be a load balancer or a DNS resolver, directs traffic to the live environment. Whenever you want to release to production, you first deploy to the idle environment. Then, when you are ready, you switch the router from pointing at the live servers to pointing at the idle servers, which are already running the latest version of the software. This switch is called a cutover and can be done manually or automatically. Once the cutover is complete, the idle servers become the new live servers, and the formerly live servers become the idle ones.

Let's look at an example.

The code, green_blue.tf, is as follows:

provider "aws" {
   region = "us-west-2"
}

variable "production" {
   default = "green" //  Deploy  Green  Environmental Science 
}

module "base" {
   source = "terraform-in-action/aws/bluegreen/modules/base"
   production = var.production
}

module "green" {
   source = "terraform-in-action/aws/bluegreen/modules/autoscaling"
   app_version = "v1.0"
   label = "green"
   base = module.base
}

module "blue" {
   source = "terraform-in-action/aws/bluegreen/modules/autoscaling"
   app_version = "v2.0"
   label = "blue"
   base = module.base
}

output "lb_dns_name" {
   value = module.base.lb_dns_name
}

Blue/green cutover: once the blue environment is fully up, you can cut production over from green to blue. The updated green_blue.tf is as follows:

provider "aws" {
   region = "us-west-2"
}

variable "production" {
   default = "blue"
}

module "base" {
   source = "terraform-in-action/aws/bluegreen/modules/base"
   production = var.production
}

module "green" {
   source = "terraform-in-action/aws/bluegreen/modules/autoscaling"
   app_version = "v1.0"
   label = "green"
   base = module.base
}

module "blue" {
   source = "terraform-in-action/aws/bluegreen/modules/autoscaling"
   app_version = "v2.0"
   label = "blue"
   base = module.base
}

output "lb_dns_name" {
   value = module.base.lb_dns_name
}

Pairing Terraform with Ansible

We need to calm down and ask a question: is Terraform the right tool for this job? In many cases, the answer is no. For deploying applications onto VMs, configuration management tools are a better fit. So let the professional tools do what they are good at: Terraform focuses on infrastructure and rapid infrastructure delivery, while application deployment on top of it is not Terraform's strong suit.

Next, taking AWS as an example, Terraform is responsible for creating the infrastructure and Ansible is responsible for deploying the application on top of it.

The code is as follows :

provider "aws" {
 region  = "us-west-2"
}

resource "tls_private_key" "key" {
 algorithm = "RSA"
}

resource "local_file" "private_key" {
 filename          = "${path.module}/ansible-key.pem"
 sensitive_content = tls_private_key.key.private_key_pem
 file_permission   = "0400"
}

resource "aws_key_pair" "key_pair" {
 key_name   = "ansible-key"
 public_key = tls_private_key.key.public_key_openssh
}

data "aws_vpc" "default" {
 default = true
}

resource "aws_security_group" "allow_ssh" {
 vpc_id = data.aws_vpc.default.id

 ingress {
   from_port   = 22
   to_port     = 22
   protocol    = "tcp"
   cidr_blocks = ["0.0.0.0/0"]
}

 ingress {
   from_port   = 80
   to_port     = 80
   protocol    = "tcp"
   cidr_blocks = ["0.0.0.0/0"]
}

 egress {
   from_port   = 0
   to_port     = 0
   protocol    = "-1"
   cidr_blocks = ["0.0.0.0/0"]
}
}

data "aws_ami" "ubuntu" {
 most_recent = true

 filter {
   name   = "name"
   values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}

 owners = ["099720109477"]
}

resource "aws_instance" "ansible_server" {
 ami                    = data.aws_ami.ubuntu.id
 instance_type          = "t3.micro"
 vpc_security_group_ids = [aws_security_group.allow_ssh.id]
 key_name               = aws_key_pair.key_pair.key_name

 tags = {
   Name = "Ansible Server"
}

 provisioner "remote-exec" {
   inline = [
     "sudo apt update -y",
     "sudo apt install -y software-properties-common",
     "sudo apt-add-repository --yes --update ppa:ansible/ansible",
     "sudo apt install -y ansible"
  ]

   connection {
     type        = "ssh"
     user        = "ubuntu"
     host        = self.public_ip
     private_key = tls_private_key.key.private_key_pem
  }
}

 provisioner "local-exec" {
   command = "ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 -i '${self.public_ip},', app.yml"
}
}

output "public_ip" {
value = aws_instance.ansible_server.public_ip
}

output "ansible_command" {
   value = "ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 -i '${aws_instance.ansible_server.public_ip},', app.yml"
}

The content of app.yml is:

---
- name: Install Nginx
  hosts: all
  become: true
  tasks:
    - name: Install Nginx
      apt:                 # the target host is Ubuntu, so the apt module is used here
        name: nginx
        state: present

    - name: Add index page
      template:
        src: index.html
        dest: /var/www/html/index.html

    - name: Start Nginx
      service:
        name: nginx
        state: started

Execute the above code :

$ terraform init && terraform apply -auto-approve

...

aws_instance.ansible_server: Creation complete after 2m7s

[id=i-06774a7635d4581ac]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

ansible_command = ansible-playbook -u ubuntu --key-file ansible-key.pem -T
300 -i '54.245.143.100,', app.yml
public_ip = 54.245.143.100

When you want to update the application (for example, to cut over between versions), just run the ansible-playbook command above again:

$ ansible-playbook \
-u ubuntu \
--key-file ansible-key.pem \
-T 300 \
-i '54.245.143.100,' app.yml

How to write a Provider

What if an existing provider does not meet your needs, or the provider you want does not exist at all? Then write one yourself. That way we can manage our remote API with Terraform, in infrastructure-as-code fashion. In other words, as long as there is a RESTful API, in theory we can manage it with Terraform. This section describes how to write a provider. First, let's look at what Terraform's workflow looks like.

How Terraform interacts with a provider

The Terraform website has very detailed documentation on developing plugins: https://www.terraform.io/plugin. Two things are needed:

  1. A remote (or upstream) API; it can be written in any language.

  2. A client SDK that operates this API. Because providers are written in Go, there should also be a Go client SDK.

First, a RESTful API

Let's look at the directory structure of the code. The code comes from chapter 11 of 《Terraform in Action》 and has been modified by us: the original used AWS Lambda function compute, and the Lambda-related code has been removed here so that it can run in any environment. The code uses an ORM, the jinzhu ORM framework developed by a Chinese developer (official site: GORM - The fantastic ORM library for Golang, aims to be developer friendly). The web framework used is go-gin (official site: Gin Web Framework, gin-gonic.com). Now let's look at the directory structure and the code:

* my-go-petstore git:(dev) * tree
.
├── README.md
├── action
│ └── pets
│   ├── create.go
│   ├── delete.go
│   ├── get.go
│   ├── list.go
│   └── update.go
├── go.mod
├── go.sum
├── main.go
├── model
│ └── pet
│   ├── model.go
│   └── orm.go
└── terraform-petstore

4 directories, 12 files

The codebase is fairly small and follows the classic MVC pattern. Let's first look at the model definition.

Model definition

// model/pet/model.go

package pet

type Pet struct {
       ID   string `gorm:"primary_key" json:"id"`
       Name  string `json:"name"`
       Species string `json:"species"`
       Age   int  `json:"age"`
}

Service Definition

// model/pet/orm.go

package pet

import (
       "fmt"

       "github.com/jinzhu/gorm"
)

//Create creates a pet in the database
func Create(db *gorm.DB, pet *Pet) (string, error) {
       err := db.Create(pet).Error
       if err != nil {
               return "", err
      }
       return pet.ID, nil
}

//FindById returns a pet with a given id, or nil if not found
func FindById(db *gorm.DB, id string) (*Pet, error) {
       var pet Pet
       err := db.Find(&pet, &Pet{ID: id}).Error
       if err != nil {
               return nil, err
      }
       return &pet, nil
}

//FindByName returns a pet with a given name, or nil if not found
func FindByName(db *gorm.DB, name string) (*Pet, error) {
       var pet Pet
       err := db.Find(&pet, &Pet{Name: name}).Error
       if err != nil {
               return nil, err
      }
       return &pet, nil
}

//List returns all Pets in database, with a given limit
func List(db *gorm.DB, limit uint) (*[]Pet, error) {
       var pets []Pet
       err := db.Find(&pets).Limit(limit).Error
       if err != nil {
               return nil, err
      }
       return &pets, nil
}

//Update updates a pet in the database
func Update(db *gorm.DB, pet *Pet) error {
       err := db.Save(pet).Error
       return err
}

//Delete deletes a pet in the database
func Delete(db *gorm.DB, id string) error {
       pet, err := FindById(db, id)
       if err != nil {
               fmt.Printf("1:%v", err)
               return err
      }
       err = db.Delete(pet).Error
       fmt.Printf("2:%v", err)
       return err
}

Controller definition

  • Get (view a single resource)

// action/pets/get.go

package pets

import (
       "github.com/jinzhu/gorm"
       "github.com/TyunTech/terraform-petstore/model/pet"
)

//GetPetRequest request struct
type GetPetRequest struct {
       ID string
}

//GetPet returns a pet from database
func GetPet(db *gorm.DB, req *GetPetRequest) (*pet.Pet, error) {
       p, err := pet.FindById(db, req.ID)
       res := p
       return res, err
}
  • List (list all resources)

// action/pets/list.go

package pets

import (
       "github.com/jinzhu/gorm"
       "github.com/TyunTech/terraform-petstore/model/pet"
)

//ListPetRequest request struct
type ListPetsRequest struct {
       Limit uint
}

//ListPetResponse response struct
type ListPetsResponse struct {
       Items *[]pet.Pet `json:"items"`
}

//ListPets returns a list of pets from database
func ListPets(db *gorm.DB, req *ListPetsRequest) (*ListPetsResponse, error) {
       pets, err := pet.List(db, req.Limit)
       res := &ListPetsResponse{Items: pets}
       return res, err
}
  • Create (create a resource)

// action/pets/create.go

package pets

import (
       "github.com/google/uuid"
       "github.com/jinzhu/gorm"
       "github.com/TyunTech/terraform-petstore/model/pet"
)

//CreatePetRequest request struct
type CreatePetRequest struct {
       Name  string `json:"name" binding:"required"`
       Species string `json:"species" binding:"required"`
       Age   int  `json:"age" binding:"required"`
}

//CreatePet creates a pet in database
func CreatePet(db *gorm.DB, req *CreatePetRequest) (*pet.Pet, error) {
       uuid, _ := uuid.NewRandom()
       newPet := &pet.Pet{
               ID:   uuid.String(),
               Name:  req.Name,
               Species: req.Species,
               Age:   req.Age,
      }
       id, err := pet.Create(db, newPet)
       p, err := pet.FindById(db, id)
       res := p
       return res, err
}
  • Update (update a resource)

// action/pets/update.go

package pets

import (
       "fmt"

       "github.com/jinzhu/gorm"
       "github.com/TyunTech/terraform-petstore/model/pet"
)

//UpdatePetRequest request struct
type UpdatePetRequest struct {
       ID   string
       Name  string `json:"name"`
       Species string `json:"species"`
       Age   int  `json:"age"`
}

//UpdatePet updates a pet from database
func UpdatePet(db *gorm.DB, req *UpdatePetRequest) (*pet.Pet, error) {
       p, err := pet.FindById(db, req.ID)
       if err != nil {
               return nil, err
      }
   
       if len(req.Name) > 0 {
               p.Name = req.Name
      }
       if req.Age > 0 {
               p.Age = req.Age
      }
       if len(req.Species) > 0 {
               p.Species = req.Species
      }
       fmt.Printf("requested: %v", p)
       err = pet.Update(db, p)
       if err != nil {
               return nil, err
      }
       p, err = pet.FindById(db, req.ID)
       fmt.Printf("new: %v", p)
       res := p
       return res, err
}
  • Delete (delete a resource)

// action/pets/delete.go

package pets

import (
       "github.com/jinzhu/gorm"
       "github.com/TyunTech/terraform-petstore/model/pet"
)

//DeletePetRequest request struct
type DeletePetRequest struct {
       ID string
}

//DeletePet deletes a pet from database
func DeletePet(db *gorm.DB, req *DeletePetRequest) (error) {
       err := pet.Delete(db, req.ID)
       return err
}
  • main (entry point)

In the code below, we removed redundant comments and the AWS Lambda-related code so that it can run in any environment.

package main

import (
       "fmt"
       "net/http"
       "os"
       "strconv"

       "github.com/gin-gonic/gin"
       "github.com/jinzhu/gorm"
       _ "github.com/jinzhu/gorm/dialects/mysql"
       "github.com/TyunTech/terraform-petstore/action/pets"
       "github.com/TyunTech/terraform-petstore/model/pet"
)

var db *gorm.DB

func init() {
       initializeRDSConn()
       validateRDS()
}

func initializeRDSConn() {
       user := os.Getenv("rds_user")
       password := os.Getenv("rds_password")
       host := os.Getenv("rds_host")
       port := os.Getenv("rds_port")
       database := os.Getenv("rds_database")

       dsn := fmt.Sprintf("%s:%s@tcp(%s:%s)/%s", user, password, host, port, database)
       var err error
       db, err = gorm.Open("mysql", dsn)
       if err != nil {
               fmt.Printf("%s", err)
      }
}

func validateRDS() {
       //If the pets table does not already exist, create it
       if !db.HasTable("pets") {
               db.CreateTable(&pet.Pet{})
      }
}

func optionsPetHandler(c *gin.Context) {
       c.Header("Access-Control-Allow-Origin", "*")
       c.Header("Access-Control-Allow-Methods", "GET, POST, DELETE")
       c.Header("Access-Control-Allow-Headers", "origin, content-type, accept")
}

func main() {
       r := gin.Default()

       r.POST("/api/pets", createPetHandler)
       r.GET("/api/pets/:id", getPetHandler)
       r.GET("/api/pets", listPetsHandler)
       r.PATCH("/api/pets/:id", updatePetHandler)
       r.DELETE("/api/pets/:id", deletePetHandler)
       r.OPTIONS("/api/pets", optionsPetHandler)
       r.OPTIONS("/api/pets/:id", optionsPetHandler)

       r.Run(":8000")
}

func createPetHandler(c *gin.Context) {
       c.Header("Access-Control-Allow-Origin", "*")
       var req pets.CreatePetRequest
       if err := c.ShouldBindJSON(&req); err != nil {
               c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
               return
      }

       res, err := pets.CreatePet(db, &req)
       if err != nil {
               c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
               return
      }

       c.JSON(http.StatusOK, res)
       return
}

func listPetsHandler(c *gin.Context) {
       c.Header("Access-Control-Allow-Origin", "*")
       limit := 10
       if c.Query("limit") != "" {
               newLimit, err := strconv.Atoi(c.Query("limit"))
               if err != nil {
                       limit = 10
              } else {
                       limit = newLimit
              }
      }
       if limit > 50 {
               limit = 50
      }
       req := pets.ListPetsRequest{Limit: uint(limit)}
       res, _ := pets.ListPets(db, &req)
       c.JSON(http.StatusOK, res)
}

func getPetHandler(c *gin.Context) {
       c.Header("Access-Control-Allow-Origin", "*")
       id := c.Param("id")
       req := pets.GetPetRequest{ID: id}
       res, _ := pets.GetPet(db, &req)
       if res == nil {
               c.JSON(http.StatusNotFound, res)
               return
      }
       c.JSON(http.StatusOK, res)
}

func updatePetHandler(c *gin.Context) {
       c.Header("Access-Control-Allow-Origin", "*")
       var req pets.UpdatePetRequest
       if err := c.ShouldBindJSON(&req); err != nil {
               c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
               return
      }

       id := c.Param("id")
       req.ID = id
       res, err := pets.UpdatePet(db, &req)
       if err != nil {
               c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
               return
      }
       c.JSON(http.StatusOK, res)
       return
}

func deletePetHandler(c *gin.Context) {
       c.Header("Access-Control-Allow-Origin", "*")
       id := c.Param("id")
       req := pets.DeletePetRequest{ID: id}
       err := pets.DeletePet(db, &req)
       if err != nil {
               c.Status(http.StatusNotFound)
               return
      }
       c.Status(http.StatusOK)
}


First, prepare the database account and password required by the code. These are provided as environment variables:

export rds_user=pet
export rds_password=123456
export rds_host=127.0.0.1
export rds_port=3306
export rds_database=pets

Then you can run the code :

* my-go-petstore git:(dev) * go run .
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST   /api/pets                 --> main.createPetHandler (3 handlers)
[GIN-debug] GET     /api/pets/:id             --> main.getPetHandler (3 handlers)
[GIN-debug] GET     /api/pets                 --> main.listPetsHandler (3 handlers)
[GIN-debug] PATCH   /api/pets/:id             --> main.updatePetHandler (3 handlers)
[GIN-debug] DELETE /api/pets/:id             --> main.deletePetHandler (3 handlers)
[GIN-debug] OPTIONS /api/pets                 --> main.optionsPetHandler (3 handlers)
[GIN-debug] OPTIONS /api/pets/:id             --> main.optionsPetHandler (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :8000

As you can see, the service is running on port 8000. We use the httpie command to test whether the API is available:

#  Create the first test data 

(venv37) * my-go-petstore git:(dev) * http POST :8000/api/pets name=Jerry species=mouse age:=1
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 86
Content-Type: application/json; charset=utf-8
Date: Sun, 13 Mar 2022 03:44:22 GMT

{
   "age": 1,
   "id": "9b24b16d-8b09-47e2-9638-16775ccb8d8a",
   "name": "Jerry",
   "species": "mouse"
}

#  Create a second test data 
(venv37) * my-go-petstore git:(dev) * http POST :8000/api/pets name=Tommy species=cat age:=2  

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Length: 84
Content-Type: application/json; charset=utf-8
Date: Sun, 13 Mar 2022 03:44:40 GMT

{
   "age": 2,
   "id": "81f04745-c17e-4f38-a3dd-b6e0741f207b",
   "name": "Tommy",
   "species": "cat"
}

View the data :

(venv37) * my-go-petstore git:(dev) * http -b :8000/api/pets
{
   "items": [
      {
           "age": 2,
           "id": "81f04745-c17e-4f38-a3dd-b6e0741f207b",
           "name": "Tommy",
           "species": "cat"
      },
      {
           "age": 1,
           "id": "9b24b16d-8b09-47e2-9638-16775ccb8d8a",
           "name": "Jerry",
           "species": "mouse"
      }
  ]
}

Check some data in the database :

mysql> use pets;
mysql> select * from pets;
+--------------------------------------+-------+---------+------+
| id                                   | name  | species | age  |
+--------------------------------------+-------+---------+------+
| 81f04745-c17e-4f38-a3dd-b6e0741f207b | Tommy | cat     |    2 |
| 9b24b16d-8b09-47e2-9638-16775ccb8d8a | Jerry | mouse   |    1 |
+--------------------------------------+-------+---------+------+
2 rows in set (0.00 sec)

To sum up, that is the general flow of building and verifying the remote API.

Second, a client SDK

What is the client? It is what operates the API and performs the common CRUD operations. Let's look at its code structure:

(venv37) * petstore-go-client git:(dev) tree .
.
├── README.md
├── examples
│   └── pets
│       └── main.go
├── go.mod
├── go.sum
├── openapi.md
├── openapi.yaml
├── pets.go
├── petstore.go
├── type_helpers.go
└── validations.go

2 directories, 10 files

Code structure of the provider

The provider code is written in the standard CRUD form, so we can follow the usual routine. Let's first look at the directory structure:

$ ls
dist example go.mod go.sum main.go Makefile petstore terraform-provider-petstore

$ tree .
.
├── dist
│   └── linux_amd64
│       └── terraform-provider-petstore
├── example
│   └── main.tf
├── go.mod
├── go.sum
├── main.go
├── Makefile
├── petstore
│   ├── provider.go
│   ├── provider_test.go
│   ├── resource_ps_pet.go
│   └── resource_ps_pet_test.go
└── terraform-provider-petstore

4 directories, 11 files

The purposes of the key files above are as follows:

  • main.go: the provider's entry point, mostly boilerplate code;

  • petstore/provider.go: contains the provider definition, the resource map, and initialization of the shared configuration object;

  • petstore/provider_test.go: test file for the provider;

  • petstore/resource_ps_pet.go: defines the CRUD operations for managing the pet resource;

  • petstore/resource_ps_pet_test.go: test file for the pet resource.

Look at the four key functions .

Create

func resourcePSPetCreate(d *schema.ResourceData, meta interface{}) error {
       conn := meta.(*sdk.Client)
       options := sdk.PetCreateOptions{
               Name:    d.Get("name").(string),
               Species: d.Get("species").(string),
               Age:     d.Get("age").(int),
      }

       pet, err := conn.Pets.Create(options)
       if err != nil {
               return err
      }

       d.SetId(pet.ID)
       resourcePSPetRead(d, meta)

       return nil
}

Read

func resourcePSPetRead(d *schema.ResourceData, meta interface{}) error {
       conn := meta.(*sdk.Client)
       pet, err := conn.Pets.Read(d.Id())
       if err != nil {
               return err
      }

       d.Set("name", pet.Name)
       d.Set("species", pet.Species)
       d.Set("age", pet.Age)

       return nil
}

Update

func resourcePSPetUpdate(d *schema.ResourceData, meta interface{}) error {
       conn := meta.(*sdk.Client)
       options := sdk.PetUpdateOptions{}

       if d.HasChange("name") {
               options.Name = d.Get("name").(string)
      }

       if d.HasChange("age") {
               options.Age = d.Get("age").(int)
      }

       conn.Pets.Update(d.Id(), options)
       return resourcePSPetRead(d, meta)
}

Delete

func resourcePSPetDelete(d *schema.ResourceData, meta interface{}) error {
       conn := meta.(*sdk.Client)
       conn.Pets.Delete(d.Id())
       return nil
}

Having introduced these methods, when are they called? They are registered on the resource schema and invoked by Terraform during refresh, plan, and apply.

  After all of the above work is done, we can build the provider binary and interact with the remote API. If local testing goes well, we can publish the custom provider to the Terraform Registry for anyone who needs it.

Publish your own Provider

GitHub Actions is used to release the code.

  The release takes about ten minutes to complete; binaries for each target platform are generated and can be downloaded per platform.

Once the release is complete, the provider appears in the Terraform Registry.

Some points to pay attention to when publishing a provider:

  1. Each release needs binaries built automatically for the various platforms; this is mainly done with a .goreleaser.yml file, shown below:

    # Visit https://goreleaser.com for documentation on how to customize this
    # behavior.
    before:
      hooks:
        # this is just an example and not a requirement for provider building/publishing
        - go mod tidy
    builds:
      - env:
          # goreleaser does not work with CGO, it could also complicate
          # usage by users in CI/CD systems like Terraform Cloud where
          # they are unable to install libraries.
          - CGO_ENABLED=0
        mod_timestamp: '{{ .CommitTimestamp }}'
        flags:
          - -trimpath
        ldflags:
          - '-s -w -X main.version={{.Version}} -X main.commit={{.Commit}}'
        goos:
          - freebsd
          - windows
          - linux
          - darwin
        goarch:
          - amd64
          - '386'
          - arm
          - arm64
        ignore:
          - goos: darwin
            goarch: '386'
        binary: '{{ .ProjectName }}_v{{ .Version }}'
    archives:
      - format: zip
        name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}'
    checksum:
      name_template: '{{ .ProjectName }}_{{ .Version }}_SHA256SUMS'
      algorithm: sha256
    signs:
      - artifacts: checksum
        args:
          # if you are using this in a GitHub action or some other automated pipeline, you
          # need to pass the batch flag to indicate it is not interactive.
          - "--batch"
          - "--local-user"
          - "{{ .Env.GPG_FINGERPRINT }}" # set this environment variable for your signing key
          - "--output"
          - "${signature}"
          - "--detach-sign"
          - "${artifact}"
    release:
      # Visit your project's GitHub Releases page to publish this release.
      draft: false
    changelog:
      skip: true
  2. Every time we push code with a tag, GitHub Actions automatically builds it and constructs the release files from the tag information;

  3. Generate a GPG key pair; the relevant commands are as follows:

    # Generate a GPG key pair
    $ gpg --full-generate-key
    
    # View GPG key information
    gpg --list-secret-keys --keyid-format=long
    
    sec   rsa4096/C15EAAAAAAAAAAAA 2022-04-06 [SC] # note this key ID: C15EAAAAAAAAAAAA
        274425A57102378E4AAAAAAAAAAAAAAAAAAAAAAA
    uid                 [ultimate] Laven Liu <@gmail.com>
    ssb   rsa4096/2BAAAAAAAAAAAAAA 2022-04-06 [E]
    
    # View the GPG private key
    gpg --armor --export-secret-keys "C15EAAAAAAAAAAAA"
    
    # View the GPG public key
    gpg --armor --export "C15EAAAAAAAAAAAA"
  4. Configure GitHub Actions; this step mainly consists of making the GPG keys available to the workflow.

How to use it

Usage instructions can be found on the provider's page in the Registry.

Prepare the configuration file main.tf

terraform {
  required_providers {
    petstore = {
      source  = "TyunTech/petstore"
      version = "1.0.1"
    }
  }
}

provider "petstore" {
 address = "http://localhost:8000"
}

resource "petstore_pet" "my_pet" {
 name    = "SnowBall"
 species = "cat"
 age     = 3
}

First, run terraform init to initialize:

(venv37) * ch11 terraform init
......
Terraform has been successfully initialized!

Then run terraform apply:

(venv37) * ch11 terraform apply -auto-approve

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
 + create

Terraform will perform the following actions:

 # petstore_pet.my_pet will be created
 + resource "petstore_pet" "my_pet" {
     + age     = 3
     + id      = (known after apply)
     + name    = "SnowBall"
     + species = "cat"
  }

Plan: 1 to add, 0 to change, 0 to destroy.
petstore_pet.my_pet: Creating...
petstore_pet.my_pet: Creation complete after 0s [id=96bcf678-231f-449a-baf1-a01d2c7ecb9b]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Did it really create the resource? Let's check the database:

mysql> use pets
mysql> select * from pets;
+--------------------------------------+----------+---------+------+
| id                                   | name     | species | age  |
+--------------------------------------+----------+---------+------+
| 81f04745-c17e-4f38-a3dd-b6e0741f207b | Tommy    | cat     |    2 |
| 96bcf678-231f-449a-baf1-a01d2c7ecb9b | SnowBall | cat     |    3 | -- <-  Created the record 
| 9b24b16d-8b09-47e2-9638-16775ccb8d8a | Jerry    | mouse   |    1 |
+--------------------------------------+----------+---------+------+
3 rows in set (0.00 sec)

Change SnowBall's age to 7, apply again, and see whether the data in the database changes:

(venv37) * ch11 terraform apply -auto-approve
petstore_pet.my_pet: Refreshing state... [id=96bcf678-231f-449a-baf1-a01d2c7ecb9b]

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
~ update in-place

Terraform will perform the following actions:

 # petstore_pet.my_pet will be updated in-place
~ resource "petstore_pet" "my_pet" {
    ~ age     = 3 -> 7
      id      = "96bcf678-231f-449a-baf1-a01d2c7ecb9b"
      name    = "SnowBall"
       # (1 unchanged attribute hidden)
  }

Plan: 0 to add, 1 to change, 0 to destroy.
petstore_pet.my_pet: Modifying... [id=96bcf678-231f-449a-baf1-a01d2c7ecb9b]
petstore_pet.my_pet: Modifications complete after 0s [id=96bcf678-231f-449a-baf1-a01d2c7ecb9b]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Verify the database again :

mysql> select * from pets;
+--------------------------------------+----------+---------+------+
| id                                   | name     | species | age  |
+--------------------------------------+----------+---------+------+
| 81f04745-c17e-4f38-a3dd-b6e0741f207b | Tommy    | cat     |    2 |
| 9b24b16d-8b09-47e2-9638-16775ccb8d8a | Jerry    | mouse   |    1 |
| a159cd59-0a4f-4fdf-9ea7-fda2a59f5c9e | snowball | cat     |    7 |
+--------------------------------------+----------+---------+------+
3 rows in set (0.00 sec)

Delete data

(venv37) * ch11 terraform destroy

petstore_pet.my_pet: Refreshing state... [id=a159cd59-0a4f-4fdf-9ea7-fda2a59f5c9e]

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
 - destroy

Terraform will perform the following actions:

 # petstore_pet.my_pet will be destroyed
 - resource "petstore_pet" "my_pet" {
     - age     = 7 -> null
     - id      = "a159cd59-0a4f-4fdf-9ea7-fda2a59f5c9e" -> null
     - name    = "snowball" -> null
     - species = "cat" -> null
  }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value: yes #  Input  yes

petstore_pet.my_pet: Destroying... [id=a159cd59-0a4f-4fdf-9ea7-fda2a59f5c9e]
petstore_pet.my_pet: Destruction complete after 0s

Destroy complete! Resources: 1 destroyed.

Verify whether the record still exists in the database:

mysql> select * from pets;
+--------------------------------------+-------+---------+------+
| id                                   | name  | species | age  |
+--------------------------------------+-------+---------+------+
| 81f04745-c17e-4f38-a3dd-b6e0741f207b | Tommy | cat     |    2 |
| 9b24b16d-8b09-47e2-9638-16775ccb8d8a | Jerry | mouse   |    1 |
+--------------------------------------+-------+---------+------+

2 rows in set (0.00 sec)

Common modules

random

resource "random_string" "random" {
 length = 16
}

output "random"{
   value =random_string.random.result
}

#  Output :
Outputs:

random = "BQa7LGq4RtDtCv)&"

local_file

resource "local_file" "myfile" {
 content = "This is my text"
 filename = "../mytextfile.txt"
}

archive

data "archive_file" "backup" {
 type       = "zip"
 source_file = "../mytextfile.txt"
 output_path = "${path.module}/archives/backup.zip"
}

Troubleshooting

What should you do when a plan fails? Look at the logs. For more detail, set the TF_LOG environment variable to enable trace-level logging, for example export TF_LOG=trace (TF_LOG_PATH can additionally be set to write the log to a file). How do you turn logging off? Set TF_LOG to an empty value.

# Enable verbose logging
export TF_LOG=trace

# Turn logging off
export TF_LOG=

Re-run a terraform command and you will see output like the following:

2022-03-04T16:37:54.239+0800 [INFO] Terraform version: 1.1.6
2022-03-04T16:37:54.240+0800 [INFO] Go runtime version: go1.17.2
2022-03-04T16:37:54.240+0800 [INFO] CLI args: []string{"terraform", "init"}
2022-03-04T16:37:54.240+0800 [TRACE] Stdout is a terminal of width 135
2022-03-04T16:37:54.240+0800 [TRACE] Stderr is a terminal of width 135
2022-03-04T16:37:54.240+0800 [TRACE] Stdin is a terminal
2022-03-04T16:37:54.240+0800 [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2022-03-04T16:37:54.240+0800 [INFO] Loading CLI configuration from /root/.terraformrc
2022-03-04T16:37:54.240+0800 [DEBUG] checking for credentials in "/root/.terraform.d/plugins"

......

Initializing the backend...
2022-03-04T16:37:54.247+0800 [TRACE] Meta.Backend: no config given or present on disk, so returning nil config
2022-03-04T16:37:54.247+0800 [TRACE] Meta.Backend: backend has not previously been initialized in this working directory

......

2022-03-04T16:37:54.252+0800 [TRACE] backend/local: state manager for workspace "default" will:
- read initial snapshot from terraform.tfstate
- write new snapshots to terraform.tfstate
- create any backup at terraform.tfstate.backup
2022-03-04T16:37:54.252+0800 [TRACE] statemgr.Filesystem: reading initial snapshot from terraform.tfstate
2022-03-04T16:37:54.252+0800 [TRACE] statemgr.Filesystem: snapshot file has nil snapshot, but that's okay
2022-03-04T16:37:54.252+0800 [TRACE] statemgr.Filesystem: read nil snapshot

Initializing provider plugins...
- Finding hashicorp/alicloud versions matching "1.157.0"...
2022-03-04T16:37:54.252+0800 [DEBUG] Service discovery for registry.terraform.io at https://registry.terraform.io/.well-known/terraform.json

Recommended learning materials

Much of the content of this article is adapted from the book 《Terraform in Action》, which is excellent and well worth reading and practicing. There are other fine references as well; a list of resources follows.

Recommended websites:

  1. https://lonegunmanb.github.io/introduction-terraform/. Recommended reading: the Terraform command-line chapter and the Terraform module chapter.
  2. https://help.aliyun.com/product/95817.html?spm=a2cls.b92374736.0.0.267599deCIHp3f — if you use Alibaba Cloud, you can refer to this help documentation.

Recommended books:

  1. 《Terraform in Action》

  2. 《Terraform Up and Running》
