Randomness In Go
This post discusses ways to generate random values in Go using the math/rand package or crypto/rand. Both get you random values; which one to use depends on your need for true randomness versus performance. Let's start with the common math/rand package.

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	for i := 0; i < 5; i++ {
		fmt.Println(rand.Intn(50))
	}
}
```

```
$ go run gotest.go
31
37
47
9
31
$ go run gotest.go
31
37
47
9
31
```

As you can see from the output above, every run produces the same sequence because of the default seeding source. Now, let's use a different seed for our random sequence generation.
```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	for i := 0; i < 5; i++ {
		fmt.Println(r.Intn(50))
	}
}
```

```
$ go run gotest.go
44
17
41
23
5
$ go run gotest.go
18
7
37
17
13
```

The package documentation explains the difference:

```go
// Random numbers are generated by a Source. Top-level functions, such as
// Float64 and Int, use a default shared Source that produces a deterministic
// sequence of values each time a program is run. Use the Seed function to
// initialize the default Source if different behavior is required for each run.
// The default Source is safe for concurrent use by multiple goroutines, but
// Sources created by NewSource are not.
```

What we simply did was provide new seeding via a new source. Please note that this source is not safe for concurrent use, unlike the globally shared source.
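If you do need to share a seeded source across goroutines, one common approach (a sketch of my own, not from the original post) is to guard the *rand.Rand with a mutex:

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// lockedRand wraps a non-thread-safe *rand.Rand with a mutex so that
// multiple goroutines can draw numbers from the same seeded source.
type lockedRand struct {
	mu sync.Mutex
	r  *rand.Rand
}

func (l *lockedRand) Intn(n int) int {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.r.Intn(n)
}

func main() {
	lr := &lockedRand{r: rand.New(rand.NewSource(time.Now().UnixNano()))}

	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(lr.Intn(50))
		}()
	}
	wg.Wait()
}
```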
Now, to be truly random, you can use crypto/rand instead, at the cost of some performance.

Crypto Rand
```go
package main

import (
	crypto_rand "crypto/rand"
	"encoding/base64"
	"fmt"
)

func GenerateRandomBytes(n int) ([]byte, error) {
	b := make([]byte, n)
	if _, err := crypto_rand.Read(b); err != nil {
		return nil, err
	}
	return b, nil
}

func GenerateRandomString(n int) (string, error) {
	b, err := GenerateRandomBytes(n)
	if err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(b), nil
}

func main() {
	for i := 30; i < 35; i++ {
		if s, err := GenerateRandomString(i); err == nil {
			fmt.Println(s)
		}
	}
}
```

What we are doing here is very simple: we are no longer using the clock as a seed. crypto/rand reads from the operating system's cryptographically secure random source.
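If you need a uniformly random integer rather than a random string, crypto/rand also works with math/big. A minimal sketch (the upper bound of 50 just mirrors the earlier examples):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

func main() {
	// rand.Int returns a uniform random value in [0, max), read from crypto/rand's Reader.
	n, err := rand.Int(rand.Reader, big.NewInt(50))
	if err != nil {
		panic(err)
	}
	fmt.Println(n.Int64())
}
```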
-
Go Dependency Management And Dep
You may be aware of the “official experiment” for a Go dependency management tool called dep, which has since been replaced by Go modules (go mod). Let's explore the tool through its usage in an open source project.
Gopkg.toml is where you define your package dependencies, while Gopkg.lock contains a snapshot of your project dependencies after evaluating Gopkg.toml, as well as some metadata. Here is an example Gopkg.toml:

```toml
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
# for detailed Gopkg.toml documentation.

required = [
  ...
  "knative.dev/pkg/codegen/cmd/injection-gen",
  # TODO(#4549): Drop this when we drop our patches.
  "k8s.io/kubernetes/pkg/version",
  "knative.dev/caching/pkg/apis/caching",
  # For cluster management in performance testing.
  "knative.dev/pkg/testutils/clustermanager/perf-tests",
  "knative.dev/test-infra/scripts",
  "knative.dev/test-infra/tools/dep-collector",
  # For load testing.
  "github.com/tsenart/vegeta"
]

[[constraint]]
  name = "github.com/tsenart/vegeta"
  branch = "master"

[[override]]
  name = "gopkg.in/yaml.v2"
  version = "v2.2.4"

...

[[override]]
  name = "github.com/google/mako"
  version = "v0.1.0"

[[override]]
  name = "go.uber.org/zap"
  revision = "67bc79d13d155c02fd008f721863ff8cc5f30659"

...

[[constraint]]
  name = "github.com/jetstack/cert-manager"
  version = "v0.12.0"

...

[[override]]
  name = "k8s.io/api"
  version = "kubernetes-1.16.4"

...

[[override]]
  name = "k8s.io/kube-openapi"
  # This is the version at which k8s.io/apiserver depends on this at its 1.16.4 tag.
  revision = "743ec37842bffe49dd4221d9026f30fb1d5adbc4"

...

# Added for the custom-metrics-apiserver specifically
[[override]]
  name = "github.com/kubernetes-incubator/custom-metrics-apiserver"
  revision = "3d9be26a50eb64531fc40eb31a5f3e6720956dc6"

[[override]]
  name = "bitbucket.org/ww/goautoneg"
  source = "github.com/munnerz/goautoneg"

[prune]
  go-tests = true
  unused-packages = true
  non-go = true

  [[prune.project]]
    name = "k8s.io/code-generator"
    unused-packages = false
    non-go = false

  [[prune.project]]
    name = "knative.dev/test-infra"
    non-go = false

...

# The dependencies below are required for opencensus.
[[override]]
  name = "google.golang.org/genproto"
  revision = "357c62f0e4bbba7e6cc403ae09edcf3e2b9028fe"

[[override]]
  name = "contrib.go.opencensus.io/exporter/prometheus"
  version = "0.1.0"

[[override]]
  name = "contrib.go.opencensus.io/exporter/zipkin"
  version = "0.1.1"

[[constraint]]
  name = "go.opencensus.io"
  version = "0.22.0"

[[override]]
  name = "github.com/census-instrumentation/opencensus-proto"
  version = "0.2.0"

[[override]]
  name = "github.com/golang/protobuf"
  version = "1.3.2"
```

required
Let's talk about the required section; this is where you define which packages are required and must be included in the vendor folder:
```toml
required = [
  ...
  "knative.dev/pkg/codegen/cmd/injection-gen",
  # TODO(#4549): Drop this when we drop our patches.
  "k8s.io/kubernetes/pkg/version",
  "knative.dev/caching/pkg/apis/caching",
  # For cluster management in performance testing.
  "knative.dev/pkg/testutils/clustermanager/perf-tests",
  "knative.dev/test-infra/scripts",
  "knative.dev/test-infra/tools/dep-collector",
  # For load testing.
  "github.com/tsenart/vegeta"
]
```

If you look at the vendor folder, you will see these dependencies there, because they are required.
direct and transitive dependency
```
A -> B -> C -> D
```

These are packages that your project imports or includes in the required section. In the example above, A directly imports B, B imports C, and C imports D. In this case, B is a direct dependency of A, while C and D are transitive dependencies.
constraint
This is how you specify which version of a dependency the project should use. In our case, we want our project to use version v2.2.4 of gopkg.in/yaml.v2. To do this:

```toml
[[override]]
  name = "gopkg.in/yaml.v2"
  version = "v2.2.4"
```

You can also use branch or revision to pin your dependency, as in the sketch below.
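For instance, pinning to a branch or to a specific commit looks like this (both entries mirror ones already in the Gopkg.toml above):

```toml
[[constraint]]
  name = "github.com/tsenart/vegeta"
  branch = "master"

[[override]]
  name = "go.uber.org/zap"
  revision = "67bc79d13d155c02fd008f721863ff8cc5f30659"
```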
override
Overrides are like global constraints, but they supersede constraints and should be used as a last resort. Unlike constraints, they apply to both direct and transitive dependencies, so use them sparingly.
```toml
[[override]]
  name = "github.com/golang/protobuf"
  version = "1.3.2"
```

prune
When your project has a dependency, dep extracts the package along with other files like README.md, LEGAL, etc. If you don't want some or all of these extra files, you can use prune to tell dep.
```toml
[prune]
  go-tests = true
  unused-packages = true
  non-go = true

  [[prune.project]]
    name = "k8s.io/code-generator"
    unused-packages = false
    non-go = false

  [[prune.project]]
    name = "knative.dev/test-infra"
    non-go = false
```

The settings above tell dep to remove test files, unused packages, and non-Go files like LEGAL from all dependencies. This is then overridden with different settings for k8s.io/code-generator, whose unused packages and non-Go files should not be pruned, and for knative.dev/test-infra, which keeps its non-Go files.

dep ensure
Once you have configured Gopkg.toml with your settings, you need to apply these changes and update the vendor folder as well as Gopkg.lock. dep ensure will do that for you, and there is an option to tell dep not to update vendor and only update Gopkg.lock.
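The typical commands look roughly like this; the flag names are as I recall them from dep's CLI, so verify them with dep ensure -h:

```bash
# Solve Gopkg.toml, write Gopkg.lock, and populate vendor/
dep ensure

# Update Gopkg.lock if needed, but leave vendor/ untouched
dep ensure -no-vendor

# Re-populate vendor/ strictly from the existing Gopkg.lock
dep ensure -vendor-only
```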
-
Extending AWS EKS Cluster IPs
For most Kubernetes clusters on EKS, there is a fear of running out of IPs, since each pod gets an IP address from the VPC. For large enterprise clusters, this is a real problem. So how do we solve it? We can leverage an AWS VPC feature, secondary CIDR ranges, combined with a customized AWS VPC CNI configuration.
Secondary IPs
For those who are not aware, AWS released a feature back in 2017 that allows you to extend your VPC with secondary CIDRs. This means that if your primary CIDR is 10.2.0.0/16, you can also attach a secondary CIDR like 172.14.0.0/16 to the same VPC. We will take advantage of this, combined with the AWS VPC CNI's ability to use a custom networking configuration, to extend our EKS cluster.
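Outside of Terraform, associating a secondary CIDR is a single AWS CLI call; the VPC ID below is a placeholder:

```bash
# Attach an additional CIDR block to an existing VPC
aws ec2 associate-vpc-cidr-block \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 172.14.0.0/16
```

We use the equivalent Terraform resource, aws_vpc_ipv4_cidr_block_association, in the next section.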
The Cluster
We will create a VPC with a primary CIDR of 10.0.0.0/16 and an additional secondary CIDR of 172.2.0.0/16. In this VPC, we will create three private subnets, two belonging to 10.0.0.0/16 and the other to 172.2.0.0/16.

```hcl
resource "aws_vpc" "eks_vpc" {
  cidr_block         = "10.0.0.0/16"
  enable_dns_support = true
  //These configurations are needed for a private EKS cluster
  enable_dns_hostnames = true
  tags = {
    "kubernetes.io/cluster/test-cluster" = "shared"
  }
}

...

resource "aws_vpc_ipv4_cidr_block_association" "secondary_cidr" {
  vpc_id     = aws_vpc.eks_vpc.id
  cidr_block = "172.2.0.0/16"
}

...

resource "aws_subnet" "private_1" {
  vpc_id            = aws_vpc.eks_vpc.id
  availability_zone = "us-east-1a"
  cidr_block        = "10.0.3.0/24"
  tags = {
    Name                                        = "private_1"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared" #We are adding this because EKS automatically does this anyway.
  }
}

resource "aws_subnet" "private_2" {
  vpc_id            = aws_vpc.eks_vpc.id
  availability_zone = "us-east-1b"
  cidr_block        = "10.0.4.0/24"
  tags = {
    Name                                        = "private_2"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}

#This is the secondary CIDR subnet.
resource "aws_subnet" "private_3" {
  vpc_id            = aws_vpc.eks_vpc.id
  availability_zone = "us-east-1a" #The secondary subnet must be in the same AZ for the AWS CNI to use its IPs.
  cidr_block        = "172.2.3.0/24"
  tags = {
    Name                                        = "private_3"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }
}
```
Now that our VPC has been set up, let's create our EKS cluster and launch it into the private_1 and private_2 subnets, both belonging to the 10.0.0.0/16 CIDR. For our demo, we will launch our worker nodes into one of the subnets in us-east-1a.

```hcl
resource "aws_eks_cluster" "test_cluster" {
  name     = var.cluster_name
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids         = [aws_subnet.private_1.id, aws_subnet.private_2.id] #minimum of 2 is required
    security_group_ids = [aws_security_group.cluster.id]
  }
}

...

resource "aws_launch_template" "eks-cluster-worker-nodes" {
  iam_instance_profile {
    arn = aws_iam_instance_profile.workers-node.arn
  }
  image_id               = data.aws_ami.eks-worker.id
  instance_type          = "t3.medium"
  key_name               = "mykey.pem"
  vpc_security_group_ids = [aws_security_group.workers-node.id]
  user_data              = "${base64encode(local.workers-node-userdata)}"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "eks-cluster-worker-nodes-spot" {
  ...
  mixed_instances_policy {
    ...
    launch_template {
      launch_template_specification {
        launch_template_id = "${aws_launch_template.eks-cluster-worker-nodes.id}"
        version            = "$Latest"
      }
      override {
        instance_type = "t3.medium"
      }
    }
  }
  ...
}
```
To connect to the cluster, we will need the aws-auth config, the kubeconfig, as well as an ENIConfig, which informs the AWS VPC CNI which subnet to use for a particular node. These will be generated from Terraform.

```hcl
locals {
  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.test_cluster.endpoint}
    certificate-authority-data: ${aws_eks_cluster.test_cluster.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${var.cluster_name}"
KUBECONFIG

  config_map_aws_auth = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.workers-node.arn}
      username: system:node:
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH

  awsauth = <<AWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.workers-node.name}
      username: system:node:
      groups:
        - system:bootstrappers
        - system:nodes
AWSAUTH
}

resource "local_file" "kubeconfig" {
  content  = "${local.kubeconfig}"
  filename = "kubeconfig"
}

resource "local_file" "aws_auth" {
  content  = "${local.config_map_aws_auth}"
  filename = "awsauth.yaml"
}

resource "local_file" "eni-a" {
  content  = "${local.eni_a}"
  filename = "eni-${aws_subnet.private_1.availability_zone}.yaml"
}

...
```

DEMO
Once terraform apply completes, these files will be generated and should be applied following the procedure below:
- Export KUBECONFIG pointing to the generated kubeconfig file.

- Run kubectl apply -f eni-us-east-1a.yaml to create the ENIConfig resource, which informs the CNI which subnet to create worker pods on, then update the CNI daemonset:

  ```
  kubectl apply -f eni-us-east-1a.yaml
  kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
  kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone
  ```

- Run kubectl apply -f awsauth.yaml so the worker nodes can join the cluster.

- Once joined, you should see the pods scheduled on the 172.2.3.0/24 subnet rather than on the primary ENI's subnet.
```
$ kubectl get pods -n kube-system -owide
NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE                         NOMINATED NODE   READINESS GATES
aws-node-7tv4z             1/1     Running   0          2m     10.0.3.232    ip-10-0-3-232.ec2.internal   <none>           <none>
coredns-69bc49bfdd-s5t75   1/1     Running   0          3m9s   172.2.3.218   ip-10-0-3-232.ec2.internal   <none>           <none>
coredns-69bc49bfdd-wk48q   1/1     Running   0          3m9s   172.2.3.230   ip-10-0-3-232.ec2.internal   <none>           <none>
kube-proxy-fm564           1/1     Running   0          2m     10.0.3.232    ip-10-0-3-232.ec2.internal   <none>           <none>
```

What happened here is that L-IPAMD launches an ENI and, instead of attaching secondary IPs from the primary ENI's subnet, it uses IPs from the subnet specified in the ENIConfig associated with the node. Note that the subnet in the ENIConfig must be in the same AZ as the subnet of the primary ENI.
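For reference, the ENIConfig manifest produced by the local.eni_a template (its body is not shown in the Terraform snippet above) would look roughly like the sketch below; the subnet and security group IDs are placeholders:

```yaml
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  # Named after the AZ so that ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone can match it.
  name: us-east-1a
spec:
  subnet: subnet-0123456789abcdef0   # the secondary-CIDR subnet (private_3)
  securityGroups:
    - sg-0123456789abcdef0           # security group for the pod ENIs
```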
-
Kubernetes, PodSecurityPolicy And Kubeadm
I will assume you have a little background on PodSecurityPolicy, and now it's time to set up your cluster with the PodSecurityPolicy admission controller enabled. For our setup, here is the kubeadm config:
```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  AppArmor: true
cpuManagerPolicy: static
systemReserved:
  cpu: 500m
  memory: 256M
kubeReserved:
  cpu: 500m
  memory: 256M
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    enable-admission-plugins: PodSecurityPolicy,LimitRanger,ResourceQuota,AlwaysPullImages,DefaultStorageClass
```

Let's start the cluster:
```
$ sudo kubeadm init --config kubeadm.json
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

#join a worker node to the master
$ sudo kubeadm join 10.100.11.231:6443 --token 6yel1a.ce3le6eel3kfnxsz --discovery-token-ca-cert-hash sha256:99ee2e4ea302c5270f2047c7a0093533b69105a8c91bf20f48b230dce9fd3f3a

$ kubectl get no
NAME               STATUS     ROLES    AGE    VERSION
ip-10-100-11-199   NotReady   <none>   109s   v1.13.1
ip-10-100-11-231   NotReady   master   3m3s   v1.13.1
```

As you can see, our cluster is not ready because we need to install a network plugin, which you can confirm by describing the node.
```
$ kubectl describe no ip-10-100-11-231
...
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 31 May 2019 22:50:36 +0000   Fri, 31 May 2019 22:46:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 31 May 2019 22:50:36 +0000   Fri, 31 May 2019 22:46:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 31 May 2019 22:50:36 +0000   Fri, 31 May 2019 22:46:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 31 May 2019 22:50:36 +0000   Fri, 31 May 2019 22:46:39 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
...
```

We will be using Calico as our network plugin, and you can follow the install instructions right here.
```
$ kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
```

To our surprise, there is no calico pod running in kube-system as we would expect. Let's look at the calico-node daemonset:
```
$ kubectl describe daemonset calico-node -n kube-system
Name:           calico-node
Selector:       k8s-app=calico-node
Node-Selector:  beta.kubernetes.io/os=linux
Labels:         k8s-app=calico-node
...
Events:
  Type     Reason        Age                  From                  Message
  ----     ------        ----                 ----                  -------
  Warning  FailedCreate  20s (x15 over 102s)  daemonset-controller  Error creating: pods "calico-node-" is forbidden: no providers available to validate pod request
```

OK, it says no providers are available, which means you do not have any PSP defined that can validate the daemonset's pods. It is a very confusing error message. Now, we will apply the PSP below.
```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: calico-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: false
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
```

```
$ kubectl apply -f calico-psp.yaml
podsecuritypolicy.policy/calico-psp created

$ kubectl get psp
NAME         PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
calico-psp   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
```

So everything should be good now? Well, if you describe the daemonset, you will still see the same message as above. You need to delete and re-apply the calico plugin manifest, for example with the commands sketched below.
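A minimal way to do that, reusing the manifest URL from earlier:

```bash
kubectl delete -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
```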
After this, let's go ahead and describe our daemonset again.

```
$ kubectl describe daemonset calico-node -n kube-system
Name:           calico-node
Selector:       k8s-app=calico-node
Node-Selector:  beta.kubernetes.io/os=linux
Labels:         k8s-app=calico-node
...
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:
Events:
  Type     Reason        Age                From                  Message
  ----     ------        ----               ----                  -------
  Warning  FailedCreate  4s (x12 over 14s)  daemonset-controller  Error creating: pods "calico-node-" is forbidden: unable to validate against any pod security policy: []
```

This error means we now have a PSP, but not one that can be used to validate our daemonset. The daemonset's pods are not allowed to use our PSP, so we need to modify the calico-node clusterrole to permit use of this specific PSP by adding the rule below. Yep, you need to recreate it.
```yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  - apiGroups: ["extensions"]
    resources:
      - podsecuritypolicies
    resourceNames:
      - calico-psp
    verbs:
      - use
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
---
...
```

After the clusterrole has been modified, we will run into another error:
```
...
  host-local-net-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/networks
    HostPathType:
Events:
  Type     Reason        Age               From                  Message
  ----     ------        ----              ----                  -------
  Warning  FailedCreate  3s (x11 over 9s)  daemonset-controller  Error creating: pods "calico-node-" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
...
```

Our calico pods are forbidden because the PSP they are using forbids privileged containers, and Calico needs one to handle network policy. Now, let's go ahead and fix that by updating our PSP:
```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: calico-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true #update this to true
  allowPrivilegeEscalation: true
...
```

Once the update has been applied, calico will create the pods and our nodes will become healthy.
```
...
Events:
  Type     Reason            Age                     From                  Message
  ----     ------            ----                    ----                  -------
  Warning  FailedCreate      3m10s (x16 over 5m54s)  daemonset-controller  Error creating: pods "calico-node-" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
  Normal   SuccessfulCreate  26s                     daemonset-controller  Created pod: calico-node-hwb2b
  Normal   SuccessfulCreate  26s                     daemonset-controller  Created pod: calico-node-gtrm2

$ kubectl get no
NAME               STATUS   ROLES    AGE     VERSION
ip-10-100-11-199   Ready    <none>   7m2s    v1.13.1
ip-10-100-11-231   Ready    master   7m33s   v1.13.1
```

Yayy!!! Our nodes are ready.
-
Let's Talk About Terraform 0.12
Terraform 0.12 was recently released, and you can check out the details in more depth on the Terraform blog. The focus of this post is the new features that were released and how to use them. You can find the code used in this GitHub repo. Below is the content of the EC2 main.tf.
module "test_ec2" { source = "./ec2_module" //I am passing the subnet object instead of the id subnet = aws_subnet.private.0 instance_tags = [ { Key = "Name" Value = "Test" }, { Key = "Environment" Value = "prd" } ] }Here is the content of the ec2 module;
data "aws_ami" "ubuntu" { most_recent = true filter { name = "name" values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"] } filter { name = "virtualization-type" values = ["hvm"] } owners = ["099720109477"] # Canonical } resource "aws_instance" "web" { ami = data.aws_ami.ubuntu.id instance_type = "t2.micro" subnet_id = var.subnet.id user_data = templatefile("${path.module}/userdata.tpl", { instance_tags = var.instance_tags }) tags = { for tag in var.instance_tags: tag.Key => tag.Value } } variable "instance_type" { default = "t2.micro" } variable "instance_tags" { type = list } variable "subnet" { type = object({ id = string }) }For expression
From my perspective, this is one of the best features of 0.12. It allows you to iterate over a list or map and do whatever you want with each item. Above, you can see that we generated the instance tags property from the list of maps passed in by the user; with the tags from main.tf, the expression below produces { Name = "Test", Environment = "prd" }.
```hcl
tags = { for tag in var.instance_tags : tag.Key => tag.Value }
```

First class expression
You can see from above that we did not have to wrap variables or object values in interpolation syntax; instead, they were used directly as first-class expressions. This saves some ink and allows more complex operations in Terraform.
```hcl
...
ami           = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
subnet_id     = var.subnet.id
...
```

Rich Value Type
My impression of this feature so far is that we can have user-defined types, and existing types are supported as first-class types without quotes. Below, we defined a type that expects an object with an id attribute; if a non-matching object is passed in, it will be rejected, which is pretty cool.
variable "instance_tags" { type = list } variable "subnet" { type = object({ id = string }) }New Type Interpolation
Terraform now gives you the capability to loop inside your user data template. For our example, here is the userdata:
```
#!/bin/bash
%{ for tag in instance_tags~}
cat ${tag.Key}=${tag.Value}
%{ endfor ~}
```

We were able to use a for loop to go over the tags and use them in the instance user data, which is lovely, without having to resort to hacks. The rendered result for the tags above is shown below.
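With the two tags passed into the module in main.tf, the template would render roughly as follows (whitespace trimmed by the ~ markers):

```
#!/bin/bash
cat Name=Test
cat Environment=prd
```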
One other feature not covered in detail here is the dynamic block, which you can check out on the Terraform blog page; a brief sketch follows.
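As a taste of what dynamic blocks do, the hypothetical security group below generates one ingress block per port in a list; none of this comes from the repo above, it is only an illustration:

```hcl
variable "ingress_ports" {
  type    = list(number)
  default = [80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  # One ingress block is generated per element of var.ingress_ports.
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
```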