Google Cloud

"Google Cloud SDK is a set of tools that you can use to manage resources and applications hosted on Google Cloud Platform. These include the gcloud, gsutil, and bq command line tools. The gcloud command-line tool is downloaded along with the Cloud SDK" -

gcloud CLI Examples

List Google Cloud projects

gcloud projects list
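For scripting, the `--format` and `--filter` flags can trim the output down to exactly the fields you need. The label key below is hypothetical; substitute your own.

```shell
# List only project IDs, one per line, suitable for shell loops
gcloud projects list --format="value(projectId)"

# Filter by a label and show selected columns as a table
gcloud projects list --filter="labels.env=dev" --format="table(projectId, name)"
```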

Switch to a different project

gcloud config set project "$project_name"
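To sanity-check which project is active after switching:

```shell
# Show the currently active project
gcloud config get-value project

# Show the full active configuration (account, project, default region/zone)
gcloud config list
```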

Grant a user permission to a Docker registry

Container Registry permissions are managed on the Cloud Storage bucket that backs the registry; for gcr.io images the bucket is named artifacts.$PROJECT_ID.appspot.com.

gsutil iam ch "user:$EMAIL:objectViewer" "gs://artifacts.$PROJECT_ID.appspot.com"

List Google Compute Engine zones

gcloud compute zones list

List compute nodes

gcloud compute instances list

Create a one-off compute node

gcloud compute instances create $USER-temp-node --zone=us-east4-a

Delete a compute node

Sometimes autoscalers have a hard time scaling down, requiring manual termination of idle nodes.

gcloud compute instances delete "projects/$project_name/zones/us-central1-a/instances/$node_name"

ssh to a compute node

This is useful for getting system-level access to a GKE node.

gcloud compute ssh --zone "$ZONE" --project "$PROJECT_NAME" gke-quake3-server-eb0125a5-z5am

Add a GKE context to kubectl

This adds the cluster to your .kube/config with authentication done via an access token.

gcloud container clusters get-credentials "${CLUSTER_NAME}" \
  --region "${REGION}" \
  --project "${PROJECT_NAME}"

Add separate kubectl configs for different GKE clusters

This keeps each cluster's config in its own file, which requires explicitly enabling a given environment, vs the normal behavior of inheriting the last-used context.

# Set up individual kube config files for dev, prod and staging

export KUBECONFIG="$HOME/.kube/foo-dev-config"
gcloud container clusters get-credentials dev-cluster --region vn-west4 --project foo-dev
kubectl config rename-context "$(kubectl config current-context)" foo-dev

export KUBECONFIG="$HOME/.kube/foo-prod-config"
gcloud container clusters get-credentials prod-cluster --region vn-west4 --project foo-prod
kubectl config rename-context "$(kubectl config current-context)" foo-prod

export KUBECONFIG="$HOME/.kube/foo-staging-config"
gcloud container clusters get-credentials staging-cluster --region vn-west4 --project foo-staging
kubectl config rename-context "$(kubectl config current-context)" foo-staging

Then set up aliases like:

# ~/.bash_aliases
alias foo-k-dev="export KUBECONFIG=$HOME/.kube/foo-dev-config ; kubectl config set-context foo-dev --namespace=default ;"
alias foo-k-prod="export KUBECONFIG=$HOME/.kube/foo-prod-config ; kubectl config set-context foo-prod --namespace=default ;"
alias foo-k-stage="export KUBECONFIG=$HOME/.kube/foo-staging-config ; kubectl config set-context foo-staging --namespace=default ;"

List images available in Google Container Registry

gcloud container images list

Pull a docker container from Google Container Registry

The image path below is a placeholder; substitute your own. Note that gcloud docker is deprecated; the current approach is gcloud auth configure-docker followed by a plain docker pull.

gcloud docker -- pull "gcr.io/$PROJECT_ID/$IMAGE_NAME"

Control access to registries

"Container Registry uses a Cloud Storage bucket as the backend for serving container images. You can control who has access to your Container Registry images by adjusting permissions for the Cloud Storage bucket.

Caution: Container Registry only recognizes permissions set on the Cloud Storage bucket. Container Registry will ignore permissions set on individual objects within the Cloud Storage bucket.

You manage access control in Cloud Storage by using the GCP Console or the gsutil command-line tool. Refer to the gsutil acl and gsutil defacl documentation for more information." - Container Registry documentation
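Since permissions live on the backing bucket, you can inspect who has access with gsutil. For gcr.io images the bucket is named artifacts.PROJECT_ID.appspot.com; the project ID below is a placeholder.

```shell
# Show the IAM policy on the GCR backing bucket
gsutil iam get "gs://artifacts.$PROJECT_ID.appspot.com"

# Or view the legacy ACLs
gsutil acl get "gs://artifacts.$PROJECT_ID.appspot.com"
```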

Authenticate to a private GCR registry in Kubernetes

This is likely not copy/paste material, but the flow is generally correct.

gcloud iam service-accounts create "$USER" --display-name "$USER"
gcloud iam service-accounts keys create key.json \
  --iam-account "$EMAIL"
gcloud projects add-iam-policy-binding "$PROJECT" \
  --member "serviceAccount:$EMAIL" \
  --role "roles/storage.objectAdmin"
kubectl create secret docker-registry "docker-pull-$PROJECT" \
  --docker-server gcr.io \
  --docker-username _json_key \
  --docker-email "$EMAIL" \
  --docker-password "$(cat key.json)"

Then use docker-pull-${PROJECT} as the name in your pod's imagePullSecrets.
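One way to wire that secret up is to patch it onto the namespace's default service account, so every pod in the namespace gets it automatically (standard kubectl syntax; the secret name assumes the example above):

```shell
# Attach the pull secret to the default service account in the current namespace
kubectl patch serviceaccount default \
  -p "{\"imagePullSecrets\": [{\"name\": \"docker-pull-$PROJECT\"}]}"
```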

Set cache expiration of GCP bucket items to 5 minutes

By default, publicly readable GCP bucket items are served with a one-hour public cache (Cache-Control: public, max-age=3600), which means items can be cached outside the control of the GCP admin. Within that cache window, requests for the item may return stale content. Set Cache-Control max-age to something low for files that change, like page content and indexes, and long for files that never change, like images and archives.

gsutil setmeta -h "Cache-Control: public, max-age=300" "gs://$BUCKET/$OBJECT"

View the metadata of a GCP bucket item

gsutil stat "gs://$BUCKET/$OBJECT"

The output will be something like:

    Creation time:          Fri, 22 Jul 2020 23:20:11 GMT
    Update time:            Mon, 23 Jul 2020 19:17:53 GMT
    Storage class:          MULTI_REGIONAL
    Cache-Control:          public, max-age=300
    Content-Length:         3195714
    Content-Type:           application/octet-stream
    Hash (crc32c):          vA/Awm==
    Hash (md5):             2AJ32cECSriE0UQStsXxyw==
    ETag:                   COP7ew7D5+CAEoI=
    Generation:             1595460011829230
    Metageneration:         5