
Google Cloud

"Google Cloud SDK is a set of tools that you can use to manage resources and applications hosted on Google Cloud Platform. These include the gcloud, gsutil, and bq command line tools. The gcloud command-line tool is downloaded along with the Cloud SDK" - Google Cloud SDK documentation

gcloud CLI Examples

Working with gcloud configurations

When working with several projects, it's best to use multiple configurations, with one for each project. Run gcloud init to create a new configuration. This interactive prompt will let you re-initialize the default, or create a new named config. Once you have done that, you can run gcloud config configurations list to show what you have configured. To activate a different configuration, run gcloud config configurations activate <new-config-name>.
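A named configuration can also be created and populated non-interactively, skipping the gcloud init prompts. A minimal sketch, where the configuration name, project, and zone are hypothetical placeholders:

```shell
# Create a named configuration and set its properties directly
# (config name, project, and zone below are placeholder values)
gcloud config configurations create staging
gcloud config set project example-staging
gcloud config set compute/zone us-west4-a
```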

$ gcloud config configurations list
NAME     IS_ACTIVE  PROJECT          COMPUTE_DEFAULT_ZONE  COMPUTE_DEFAULT_REGION
default  False      example-prod
staging  True       example-staging  us-west4-a            us-west4
$ gcloud config configurations activate default
Activated [default].

List google cloud projects

gcloud projects list

Switch to a different project

gcloud config set project "$project_name"

Grant a user permission to a docker registry

# user email, role, and bucket name are placeholders
gsutil iam ch "user:${user_email}:objectViewer" "gs://${bucket_name}"

List google compute zones

gcloud compute zones list

List compute nodes

gcloud compute instances list

List all disks

gcloud compute disks list

Generate ssh commands for all nodes

gcloud compute instances list | awk 'NR>1 {printf "gcloud compute ssh --zone %s %s\n", $2, $1;}'
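The awk transform above can be demonstrated against simulated `gcloud compute instances list` output (node names and zones below are hypothetical):

```shell
# Simulated `gcloud compute instances list` output; NR>1 skips the header row
sample='NAME    ZONE        MACHINE_TYPE  STATUS
node-1  us-west4-a  e2-medium     RUNNING
node-2  us-west4-b  e2-medium     RUNNING'

printf '%s\n' "$sample" |
  awk 'NR>1 {printf "gcloud compute ssh --zone %s %s\n", $2, $1;}'
# → gcloud compute ssh --zone us-west4-a node-1
#   gcloud compute ssh --zone us-west4-b node-2
```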

ssh to a compute node

This is useful for getting system-level access to a GKE node.

ssh-to-gke-node() {
  [ "$#" -gt 0 ] || { echo "Usage: ssh-to-gke-node <node_name> [command]" ; return 1 ; }
  read -r zone host < <(gcloud compute instances list --filter="$1" | awk 'NR==2 {print $2, $1 ;}')
  shift
  gcloud compute ssh --tunnel-through-iap --zone "$zone" "$host" -- "$@"
}

Loop through all gcloud instances and perform a command

gcloud compute instances list |
awk 'NR>1 {printf "gcloud compute ssh --zone %s %s\n", $2, $1;}' |
while read -r ssh_cmd ; do
  $ssh_cmd -- "docker images" </dev/null
done |
sort -u

Create a one-off compute node

gcloud compute instances create "$USER-temp-node" --zone=us-west4-a --network-interface=no-address

Leave off the --network-interface=no-address if you want a public IP address.

Delete a compute node

Sometimes autoscalers have a hard time scaling down, requiring manual termination of idle nodes. The following commands are equivalent:

gcloud compute instances delete "projects/$project_name/zones/$zone/instances/$node_name"
gcloud compute instances delete --project "$project_name" --zone="$zone" "$node_name"

To connect to VMs that don't have a public IP address, pass --tunnel-through-iap on the CLI, and make sure you have the IAP-secured Tunnel User role.
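For example (zone and instance name below are placeholders):

```shell
# Connect to a private-IP VM through Identity-Aware Proxy
gcloud compute ssh --tunnel-through-iap --zone us-west4-a "$node_name"
```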

Add a GKE context to kubectl

This adds the cluster to your .kube/config with authentication done via an access token.

gcloud container clusters get-credentials "${CLUSTER_NAME}" \
  --region "${REGION}" \
  --project "${PROJECT_NAME}"

Add separate kubectl configs for different GKE clusters

This keeps each config in a different file, which is useful for requiring explicit enabling of a given environment, vs the normal behavior of inheriting the last used context.

# Set up individual kube config files for dev, prod and staging

export KUBECONFIG="$HOME/.kube/foo-dev-config"
gcloud container clusters get-credentials dev-cluster --region vn-west4 --project foo-dev
kubectl config rename-context "$(kubectl config current-context)" foo-dev

export KUBECONFIG="$HOME/.kube/foo-prod-config"
gcloud container clusters get-credentials prod-cluster --region vn-west4 --project foo-prod
kubectl config rename-context "$(kubectl config current-context)" foo-prod

export KUBECONFIG="$HOME/.kube/foo-staging-config"
gcloud container clusters get-credentials staging-cluster --region vn-west4 --project foo-staging
kubectl config rename-context "$(kubectl config current-context)" foo-staging

Then set up aliases like:

# ~/.bash_aliases
alias foo-k-dev="export KUBECONFIG=$HOME/.kube/foo-dev-config ; kubectl config set-context foo-dev --namespace=default ;"
alias foo-k-prod="export KUBECONFIG=$HOME/.kube/foo-prod-config ; kubectl config set-context foo-prod --namespace=default ;"
alias foo-k-stage="export KUBECONFIG=$HOME/.kube/foo-staging-config ; kubectl config set-context foo-staging --namespace=default ;"

List images available in Google Container Registry

gcloud container images list

Pull a docker container from Google Container Registry

# project and image names are placeholders
gcloud docker -- pull "gcr.io/${project_name}/${image_name}"

Control access to registries

"Container Registry uses a Cloud Storage bucket as the backend for serving container images. You can control who has access to your Container Registry images by adjusting permissions for the Cloud Storage bucket.

Caution: Container Registry only recognizes permissions set on the Cloud Storage bucket. Container Registry will ignore permissions set on individual objects within the Cloud Storage bucket.

You manage access control in Cloud Storage by using the GCP Console or the gsutil command-line tool. Refer to the gsutil acl and gsutil defacl documentation for more information." - Container Registry documentation

Authenticate a private GCR registry in kubernetes

This is likely not copy/paste material, but the flow is generally correct.

gcloud iam service-accounts create "$USER" --display-name "$USER"
gcloud iam service-accounts keys create key.json --iam-account "$EMAIL"
gcloud projects add-iam-policy-binding "$PROJECT" \
  --member "serviceAccount:$EMAIL" \
  --role "roles/storage.objectAdmin"
kubectl create secret docker-registry "docker-pull-$PROJECT" \
  --docker-server "gcr.io" \
  --docker-username _json_key \
  --docker-email "$EMAIL" \
  --docker-password "$(cat key.json)"

Then reference docker-pull-${PROJECT} as an imagePullSecret in your pod spec.
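One way to wire this up is to attach the pull secret to a service account, so pods using that service account pull with it automatically. A sketch, where the secret name is a hypothetical placeholder:

```shell
# Attach the pull secret to the default service account
# (secret name "docker-pull-myproject" is a placeholder)
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "docker-pull-myproject"}]}'
```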

Set cache expiration of GCP bucket items to 5 minutes

By default, GCP bucket items are publicly cached for 1 hour, which means they can be cached outside of the control of the GCP admin, and requests within that window can return stale results. Set your Cache-Control max-age low for files that change, like page content and indexes, and high for files that never change, like images and archives.

# bucket and object names are placeholders
gsutil setmeta -h "Cache-Control: public, max-age=300" "gs://${bucket_name}/${object_name}"
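Conversely, immutable files can be given a long cache lifetime. A sketch, where the bucket name and path are hypothetical placeholders:

```shell
# One year of public cache for files that never change (bucket/path are placeholders)
gsutil setmeta -h "Cache-Control: public, max-age=31536000" "gs://${bucket_name}/images/*"
```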


View the metadata of a GCP bucket item

# bucket and object names are placeholders
gsutil stat "gs://${bucket_name}/${object_name}"

The output will be something like:

    Creation time:          Fri, 22 Jul 2020 23:20:11 GMT
    Update time:            Mon, 23 Jul 2020 19:17:53 GMT
    Storage class:          MULTI_REGIONAL
    Cache-Control:          public, max-age=300
    Content-Length:         3195714
    Content-Type:           application/octet-stream
    Hash (crc32c):          vA/Awm==
    Hash (md5):             2AJ32cECSriE0UQStsXxyw==
    ETag:                   COP7ew7D5+CAEoI=
    Generation:             1595460011829230
    Metageneration:         5

gcloud web console examples

Logs Explorer examples

Show all images pulled by GKE

This will show pulls of all images except those matching the quoted substrings in the query. The OR appears to be case-sensitive.

-jsonPayload.message : ( "Pulling image \"dade-murphy/leet-image\"" OR "Pulling image \"mr-the-plague/da-vinci\"" )