Deployment on GCP – Google Kubernetes Engine
Google Kubernetes Engine (GKE) is a managed Kubernetes service provided by Google Cloud Platform (GCP). It simplifies the deployment, management, and scaling of containerized applications using Kubernetes. Accessing your GKE cluster from a local machine allows you to manage your Kubernetes resources and applications more efficiently. This guide provides step-by-step instructions to access your GKE cluster from an Ubuntu machine.
Prerequisites
- A Google Cloud Platform account with appropriate permissions.
- A GKE cluster set up in your GCP project (see the next section if you need to create one).
- An Ubuntu machine with internet access.
Steps to Create a GKE Cluster
Access the Google Cloud Console
- Open your web browser and navigate to the Google Cloud Console.
- Log in with your Google account if prompted.
Select or Create a Project
- In the top navigation bar, click the project dropdown (current project name) and select an existing project or create a new one.
- To create a new project, click New Project, enter the project name, and click Create.
Enable the Kubernetes Engine API
- Navigate to the GKE section by clicking on the hamburger menu (three horizontal lines) in the top-left corner.
- Go to Kubernetes Engine > Clusters.
- If the Kubernetes Engine API is not enabled, you will be prompted to enable it. Click Enable.
Create a New Cluster
- On the Clusters page, click Create Cluster.
Configure Cluster Settings
Cluster Basics
- Cluster Name: Enter a name for your cluster.
- Location Type: Choose between Zonal (single zone) and Regional (multiple zones for high availability).
Cluster Location
- Zone: If you choose Zonal, select a zone where you want to deploy your cluster.
- Region: If you choose Regional, select a region and the specific zones where you want to deploy your cluster.
Cluster Version
- Select the Kubernetes version you want to use. You can choose the default version or a specific version if needed.
Node Pools
- Node Pool Name: Enter a name for the node pool.
- Machine Type: Choose the machine type for your nodes. The default is usually n1-standard-1, but you can select a different type based on your needs.
- Number of Nodes: Specify the initial number of nodes in the node pool.
Advanced Settings (Optional)
- Autopilot Mode: Enables a managed mode where Google handles node management for you.
- Networking: Configure network settings such as VPC, subnetwork, and IP address range.
- Security: Set up security features like private clusters, Shielded GKE Nodes, and more.
Review and Create
- Review the cluster configuration you’ve set up.
- Click Create to start creating the cluster.
Wait for Cluster Creation
- The cluster creation process will take a few minutes. You can monitor the progress on the Clusters page.
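Alternatively, a cluster can be created from the command line with gcloud. A minimal sketch with hypothetical values (the cluster name, zone, machine type, and node count below are examples; adjust them to your needs):
gcloud container clusters create my-gke-cluster --zone us-central1-a --machine-type n1-standard-1 --num-nodes 3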
Access Your Cluster
Once the cluster is created, you can manage it using the GCP Console, gcloud CLI, or kubectl.
To access your cluster using kubectl:
- Click Connect on the Clusters page.
- Copy the provided command to configure kubectl to use the new cluster.
gcloud container clusters get-credentials [CLUSTER_NAME] --zone [CLUSTER_ZONE]
Replace [CLUSTER_NAME] with your cluster name and [CLUSTER_ZONE] with the zone where your cluster is located.
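For example, for a hypothetical cluster named my-gke-cluster in the us-central1-a zone:
gcloud container clusters get-credentials my-gke-cluster --zone us-central1-a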
Steps to Access GKE Cluster
Install Google Cloud SDK (gcloud CLI)
The Google Cloud SDK provides the gcloud command-line tool, which is necessary for interacting with GCP services, including GKE.
Install the Google Cloud SDK:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates gnupg curl
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update
sudo apt-get install -y google-cloud-sdk
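To confirm the installation succeeded, print the installed version:
gcloud --version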
Authenticate with Google Cloud
Use the gcloud CLI to authenticate with your Google Cloud account:
gcloud auth login
Follow the prompts in your browser to complete the authentication process.
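If the Ubuntu machine has no local browser (for example, when working over SSH), you can complete authentication from another machine's browser instead:
gcloud auth login --no-launch-browser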
Set the Project
Set the GCP project that contains your GKE cluster:
gcloud config set project [PROJECT_ID]
Replace [PROJECT_ID] with your GCP project ID. You can find this ID in the Google Cloud Console.
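If you are unsure of the project ID, you can list the projects your account can access:
gcloud projects list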
Get Cluster Credentials
Retrieve the credentials for your GKE cluster to configure kubectl to use it:
gcloud container clusters get-credentials [CLUSTER_NAME] --zone [CLUSTER_ZONE]
Replace [CLUSTER_NAME] with the name of your GKE cluster and [CLUSTER_ZONE] with the zone where your cluster is located. You can find the cluster zone in the Google Cloud Console or by listing your clusters:
gcloud container clusters list
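Note that for a regional cluster, the --zone flag is replaced by --region (the region below is an example):
gcloud container clusters get-credentials [CLUSTER_NAME] --region us-central1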
Verify Access
Check that you can access your cluster by listing the nodes:
kubectl get nodes
If the setup is successful, you should see a list of the nodes in your GKE cluster.
Install kubectl (Optional)
If kubectl is not already installed, you can install it via the Google Cloud SDK:
gcloud components install kubectl
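If the SDK was installed through apt as shown above, the gcloud component manager is disabled; in that case, install kubectl from the same apt repository instead:
sudo apt-get install -y kubectl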
Downloading the binary
- Create a folder in the home directory and download the NG installation binary using the command below
- wget https://download.vunetsystems.com/_Downloads_/_vuDocker_/vuSmartMaps_NG_2_11.tar.gz --user=<username> --password=<password> --no-check-certificate
💡Note: If you don’t have access to the download server, download the binaries directly from this URL
Please contact [email protected] for the download server credentials.
- Extract the tar file using
- tar -xvzf vuSmartMaps_NG_2_11.tar.gz
- Once extracted, start the launcher using
- ./build/launcher_linux_x86_64
- Once the launcher has started successfully, access the launcher user interface from a web browser using the link provided.
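If you want the launcher to keep running after your SSH session closes, one option is to start it in the background (the log file name is arbitrary):
nohup ./build/launcher_linux_x86_64 > launcher.log 2>&1 &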
Welcome Page
- This is the starting page for the installation.
- Click the Proceed to install button to start the NG installation.
Upload License
- Here you need to provide a valid license. The license file lists the services to be installed along with the resources they require.
- Upload the valid license and click on Continue.
💡Note: Please get the updated license files from [email protected].
Also, mention the kind of setup (single node/multi node) you are deploying.
Installation Environment
- Here, you will be prompted to select your installation environment.
- Select the installation environment as Google Cloud.
- Click the Continue button to proceed further.
Upload Kubeconfig file
- Upload a kubeconfig file that has super-admin access to the Kubernetes cluster.
💡Note: Only a YAML file should be uploaded here.
- The kubeconfig file is typically found in the .kube folder. Copy it to another directory (see the example below), upload that copy to the launcher, and then click Continue.
- On clicking Continue, vuLauncher will verify access to the cluster and fetch the details of the nodes.
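A minimal example of copying the file (the destination path and file name are arbitrary):
cp ~/.kube/config ~/gke-kubeconfig.yaml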
K8s Nodes Selection
- Here, you can exclude some of the nodes where you don’t want to run the services.
- This is helpful when you don’t want to schedule services on the master node, or when (as with AKS) there is a reserved pool of nodes that by default does not allow scheduling of any pods. Click Continue once the details are updated.
Configure IP
- If a load balancer is available in your Kubernetes cluster (usually the case with managed Kubernetes), you can create a static IP address that will then be used to expose the services.
- If you don’t have a load balancer, the services can be exposed on the worker nodes instead; in that case you can skip this step.
Configure Disk
- Based on the previous step, three storage classes will be configured, one for each type of storage. Assign the storage class and encryption setting for each disk accordingly (you can inspect the available storage classes with the command below).
- Along with the disk storage, select the Encrypted option here for the Hyperscale disk settings and click Continue.
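To list the storage classes visible to the cluster:
kubectl get storageclass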
Mapping
VM to Service Mapping (with advanced Configuration)
- Here you can override the service-to-VM mapping.
- By default, the vuLauncher installation script allocates resources across the available VMs in the best possible way. Click Continue if you’re fine with the default allocation.
- To override the defaults, click the Edit button. This opens a list of VMs where you can increase or decrease the count as needed.
- In the Advanced Configuration section, choose the set of VMs for a service. If a VM goes down, Kubernetes will bring the service back up on one of the VMs in this set. By default, all nodes are chosen.
Customize
- Here users can override the port that the service is running on.
- There may be cases where your enterprise requires you to run standard services on non-standard ports. Configure the ports for those services here.
- To override a port, click the edit button of the respective service and enter the required port number.
- Click Continue to proceed.
Install
- This page shows a summary of the information you have provided.
- Click the edit button next to any detail to return to its section and change the value.
- You can also click a step name in the stepper to navigate directly to it.
- Then click Continue to start the deployment.
Installation Process
- The installation screen shows each event to be performed.
- You can click Cancel Installation to stop the ongoing installation, and retry if the process halts or the installation stops.
- Once the installation is successful, a prompt will open. Click Go to vuSmartMaps to be redirected to the vuSmartMaps login page.
- Use the login credentials displayed here to log in to the UI.
Creating LoadBalancer Services in GKE
This document outlines the steps to create LoadBalancer services in Google Kubernetes Engine (GKE) for exposing Traefik for web traffic and Kafka for data ingestion.
- Traefik: Create one LoadBalancer service to handle web traffic.
- Kafka: Create N LoadBalancer services, where N is the number of Kafka nodes, to handle data ingestion.
Prerequisites
- Ensure you have the necessary permissions to create LoadBalancer services in your GKE cluster.
- Have your GKE cluster and relevant services (Traefik and Kafka) already deployed.
Steps to Create LoadBalancer Services
Create a LoadBalancer for Traefik
Update Traefik values.yaml
Ensure your Traefik values.yaml is configured as follows:
service:
  annotations: {}
  annotationsTCP: {}
  annotationsUDP: {}
  enabled: true
  externalIPs: []
  labels: {}
  loadBalancerSourceRanges: []
  single: true
  spec: null
  type: LoadBalancer
Apply the Traefik Configuration
If you haven’t already applied the configuration, use Helm to upgrade or install Traefik with the updated values.yaml:
helm upgrade <release-name> . -n vsmaps
Kafka LoadBalancers with Static IPs
Update Kafka values.yaml
Add the static IPs to Kafka values.yaml as follows:
loadbalancer:
  enabled: true
  loadBalancerIP: ["34.72.203.13", "35.224.99.197", "35.238.126.234"]
  servicePort: 31092
Apply the Kafka Configuration
Use Helm to upgrade or install Kafka with the updated values.yaml:
helm upgrade <release-name> . -n vsmaps
Verify the Services
Check the status of the services to ensure they are up and running with external IPs assigned.
kubectl get svc -n vsmaps
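External IP assignment can take a minute or two; the services will show a pending external IP until GCP provisions it. You can watch for the change with:
kubectl get svc -n vsmaps -w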
Creating and Describing Static IP Addresses in GKE
Create the Static IP Address
Run the following command to create the static IP address:
gcloud compute addresses create <STATIC_IP_NAME> --region=<REGION>
Replace <STATIC_IP_NAME> with a name for the static IP and <REGION> with the region where the static IP should be reserved.
Ex:
gcloud compute addresses create kafka-static-ip-0 --region=us-central1
gcloud compute addresses create kafka-static-ip-1 --region=us-central1
gcloud compute addresses create kafka-static-ip-2 --region=us-central1
Describe the Static IP Address
To get the details of the static IP address, including the actual IP address assigned, run:
gcloud compute addresses describe <NAME> --region=<REGION> --format="get(address)"
Ex:
gcloud compute addresses describe kafka-static-ip-0 --region=us-central1 --format="get(address)"
gcloud compute addresses describe kafka-static-ip-1 --region=us-central1 --format="get(address)"
gcloud compute addresses describe kafka-static-ip-2 --region=us-central1 --format="get(address)"