
Deployment on Existing Kubernetes Cluster

To deploy vuSmartMaps on an existing Kubernetes cluster, follow the steps below.

Prerequisites

  • A kubeconfig with super-admin access to the Kubernetes environment must be available.
  • The vuLauncher license must contain a valid GitHub username and token to pull the images.
  • For AKS, a static IP address from the same resource group where AKS is provisioned is required; it will be used to expose services such as Kafka, Traefik, and MinIO through a LoadBalancer service.
  • For AKS, if you plan to use premium storage for the Hyperscale hot volume, make sure the machine type supports attaching premium volumes.
  • To ensure sitemanager runs properly after the deployment, open the config.yaml file at launcher/static-files/config.yaml and set user, build_dir, and executable_path for the current vuSiteManager user.
  • Ensure that at least one mount point with 200 GB of disk space is available on all the VMs to store the hot-storage data for Hyperscale.
  • All VMs must have a “/var” partition with a minimum of 80 GB to load the Docker images.

💡Note: Ensure that all the partitions are owned by the same user and user group:

  • sudo chown -R <user>:<group> /data
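The mount-point and partition requirements above can be pre-checked with a short script. This is a sketch: it uses free space reported by df as a proxy for partition size, and assumes /data is the Hyperscale hot-storage mount point (adjust to yours).

```shell
# Pre-flight check for the /var (>= 80 GB) and hot-storage (>= 200 GB) requirements.
min_space_gb() {                      # usage: min_space_gb <mount> <min-GB>
  avail_kb=$(df -Pk "$1" 2>/dev/null | awk 'NR==2 {print $4}')
  [ "$((avail_kb / 1024 / 1024))" -ge "$2" ]
}
min_space_gb /var 80   && echo "/var OK"  || echo "/var has less than 80 GB free"
min_space_gb /data 200 && echo "/data OK" || echo "/data missing or has less than 200 GB free"
```

Run this on every VM before starting the installation.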

Ports Description

Before proceeding, ensure that the following ports are properly configured on your system:

All listed ports must be open between all vuSmartMaps servers and site-manager servers; ports marked accordingly must also be accessible from the desktop.

| SNo | Port | Protocol | Description |
| --- | --- | --- | --- |
| 1 | 6443 | TCP | Orchestration API port |
| 2 | 2379 | TCP | Orchestration key-value DB port |
| 3 | 2380 | TCP | Orchestration key-value DB port |
| 4 | 10250 | TCP | Orchestration service port |
| 5 | 10259 | TCP | Orchestration service port |
| 6 | 10257 | TCP | Orchestration service port |
| 7 | 9200 | TCP | Time Series NoSQL database port |
| 8 | 9300 | TCP | Time Series NoSQL database port |
| 9 | 6379 | TCP | In-memory database port |
| 10 | 9082 | TCP | Kafka API port |
| 11 | 9092 | TCP | Kafka server port |
| 12 | 2181 | TCP | Kafka server port |
| 13 | 2888 | TCP | Kafka server port |
| 14 | 3888 | TCP | Kafka server port |
| 15 | 443 | TCP | UI port; also accessible from the desktop |
| 16 | 8080 | TCP | Installer service port; also accessible from the desktop |
| 17 | 5432 | TCP | Time Series SQL database port |
| 18 | 22 | TCP | SSH port |
| 19 | 30910, 30901 | TCP | Object storage service port; also accessible from the desktop |
| 20 | 13000 | TCP | Webhook port |
| 21 | 8123 | TCP | HyperScale database port |
| 22 | 9000 | TCP | HyperScale database port |
| 23 | 8472 | UDP | VXLAN port |

💡Note: For a single-node deployment, the ports should be opened internally on that node. For a multi-node deployment, the ports should be opened internally on all the nodes, including the sitemanager and launcher VMs. Apart from Traefik, services cannot switch to other ports if there is a conflict with the default ports, so for this release the default service ports will be used.
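As a quick sanity check, the TCP ports above can be probed with bash's /dev/tcp. This is a sketch: replace 127.0.0.1 with each vuSmartMaps / site-manager node in turn, and the port list here is abbreviated.

```shell
# Probe a TCP port; succeeds only if a connection can be opened.
check_port() {                        # usage: check_port <host> <port>
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}
for p in 6443 2379 2380 10250 9200 9092 443 8080 22; do
  if check_port 127.0.0.1 "$p"; then echo "port $p open"; else echo "port $p closed"; fi
done
```

A closed port here usually means a firewall rule is missing or the service is not yet running.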

Downloading the binary

  1. Create a folder in the home directory and download the NG installation binary using the command below:
  • wget https://download.vunetsystems.com/_Downloads_/_vuDocker_/vuSmartMaps_NG_2_9_5.tar.gz --user=<username> --password=<password> --no-check-certificate

💡Note: If you don’t have access to the download server, download the binaries directly from this URL.

Please check with [email protected] to obtain the username and password for the download server.

  2. Validate the downloaded binary using
    • md5sum vuSmartMaps_NG_2_9_5.tar.gz

The output of the md5sum should be 84e3958b0b06eb4c262507a9fd4aa4b7.

    • Please reach out to [email protected] if the md5sum does not match the one above.
  3. Extract the tar file using
    • tar -xvzf vuSmartMaps_NG_2_9_5.tar.gz
  4. Once extracted, start the launcher using
    • ./build/launcher_linux_x86_64
  5. Once the launcher has started successfully, access the launcher user interface from a web browser using the link available.
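The download-validation step can also be done as a single checksum verification, using the hash published above:

```shell
# Verify the downloaded tarball against the published md5 in one step.
# md5sum -c expects "<hash><two spaces><filename>" on stdin.
echo "84e3958b0b06eb4c262507a9fd4aa4b7  vuSmartMaps_NG_2_9_5.tar.gz" | md5sum -c - \
  && echo "checksum OK" \
  || echo "checksum mismatch or file missing - contact [email protected]"
```

Run this in the folder where the tarball was downloaded.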

Welcome Page

  • This will be the starting page for the installation.
  • Click the Proceed to install button to move to the next stage.

 

Upload License 

  1. Here you need to provide a valid license. The license file specifies the services to be installed and the resources they require.
  2. Upload the valid license and click on Continue

💡Note: Please get the updated license files from [email protected]. Also, mention the kind of setup (single node/multi node) you’re using for the deployment.

Installation Environment

  1. Here, you will be prompted to select your installation environment.
  2. Select K3S/AKS for the existing Kubernetes environment and click the Continue button.

Upload Kubeconfig file

  1. Upload kubeconfig which has the super-admin access to the kubernetes cluster.

💡Note: Only a YAML file should be uploaded here.

2. On clicking the Continue button, vuLauncher will verify access to the cluster and fetch the details of the nodes.
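If you want to pre-verify the kubeconfig before uploading it, the same checks can be run manually with kubectl. This is a sketch; the path ./admin.conf is an assumption, and kubectl must be installed on your machine.

```shell
# Manual pre-check mirroring vuLauncher's verification: confirm super-admin
# access and list the nodes the launcher will discover.
KCFG=./admin.conf
if command -v kubectl >/dev/null 2>&1; then
  kubectl --kubeconfig "$KCFG" auth can-i '*' '*' --all-namespaces \
    || echo "not super-admin or cluster unreachable"
  kubectl --kubeconfig "$KCFG" get nodes -o wide || true
else
  echo "kubectl not installed on this machine"
fi
```

`auth can-i '*' '*'` prints "yes" only when the kubeconfig grants unrestricted access, which is what the launcher expects.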

K8s Nodes Selection

  1. Here, you can exclude the nodes where you don’t want to run the services.
  2. This is helpful when we don’t want to schedule our services on the master node, or, in the case of AKS, when there is a reserved pool of nodes that by default does not allow scheduling of any pods. Click Continue once the details are updated.

Configure IP

  1. If a LoadBalancer is available in your Kubernetes cluster (mostly the case with managed Kubernetes), you can create a static IP address, which will then be used to expose the services.
  2. If you don’t have a LoadBalancer, the services can be exposed on the worker nodes; in that case, you can skip this step.
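For illustration, pinning a service to the reserved static IP would look roughly like the manifest below. The IP 10.0.0.100, the vsmaps namespace, and the selector are placeholders; vuLauncher generates the real service for you.

```shell
# Illustrative only: a LoadBalancer service bound to a pre-reserved static IP
# (e.g. on AKS, the IP must come from the same resource group as the cluster).
cat > traefik-lb.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: traefik-lb
  namespace: vsmaps
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.0.100      # placeholder: the reserved static IP
  selector:
    app: traefik                  # placeholder selector
  ports:
    - name: https
      port: 443
      targetPort: 443
EOF
# Apply with: kubectl apply -f traefik-lb.yaml
```

Without a LoadBalancer, the equivalent exposure is a NodePort on the worker nodes, which is what skipping this step falls back to.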

Configure Disk 

  1. Based on the previous step, three storage classes will be configured, one for each type of storage. Assign the storage class and encryption setting for each disk accordingly.
  2. Along with the disk storage, select the Encrypted option for the Hyperscale disk settings and click Continue.

Mapping

VM to Service Mapping (with advanced Configuration)

  1. Here you can override the Service mapping to VM.
  2. By default, the vuLauncher installation script will allocate resources to the available VMs in the best possible way. Click Continue if you’re fine with the default allocation.
  3. If you wish to override it, click the Edit button. This will show a list of VMs, where you can increase or decrease the count as needed.
  4. In the Advanced Configuration section, choose the set of VMs for a service. If a VM goes down, Kubernetes will use the given set of VMs to bring the service back up. By default, all the nodes are chosen.

Customize

  1. Here users can override the port a service runs on.
  2. There may be cases where your enterprise requires standard services to run on non-standard ports; configure the ports for those services here.
  3. To override, click the edit button of the respective service and enter the required port number.
  4. Click Continue to proceed.

Install

  1. This page shows a summary of the information you provided.
  2. You can click the edit button on the details page to go back to the respective section and change the configuration.
  3. You can also click the name of a step in the stepper window to navigate to it.
  4. Then click Continue to start the deployment procedure.

💡Note: Once you start the deployment, you cannot edit the configuration you provided.

Installation Process

  1. The installation page shows each event as it is performed.
  2. Users can click the “View Information” text button to view the installation information.
  3. Users can click Cancel Installation to stop the ongoing installation. Additionally, they can retry if the process halts or the installation stops.

    💡Note: If for some reason your browser or laptop closes and you lose this page, restart the launcher using ./build/launcher_linux_x86_64 and resume your configuration deployment.

  4. Once the installation is successful, a prompt will open. Here, users can click Go to vuSmartMaps, and it will redirect to the vuSmartMaps login page.
  5. Use the Login credentials displayed here, to login to the UI.

Post Deployment Steps

  1. Once the deployment is successful, follow the steps below on the master node.
  2. To find out which node is the master node, execute the following command on the node where the Kubernetes cluster is running. These details are provided during the initial deployment.
  • kubectl get nodes

         

In the output of the above command, the node whose role is shown as master (for example, e2e-69-187) is the master node.

3. Run the following command to take ownership of the kubeconfig file:

sudo chown -R vunet:vunet /etc/kubernetes/admin.conf
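After running the chown, you can confirm the ownership change took effect (the vunet user and group come from the command above):

```shell
# Verify the kubeconfig is now owned by the expected user and group.
owner=$(stat -c '%U:%G' /etc/kubernetes/admin.conf 2>/dev/null || echo "file-not-found")
echo "kubeconfig owner: $owner"   # expect vunet:vunet on a vuSmartMaps master node
```

If the file is reported as missing, re-check that you are on the master node identified above.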
