
On-Prem Deployment & Installation

Introduction

vuLauncher is a specialized application designed to facilitate the installation of vuSmartMaps™ within a customer-managed Kubernetes cluster or Virtual Machine (VM) based environment only. It offers a user-friendly graphical interface (UI) through which users can input essential details about the target environment.

An extension to vuLauncher called vuSiteManager is a monitoring and management application that observes, manages, and orchestrates vuSmartMaps. It is installed as part of the vuSmartMaps installation and has its own front-end UI and backend datastore.

vuSiteManager offers several functionalities, including:

  1. Self-Observability: It continuously observes the vuSmartMaps installation and notifies the administrator of any issues in vuSmartMaps performance, enabling users to take corrective action if necessary.
  2. Alerts: Stay informed instantly about any downtime experienced by VM hosts or K8s components. These notifications are displayed as Alerts, providing detailed descriptions in an easy-to-read tabular format.

Deployment prerequisites

Deployment Environment: Managed Kubernetes Cluster

vuSmartMaps requires a managed Kubernetes cluster provided by the customer. Supported Kubernetes clusters include:

  • Native Kubernetes clusters managed by the customer.
  • Amazon Elastic Kubernetes Service (Amazon EKS).

Prerequisites for Managed Kubernetes Cluster

  • Dedicated VM:
    • A separate VM is required to install vuSiteManager, which manages the vuSmartMaps installation. Specifications for this VM are detailed in the sizing sheet.
  • Supported Operating Systems:
    • Kubernetes nodes must run one of the following OS versions:
      • Ubuntu: 20.04, 22.04
      • RHEL: 7.x, 8.x, 9.x
      • Oracle Linux: 7.x, 8.x
  • Storage Requirements:
    • Storage class provisioned as per the sizing sheet.
    • The platform must have permission to create Persistent Volumes (PV) and Persistent Volume Claims (PVC). The storage class must support dynamic PVC provisioning.
  • Network and Permission Requirements:
    • Access to the host network is required.
    • Permission to create and use Custom Resource Definitions (CRDs).
    • CNI (Container Network Interface) must be configured to support network policies (e.g., Calico CNI).
    • Full access to Kubernetes APIs is necessary (kubectl auth can-i '*' '*' must return "yes").
    • An external load balancer is preferred for exposing web services and ingesting data (logs, traces, and metrics).
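The permission and storage requirements above can be sanity-checked before installation from any machine with kubectl access. This is an illustrative pre-flight script, not part of vuLauncher; the check names are our own.

```shell
#!/usr/bin/env bash
# Illustrative pre-flight check for the managed-cluster prerequisites above.
check() {  # prints PASS/FAIL for a named check based on the command's exit status
  local name="$1"; shift
  if "$@" >/dev/null 2>&1; then echo "PASS: $name"; else echo "FAIL: $name"; fi
}

if command -v kubectl >/dev/null 2>&1; then
  check "full Kubernetes API access"  kubectl auth can-i '*' '*'
  check "can create CRDs"             kubectl auth can-i create customresourcedefinitions
  check "can create PVs"              kubectl auth can-i create persistentvolumes
  check "can create PVCs"             kubectl auth can-i create persistentvolumeclaims
  check "a storage class exists"      kubectl get storageclass
else
  echo "kubectl not found; run this from a machine with cluster access"
fi
```

kubectl auth can-i exits 0 when the answer is "yes", so each check maps directly onto a PASS/FAIL line.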

Deployment Environment: VM-Based Installation (If Kubernetes Cluster Is Not Feasible)

If providing a managed Kubernetes cluster is not feasible, VuNet can support VM-based installation. This requires additional time and effort to set up a native Kubernetes cluster using VMs.

  • Supported Operating Systems for VMs:
    • Ubuntu: 20.04, 22.04
    • RHEL: 7.x, 8.x, 9.x
    • Oracle Linux: 7.x, 8.x
  • Security Considerations:
    • If the VMs have undergone security hardening, ensure all Kubernetes dependencies are allowed/whitelisted.

General Network and Node/VM Permissions

  • Network Access:
    • Access requirements as specified in VuNet Documentation.
    • Sudo user privileges are required for SiteManager VM and all Kubernetes nodes/VMs.
    • Internet access on Kubernetes nodes or VMs during installation is required for installing any missing Linux package dependencies. If complete internet access is not feasible, applicable package managers should be whitelisted.
    • Package manager URLs should be allowed post-installation for managing upgrades.
  • Storage and Partitioning:
    • Kubernetes nodes/VMs and storage must be provisioned as per the sizing sheet.
    • Ensure all partitions are used by the same user and group.
    • At least one mount point must be available on the VM for storing hot storage data for Hyperscale.
    • A /var partition with a minimum of 20 GB storage is required on all Kubernetes nodes/VMs.
  • Additional Permissions:
    • SSH and/or restart permissions for Kubernetes nodes/VMs are preferred.
    • VMs/Kubernetes nodes should not be part of an existing Kubernetes/Docker Swarm cluster.
    • Any previous non-NG version of vuSmartMaps must be fully removed before installing the NG version. It must be a freshly installed VM.
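The /var partition and swap requirements above can be checked per node with a short script. This is a sketch under the assumptions of this section (GNU df, the 20 GB /var figure); adjust paths for your environment.

```shell
#!/usr/bin/env bash
# Illustrative per-node check for the /var and swap requirements above.
free_gb() {  # available space, in GB, on the filesystem holding $1
  df -BG --output=avail "$1" 2>/dev/null | tail -1 | tr -dc '0-9'
}

avail=$(free_gb /var)
if [ "${avail:-0}" -ge 20 ]; then
  echo "/var OK (${avail}G available)"
else
  echo "/var below the 20GB requirement (${avail:-0}G available)"
fi

# Swap must be disabled on RHEL nodes
if [ -z "$(swapon --show 2>/dev/null)" ]; then
  echo "swap is disabled"
else
  echo "swap is still enabled; run: sudo swapoff -a"
fi
```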

Special Requirements for Specific Kubernetes Platforms/Operating Systems

A few Kubernetes base platforms/operating systems have specific requirements for vuSmartMaps installation; the table below captures these additional requirements.

Kubernetes base platform / Operating system | Special Requirements | References/Comments
Ubuntu | Install SELinux packages if they don't exist | sudo apt-get install selinux-basics selinux-policy-default
RedHat | Install iSCSI packages if they don't exist | yum install iscsi-initiator-utils; systemctl start iscsid.service; systemctl status iscsid.service
RedHat | Disable SELinux | How to Disable SELinux Temporarily or Permanently. On RHEL 8.x, install setenforce if it is missing (provided by the libselinux-utils package).
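The RedHat-specific steps from the table above can be wrapped as follows. This is a sketch to run as root on each RHEL node; the function names are ours, and permanently disabling SELinux requires a reboot.

```shell
# Sketch of the RedHat-specific steps from the table above. Run as root.
disable_selinux() {
  setenforce 0   # temporary: switches SELinux to permissive until reboot
  # permanent: persist the setting, then reboot the node
  sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
}

install_iscsi() {
  yum install -y iscsi-initiator-utils
  systemctl start iscsid.service
  systemctl status iscsid.service
}
```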

Launcher Pre-requisites

Before utilizing vuLauncher, ensure the following prerequisites are met:

  1. Kubernetes nodes/VMs and storage provisioned as per the storage sizing requirements.
  2. A "/var" partition with a minimum of 20GB storage must be present on all VMs.

💡Note: Ensure that all the partitions are owned by the same user and user group.

  • Sudo access should be provided to the user in order to set up the Kubernetes cluster.
  • Follow the steps below to have the partitions owned by the same user and user group. The same user and user group should be present across all the VMs.
  • The steps below create a group called rvunet and add the vunet user to the group:

    1. Create the group: groupadd rvunet
    2. Add the users to the group: usermod -aG rvunet root
    3. Change the group ownership of /data: chown -R :rvunet /data
    4. Set group write permissions: chmod -R g+w /data
    5. Set the setgid bit for the same partition: chmod g+s /data

  3. Create 2 more mount points if you wish to store Hyperscale data for a longer duration (warm and cold), and follow the above steps to set the ownership of those partitions.

💡Note: The partitions should have the same name across all the VMs.

  4. For RHEL OS, disable swap and allow mount points to detach automatically:
    1. Disable swap with the command: sudo swapoff -a
    2. Add the line below to /etc/sysctl.conf to detach mount points automatically: fs.may_detach_mounts=1
  5. Supported OS:
    1. Ubuntu – 20.04, 22.04
    2. RHEL – 8.x
    3. Oracle Linux – 7.x, 8.x
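The partition-ownership steps above can be consolidated into one helper. A sketch to run as root on every VM; the function name is ours, and rvunet/vunet//data are the example names used in this guide.

```shell
# The partition-ownership steps above, consolidated. Run as root on every VM.
setup_partition_group() {
  local group="$1"; shift   # e.g. rvunet
  local user="$1"; shift    # e.g. vunet
  groupadd -f "$group"      # -f: succeed even if the group already exists
  usermod -aG "$group" "$user"
  local m
  for m in "$@"; do         # one or more mount points, e.g. /data
    chown -R ":$group" "$m" # group ownership
    chmod -R g+w "$m"       # group write permissions
    chmod g+s "$m"          # setgid so new files inherit the group
  done
}
# Example: setup_partition_group rvunet vunet /data /data2 /data3
```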

Access Requirements

Before proceeding, ensure that the following ports are properly configured on your system.

External Communication

External Access Requirements to download vuSmartMaps binary (for installations and during upgrades)


SNo. | Source | Destination | Ports | Protocol | Description
1 | Server where vuLauncher is executed | download.vunetsystems.com (216.48.186.166) | 443 | TCP | To download the vuSmartMaps installation package
2 | Server where vuLauncher is executed and all vuSmartMaps servers | https://ghcr.io/vunetsystems/ | 443 | TCP | VuNet's GitHub container repository
3 | Server where vuLauncher is executed and all vuSmartMaps servers | https://pkg-containers.githubusercontent.com | 443 | TCP | VuNet's GitHub package containers

Along with the above whitelisting, internet access is required on the Kubernetes nodes or VMs during the installation. This is required for installing any missing Linux package dependencies. If complete internet access is not feasible, the applicable package managers (apt, dpkg, yum, dnf, rpm, zypp, pacman, portage, apk-tools, xbps, pkgtools, or nix) should be whitelisted. We recommend that package manager URLs remain allowed post-installation as well, to manage upgrades.
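Reachability to the download endpoints in the table above can be verified from the vuLauncher server before starting. A minimal sketch; the helper name is ours, and any HTTP response counts as reachable.

```shell
# Quick reachability check for the download endpoints listed above.
reachable() {  # succeeds if an HTTPS connection to $1 can be established
  curl -s -o /dev/null --connect-timeout 5 --max-time 10 "https://$1"
}

for host in download.vunetsystems.com ghcr.io pkg-containers.githubusercontent.com; do
  if reachable "$host"; then
    echo "OK:   $host"
  else
    echo "FAIL: $host (check firewall/proxy whitelisting)"
  fi
done
```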

Generic External Access Requirements

SNo. | Source | Destination | Ports | Protocol | Description
1 | Users and administrators of vuSmartMaps | vuSmartMaps server | 22, 443, 8080 | TCP | vuSmartMaps installation, launch, and configuration
2 | Administrator who is installing and setting up vuSmartMaps | vuSmartMaps server | 8082, 30901, 30910 | TCP | Accessing the Longhorn and MinIO UIs during installation. Post installation, these ports are not required.

Data Source specific Access Requirements for telemetry collection.

SNo. | Source | Destination | Ports | Protocol | Description
1 | Servers on which vuSmartMaps agents are installed | vuSmartMaps server | 31092 | TCP | Port for agents to send logs & metrics to vuSmartMaps
2 | Application servers on which OpenTelemetry-based instrumentation or agents are set up | vuSmartMaps server | 4317, 4318 | TCP | Port for OpenTelemetry-based traces, logs, and metrics using OTLP
3 | vuSmartMaps server | Network devices providing SNMP-based access to telemetry | 161 | UDP | Port for SNMP polling from vuSmartMaps onto network/security devices
4 | vuSmartMaps server | Systems to which vuSmartMaps needs to connect over HTTP for telemetry collection | 443 | TCP | Port for HTTP polling, Prometheus scraping, etc.
5 | vuSmartMaps server | Databases to which vuSmartMaps needs to connect over JDBC for performance metrics collection | Various database ports (3306 – MySQL, 1521 – Oracle, 5432 – Postgres, 1433 – MS SQL) | TCP | Port for JDBC-based polling
6 | Network devices sending SNMP traps | vuSmartMaps server | 162 | UDP | Port for sending SNMP traps from network devices to vuSmartMaps
7 | Network devices sending syslogs | vuSmartMaps server | 514 | UDP | Port for sending syslogs from network devices to vuSmartMaps
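On the vuSmartMaps server, the ingestion ports from the table above could be opened like this, assuming firewalld is the host firewall (adjust for iptables/ufw or cloud security groups — that choice is ours, not mandated by the product):

```shell
# Sketch: open the telemetry-ingestion ports from the table above (firewalld assumed).
open_ingest_ports() {
  local p
  for p in 31092/tcp 4317/tcp 4318/tcp 162/udp 514/udp; do
    firewall-cmd --permanent --add-port="$p"
  done
  firewall-cmd --reload   # apply the permanent rules to the running firewall
}
# Run as root on the vuSmartMaps server: open_ingest_ports
```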

Intra Cluster Communication

Within the vuSmartMaps cluster, various services, including the Kubernetes control plane, interact across nodes. This requires the respective communication ports (TCP) to be opened.

Unrestricted communication across nodes within the cluster is preferred. However, if access control policies are in place using firewalls or hypervisors, the following communication ports are to be opened.

Node Types – there will be two types of nodes (Virtual Machines) allocated for vuSmartMaps installation.

  • vuSiteManager VM: This VM runs the installation, management, and self-monitoring software and sits outside the vuSmartMaps Kubernetes cluster. It requires access to nodes/services in the vuSmartMaps Kubernetes cluster for installation, deployment, management, and monitoring.
  • vuSmartMaps Nodes: These are the VMs in the vuSmartMaps cluster running vuSmartMaps services.

Ports/Communication between vuSiteManager VM and vuSmartMaps VMs

SNo. | Port | Protocol | Description
1 | 6443 | TCP | Orchestration API port. This port should be open for communication from the Site Manager VM to all vuSmartMaps VMs.
2 | 22 | TCP | SSH port. This port should be open for communication from the Site Manager VM to all vuSmartMaps VMs.
3 | 443 | TCP | UI port. Ports should be open between all vuSmartMaps servers and site-manager servers.
4 | 9000 | TCP | HyperScale port. This port should be open for communication from the Site Manager VM to all vuSmartMaps VMs.

Ports/Communication between all vuSmartMaps VMs

SNo. | Port | Protocol | Description
1 | 6443 | TCP | Kubernetes orchestration API port. This port should be open for communication between all vuSmartMaps VMs.
2 | 10250 | TCP | Kubernetes orchestration service port. This port should be open for communication between all vuSmartMaps VMs.
3 | 10259 | TCP | Kubernetes orchestration service port. This port should be open for communication between all vuSmartMaps VMs.
4 | 10257 | TCP | Kubernetes orchestration service port. This port should be open for communication between all vuSmartMaps VMs.
5 | 8472 | UDP | Kubernetes VXLAN port. This port should be open for communication between all vuSmartMaps VMs.
6 | 2379 | TCP | Kubernetes orchestration key-value DB port. This port should be open for communication between all vuSmartMaps VMs.
7 | 2380 | TCP | Kubernetes orchestration key-value DB port. This port should be open for communication between all vuSmartMaps VMs.
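The intra-cluster ports in the two tables above can be opened with a short loop on every vuSmartMaps VM. A sketch assuming firewalld; adjust for your firewall tooling.

```shell
# Sketch: open the intra-cluster ports from the tables above (firewalld assumed).
open_cluster_ports() {
  local p
  for p in 6443 22 443 9000 10250 10259 10257 2379 2380; do
    firewall-cmd --permanent --add-port="$p/tcp"
  done
  firewall-cmd --permanent --add-port=8472/udp   # Kubernetes VXLAN
  firewall-cmd --reload
}
# Run as root on every vuSmartMaps VM: open_cluster_ports
```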

💡Note: For a single-node deployment, ports should be opened internally on the same node. In the case of a multi-node deployment, ports should be opened internally on all the nodes, including the SiteManager and launcher VMs. Apart from Traefik, other services cannot switch to other ports if there is a conflict with the default ports. So, for this release, the default service ports will be used.

VM Based Installation

Single Node Installation

You must have access to the VM with the minimal configuration mentioned in the prerequisites section. You should have the SSH credentials/key to be able to SSH to the virtual machine. The OS of the VM should be Linux.

Downloading the binary

  1. Create a folder in the home directory and download the NG installation binary using the command below:
  • wget https://download.vunetsystems.com/_Downloads_/_vuDocker_/vuSmartMaps_NG_2_12_0.tar.gz --user=<username> --password=<password> --no-check-certificate

💡Note: If you don’t have access to the download server, download the binaries directly from this URL

Please check with [email protected] to get the credentials for the download server.

  2. Extract the tar file using
    • tar -xvzf vuSmartMaps_NG_2_12_0.tar.gz
  3. Once extracted, start the launcher using
    • ./build/launcher_linux_x86_64

💡Note: To keep the launcher running even after the terminal session is exited, run the launcher binary in the background using a utility such as tmux, nohup, or screen.

  4. Once the launcher has started successfully, access the launcher User Interface from a web browser using the link displayed.
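The background-run suggestion above can look like this with nohup (the log file name is our choice):

```shell
# Keep the launcher running after the terminal exits; output goes to launcher.log.
nohup ./build/launcher_linux_x86_64 > launcher.log 2>&1 &
echo "launcher started with PID $!"   # note the PID so the launcher can be stopped later
```

With tmux or screen, start the binary inside a named session instead, so you can reattach to the launcher's console later.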

Welcome Page

  • This will be the starting page for Installation.
  • Click on Proceed to install button, to start with the actual NG installation.

Upload License

  1. Here you need to provide a valid license. This license file will contain the services that are going to be installed along with its required resources.
  2. Upload the valid license and click on Continue.

💡Note: Please get the updated license file from [email protected]

Installation Environment

  1. Here, you will be prompted to select your installation environment choice
  2. Select the installation environment as Virtual Machine.
  3. Click the Continue button to proceed further.

Configure VM

  1. Here you need to provide the VM Credentials, along with either private key or password-based authentication details.
  2. Under the IP Address section, add the public IP address of the VM where you want to install vuSmartMaps.
  3. After providing the credentials, the metrics for the VM will be displayed on the right-hand side.
  4. Verify the VM details and click on Continue.

💡Note: The VM Credentials will be shared, along with the VM details.

Configure Data Store

For Hyperscale data tier configuration, the following options are available:

  1. Hot: The most frequently used data is stored here, so preferably choose a storage class with high disk IOPS.
  2. Warm: Data that is accessed less frequently is stored in warm disk space, so a default storage class is sufficient.
  3. Cold: Data is stored here for archival purposes, in an S3 bucket (MinIO).
  4. In this step, select the nodes where the Hot, Warm, and Cold mounts are available. Based on this selection, the storage classes will be configured and used for storing data in the Hyperscale database.
  5. Based on the requirements, choose the disk(s) required for the installation and click Continue.

💡Note: HOT Disk should always be selected.

Configure Disk

  1. Based on the previous step, 3 storage classes will be configured, one for each type of storage. Accordingly, we need to assign the storage class and encryption setting for each disk.
  2. Along with the disk storage, select the Encrypted option here for the Hyperscale disk settings and click on Continue.

💡Note: Since we have only one mount point, we are going with only the Hot disk configuration. Select Warm as Longhorn-hdd and Cold as Longhorn-archive in case 2 more mount points are added.

Mapping

VM to Service Mapping (with advanced Configuration)

  1. Here you can override the service-to-VM mapping.
  2. By default, the vuLauncher installation script will allocate resources to the available VMs in the best possible way. Click Continue if you're fine with the default allocation.
  3. If the user wishes to override, they can click the ‘Edit’ button. This will prompt them with a list of VMs, where they can increase or decrease the count as needed.
  4. In the Advanced Configuration section, choose the set of VMs for a service. If a VM goes down, then kubernetes will choose the given set of VMs to bring up this service.  By default, all the nodes are chosen.

Customize

  1. Here users can override the port that the service is running on.
  2. There may be cases where your enterprise requires you to run standard services on non-standard ports. Configure the ports for these services here.
  3. To override, click the edit button of the respective service, and then enter the required port number.
  4. Click Continue to proceed.

Install

  1. This page shows a summary of the information you provided.
  2. You can click the edit button on the details page to move back to the respective section and change the configuration.
  3. You can also click a step name in the stepper window to navigate.
  4. Then click Continue to start the deployment procedure.

💡Note: Once you start the deployment, you cannot edit the configuration you provided.

Installation Process

  1. The installation displays each event that will be performed.
  2. Users can click Cancel Installation to stop the ongoing installation. Additionally, they can retry if the process is halted or if the installation stops.

💡Note: If for some reason your browser or laptop closes and you lose this page, restart the launcher using ./build/launcher_linux_x86_64 and resume your configuration deployment.

3. Once the installation is successful, a prompt will open. Here, users can click Go to vuSmartMaps, and it will redirect to the vuSmartMaps login page.

4. Use the login credentials displayed here to access the UI.

Post Deployment Steps

Once the deployment is successful, run the following commands on the master node to own the kube config file and to list the storage classes:

  • sudo chown -R vunet:vunet /etc/kubernetes/admin.conf
  • kubectl get sc

Along with the above, please verify the below scenarios:

S No. | Description
1 | Sufficient PVC allocation for all the resources
2 | Kafka and ClickHouse replicas and instances, in the case of a multi-node deployment
3 | Post jobs deployed successfully, which includes the below:

  • Default system dashboards
  • Enrichment connector
  • Notification tables under the vusmart database in Hyperscale
  • O11y Sources available in this NG version
  • Agent binaries, vublock templates, and the vustream template should be available in the respective MinIO buckets
  • Public, report, hs-archives, and vublock buckets should be created along with the required images and folders
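The PVC and pod health checks implied by the verification list above can be scripted. A sketch; the column positions assume default kubectl output, and the helper names are ours.

```shell
# Sketch of the post-deployment verification above; run where kubectl is configured.
pending_pvcs() {   # PVCs that are not yet Bound (STATUS is column 3 with -A)
  kubectl get pvc -A --no-headers 2>/dev/null | awk '$3 != "Bound"'
}
unhealthy_pods() { # pods not Running/Completed (STATUS is column 4 with -A)
  kubectl get pods -A --no-headers 2>/dev/null | awk '$4 != "Running" && $4 != "Completed"'
}

[ -z "$(pending_pvcs)" ]   && echo "all PVCs Bound"   || pending_pvcs
[ -z "$(unhealthy_pods)" ] && echo "all pods healthy" || unhealthy_pods
```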

Default Timezone 

Each vuSmartMaps installation will have a default timezone configured in the About page. By default, this is set to UTC. This time zone serves as the base timezone for the platform and can only be updated by the Admin. The default timezone is used for:

  • The user interface (observability): viewing Alerts, Dashboards, Reports, Log Analytics, and downloading Reports and Dashboards as PDFs with a Global time selector.
  • Scheduling: The timezone for the scheduled time for Alerts and Reports.
  • Distributed channels: Timezone of the content sent via Emails, SMS, WhatsApp, ITSM, etc.

User-specific Timezone

User-specific timezones can also be configured by each user from the Profile page, allowing customization of the timezone settings for individual preferences while the platform-wide operations adhere to the default timezone.

To specify the user-specific timezone, navigate to the User-Specific Timezone icon at the top right, which displays the timezone set by the user in their profile.

You can change this timezone by navigating to the profile section.

Select your desired timezone from the User Specific Timezone dropdown menu, and the system will update to reflect the chosen timezone.

Default Retention Settings

Each vuSmartMaps installation will have default data retention settings available under Platform Settings -> Data Retention -> Hyperscale DataStore.

Update the default settings as per your requirements.

Multi-Node Installation

You must have access to the VMs with the minimal configuration mentioned in the prerequisites below. You should have the SSH credentials/key to be able to SSH to the virtual machines.

In the case of vuSmartMaps multi-node cluster installation, it is recommended to have a separate node apart from all vuSmartMaps nodes where we will run vuLauncher and vuSiteManager.

A dedicated VM is required for SiteManager, with a minimum configuration of 200GB disk space, 16 cores, and 64GB RAM. This VM must have connectivity to all other nodes where vuSmartMaps will be installed. The OS of the VM should be Linux.

License configuration is different for a multi-node installation; please check with [email protected] to get the license according to the installation environment.

A user with passwordless sudo privileges must be present on all the VMs in the case of a multi-node deployment.

Ensure uniform credentials across all VMs, i.e., the same password or private key across all VMs.

💡Note: For a multi-node deployment, ports should be opened internally on all the nodes, including the SiteManager and launcher VMs. Apart from Traefik, other services cannot switch to other ports if there is a conflict with the default ports. So, for this release, the default service ports will be used.

Downloading the binary

  1. Create a folder in the home directory and download the NG installation binary using the command below:
  • wget https://download.vunetsystems.com/_Downloads_/_vuDocker_/vuSmartMaps_NG_2_12_0.tar.gz --user=<username> --password=<password> --no-check-certificate

💡Note: If you don’t have access to the download server, download the binaries directly from this URL

Please check with [email protected] to get the credentials for the Download server.

  2. Extract the tar file using
    • tar -xvzf vuSmartMaps_NG_2_12_0.tar.gz
  3. Once extracted, start the launcher using
    • ./build/launcher_linux_x86_64
  4. Once the launcher has started successfully, access the launcher User Interface from a web browser using the link displayed.

Welcome Page

  • This will be the starting page for Installation.
  • Click on Proceed to install button, to start with the actual NG installation.

Upload License

  1. Here you need to provide a valid license. This license file will contain the services that are going to be installed along with its required resources.
  2. Upload the valid license and click on Continue.

💡Note: Please get the updated license file from [email protected]

In the case of a multi-node deployment, please mention the number of nodes you are using when requesting the license.

Installation Environment

  1. Here, you will be prompted to select your installation environment choice
  2. Select the installation environment as Virtual Machine.
  3. Click the Continue button to proceed further.

Configure VM

  1. Here you need to provide the VM credentials, along with either private-key or password-based authentication details. Ensure that passwordless access is in place for all the VMs.
  2. Under the IP Address section, add the public IP addresses of all the VMs where you want to install vuSmartMaps.
  3. After providing the credentials, the metrics for the VM will be displayed on the right-hand side.
  4. Verify the VM details and click on Continue.

Configure Data Store

For Hyperscale data tier configuration, the following options are available:

  1. Hot: The most frequently used data is stored here, so preferably choose a storage class with high disk IOPS.
  2. Warm: Data that is accessed less frequently is stored in warm disk space, so a default storage class is sufficient.
  3. Cold: Data is stored here for archival purposes, in an S3 bucket (MinIO).
  4. In this step, select the nodes where the Hot, Warm, and Cold mounts are available. Based on this selection, the storage classes will be configured and used for storing data in the Hyperscale database.
  5. Based on the requirements, choose the disk(s) required for the installation and click Continue.

💡Note: HOT Disk should always be selected.

In the case of multi-node installation, select the Data Store configuration accordingly.

Configure Disk

  1. Based on the previous step, 3 storage classes will be configured, one for each type of storage. Accordingly, we need to assign the storage class and encryption setting for each disk.
  2. Along with the disk storage, select the Encrypted option here for the Hyperscale disk settings and click on Continue.

💡Note: Since we have only one mount point, we are going with only the Hot disk configuration. Select Warm as Longhorn-hdd and Cold as Longhorn-archive in case 2 more mount points are added.

Mapping

VM to Service Mapping (with advanced Configuration)

  1. Here you can override the service-to-VM mapping.
  2. By default, the vuLauncher installation script will allocate resources to the available VMs in the best possible way. Click Continue if you're fine with the default allocation.
  3. If the user wishes to override, they can click the ‘Edit’ button. This will prompt them with a list of VMs, where they can increase or decrease the count as needed.
  4. In the Advanced Configuration section, choose the set of VMs for a service. If a VM goes down, then Kubernetes will choose the given set of VMs to bring up this service.  By default, all the nodes are chosen.

Customize

  1. Here users can override the port that the service is running on.
  2. There may be cases where your enterprise requires you to run standard services on non-standard ports. Configure the ports for these services here.
  3. To override, click the edit button of the respective service, and then enter the required port number.
  4. Click Continue to proceed.

Install

  1. This page shows a summary of the information you provided.
  2. You can click the edit button on the details page to move back to the respective section and change the configuration.
  3. You can also click a step name in the stepper window to navigate.
  4. Then click Continue to start the deployment procedure.

💡Note: After starting the deployment, you cannot edit the configuration.

Installation Process

  1. The installation shows each event that is going to be performed.
  2. Users can click Cancel Installation to stop the ongoing installation. Additionally, they can retry if the process is halted or if the installation stops.

💡Note: If for some reason your browser or laptop closes and you lose this page, restart the launcher using ./build/launcher_linux_x86_64 and resume your configuration deployment.

3. Once the installation is successful, a prompt will open. Here, users can click Go to vuSmartMaps, and it will redirect to the vuSmartMaps login page.

4. Use the login credentials displayed here to log in to the UI.

Post Deployment Steps

Once the deployment is successful, follow the steps below on the master node.

  1. To find out which node is the master node, execute the following command on the node where the Kubernetes cluster is running:
    • kubectl get nodes -n vsmaps

In the output, the node whose Role is set to Master is the master node.

  2. Run the command below from the command line to access the kubectl CLI:
    • echo unset KUBECONFIG >> ~/.bash_profile
  3. Run the following command on the master node to list the storage classes:
    • kubectl get sc
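Step 1 above can be scripted to print just the master node's name. A sketch; it assumes the ROLES column contains master or control-plane, and the function name is ours.

```shell
# Print the name of the master/control-plane node (ROLES is column 3 of kubectl get nodes).
master_node() {
  kubectl get nodes --no-headers 2>/dev/null \
    | awk '$3 ~ /master|control-plane/ {print $1}'
}
# Example: run the post-deployment commands on "$(master_node)".
```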

In addition to the above, please verify the following scenarios:

S No. | Description
1 | Sufficient PVC allocation for all the resources
2 | Kafka and ClickHouse replicas and instances, in the case of a multi-node deployment
3 | Post jobs deployed successfully, which includes the below:

  • Default system dashboards
  • Enrichment connector
  • Notification tables under the vusmart database in Hyperscale
  • O11y Sources available in this NG version
  • Agent binaries, vublock templates, and the vustream template should be available in the respective MinIO buckets
  • public, report, hs-archives, and vublock buckets should be created along with the required images and folders

Default Timezone 

Each vuSmartMaps installation will have a default timezone configured in the About page. By default, this is set to UTC. This time zone serves as the base timezone for the platform and can only be updated by the Admin. The default timezone is used for:

  • The user interface (observability): viewing Alerts, Dashboards, Reports, Log Analytics, and downloading Reports and Dashboards as PDFs with a Global time selector.
  • Scheduling: The timezone for the scheduled time for Alerts and Reports.
  • Distributed channels: Timezone of the content sent via Emails, SMS, WhatsApp, ITSM, etc.

User-specific Timezone

User-specific timezones can also be configured by each user from the Profile page, allowing customization of the timezone settings for individual preferences while the platform-wide operations adhere to the default timezone.

To specify the user-specific timezone, navigate to the User-Specific Timezone icon at the top right, which displays the timezone set by the user in their profile.

You can change this timezone by navigating to the profile section.

Select your desired timezone from the User Specific Timezone dropdown menu, and the system will update to reflect the chosen timezone.

Default Retention Settings

Each vuSmartMaps installation will have default data retention settings available under Platform Settings -> Data Retention -> Hyperscale DataStore.

Update the default settings as per your requirements.

Upcoming Enhancements

Anticipate the following improvements in future releases:

  1. Enhanced UI experience.
  2. Log downloading in case of installation failure.
  3. Provision of an executable build instead of a tar file.
  4. Support for different authentication keys for each VM.
  5. Implementation of HTTPS security.
  6. Integration of fasthttp for improved performance.

Further Reading

FAQs

Before deploying vuSmartMaps, ensure your VM has at least 6 cores, 64GB memory, and 200GB disk space. Additionally, you need to create three mount points (data1, data2, data3) and ensure ports are properly configured. For a detailed list of prerequisites, refer to the Prerequisites section.

To install vuSmartMaps on a single node, download the installation binary, extract the tar file, and start the vuLauncher. Access the launcher UI from a web browser. For step-by-step instructions, visit the Single Node Installation section.

During the installation, you can configure three types of storage classes: Hot, Warm, and Cold. Each class serves different purposes based on data access frequency. Detailed instructions can be found in the Configure Data Store section.

For a multi-node cluster installation, you need a dedicated VM for the SiteManager with a minimum configuration of 200GB disk space, 6 cores, and 64GB RAM. Ensure this VM has connectivity to all other nodes. For more details, see the Multi-Node Cluster Installation section.

You need to upload a valid license file that contains the services to be installed and their required resources. If you encounter issues, ensure the license format is correct and retry. For more information, check the Upload License section.

vuSmartMaps supports Ubuntu (20.04, 22.04), RHEL (7.x, 8.x, 9.x), CentOS (7.x, 8.x), Rocky (9.x), and Oracle Linux (7.x, 8.x).

Ensure that specific ports, such as 6443, 2379, and 10250, are open and properly configured on your system. A comprehensive list of required ports and their purposes can be found in the Ports Description section.

After deployment, ensure you run specific commands on the master node to finalize the setup. For detailed post-deployment steps, visit the Post Deployment Steps section.
