Data Streams
Data Streams are an integral part of the data flow process within the vuSmartMaps™ platform. They play a pivotal role in acquiring, processing, and storing data for further analysis. Understanding how data flows from the source to its ultimate destination is essential for efficient monitoring and analysis.
Data Stream pipelines are constructed using blocks, each of which executes a series of plugins. These pipelines are highly versatile, enabling users to structure them in various ways to suit their specific needs. The primary function of Data Stream pipelines is to read data from a data stream, process it, and then send it to another Data Stream. This multi-stage process paves the way for data transformation and enrichment, preparing it for storage and further analysis.
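To make the block-and-plugin idea concrete, here is a minimal Python sketch of a pipeline that reads records from one stream, applies a series of plugins in order, and writes the results to another stream. The plugin names and the run_pipeline helper are invented for illustration and do not reflect the platform's actual plugin API.

```python
# Conceptual sketch only: blocks apply a sequence of plugins to each
# record read from an input stream, then write to an output stream.
# All names here are hypothetical, not the platform's real API.

def parse_plugin(record: dict) -> dict:
    """Example plugin: promote a raw field into a structured one."""
    record["status_code"] = int(record.get("raw_status", 0))
    return record

def enrich_plugin(record: dict) -> dict:
    """Example plugin: attach derived metadata."""
    record["is_error"] = record["status_code"] >= 500
    return record

def run_pipeline(input_stream, output_stream, plugins):
    """Read from one stream, apply each plugin in order, write to another."""
    for record in input_stream:
        for plugin in plugins:
            record = plugin(record)
        output_stream.append(record)

raw_stream = [{"raw_status": "200"}, {"raw_status": "503"}]
processed_stream: list = []
run_pipeline(raw_stream, processed_stream, [parse_plugin, enrich_plugin])
print(processed_stream)
```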
Data is collected from the target system through Observability Sources. It then undergoes a significant transformation during the data processing phase, handled by Data Streams. This phase is organized into distinct sections: I/O Streams, Data Pipeline, and DataStore Connectors, each with a unique role in processing data. Additionally, the Flows tab provides a visual display of the data processing journey.
With Data Streams, users can better comprehend the data flow within the vuSmartMaps platform. This understanding helps optimize system performance, enables proactive monitoring, and makes it easier to extract valuable insights from the data processed within the platform.
In the subsequent sections of this user guide, we will explore Data Streams in more detail, including their configuration, key functionalities, and how they contribute to an enhanced data flow experience.
The Data Streams page can be accessed from the platform's left navigation menu under Data Ingestion > Data Stream.
The Data Streams landing page presents the available configuration options.
The user interface of the Data Stream section is composed of four primary tabs: I/O Streams, Data Pipeline, DataStore Connectors, and Flows. Each tab is designed to facilitate specific actions and configurations, enhancing your ability to harness the full potential of data stream management.
These tabs collectively empower you to configure, manage, and visualize the data flow within your system effectively, facilitating smoother data processing, storage, and analysis. In the upcoming sections, we will delve into each tab’s functionalities to provide a comprehensive understanding of their roles in the data stream management process.
I/O Streams serve as temporary storage units in the data processing journey. Each I/O stream is uniquely named across the entire data stream cluster, ensuring clear and distinct identification for your data.
You can configure the I/O Streams in the following ways:
With these functions, you gain the flexibility to tailor your I/O streams to match your specific data organization preferences, enhancing data management within the Data Streams feature.
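As a mental model for how uniquely named I/O streams behave, the sketch below treats each stream as a named temporary buffer and enforces name uniqueness at creation time. The StreamRegistry class and the stream names are hypothetical, illustrating the uniqueness constraint rather than the platform's actual implementation.

```python
# Illustrative sketch: I/O streams as uniquely named temporary buffers.
# The platform enforces uniqueness across the data stream cluster in
# its own way; this registry is an invented stand-in.
from collections import deque

class StreamRegistry:
    def __init__(self):
        self._streams: dict[str, deque] = {}

    def create(self, name: str) -> deque:
        if name in self._streams:
            raise ValueError(f"I/O stream name already in use: {name}")
        self._streams[name] = deque()
        return self._streams[name]

    def get(self, name: str) -> deque:
        return self._streams[name]

registry = StreamRegistry()
registry.create("raw-transactions")
registry.create("enriched-transactions")
```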
The Data Pipeline plays a pivotal role in converting raw data into a format that holds more significance for the end user. This transformation is achieved through a diverse range of plugins, including enrichment, manipulation, and more. A Data Pipeline reads data from an I/O stream and, after applying these transformations, sends it to another I/O stream.
Data Pipeline offers the following configurations:
With these configurations, Data Pipeline empowers you to efficiently process and enhance your data, making it more valuable and meaningful to the users within the Data Streams feature.
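To illustrate what an enrichment step typically does, the hedged sketch below joins raw records against a lookup table to replace an opaque code with user-meaningful fields. The field names and the lookup table are invented for illustration, not taken from the platform's plugin catalog.

```python
# Sketch of a typical enrichment plugin: join raw records against a
# lookup table to add human-readable attributes. All names invented.

BRANCH_LOOKUP = {
    "BR001": {"branch_name": "Main Street", "region": "North"},
    "BR002": {"branch_name": "Riverside", "region": "South"},
}

def enrich_with_branch(record: dict) -> dict:
    """Augment a record with attributes looked up from its branch code."""
    details = BRANCH_LOOKUP.get(record.get("branch_code"), {})
    return {**record, **details}

raw = {"branch_code": "BR001", "amount": 1500}
print(enrich_with_branch(raw))
# {'branch_code': 'BR001', 'amount': 1500,
#  'branch_name': 'Main Street', 'region': 'North'}
```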
The transformed data residing within data streams finds its way to a permanent storage destination, such as Elasticsearch or MySQL, via the DataStore Connector.
You can configure the DataStore Connectors in the following ways:
With these configurations, DataStore Connectors facilitate the secure and efficient transfer of data from Data Streams to a permanent storage unit, ensuring data integrity and accessibility for end users.
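Conceptually, a DataStore Connector drains processed records from a data stream and writes them to a permanent store such as Elasticsearch or MySQL. The minimal sketch below shows that shape; the DataStore protocol, InMemoryStore stand-in, and run_connector helper are all hypothetical and are not the platform's actual connector API.

```python
# Conceptual sketch of a DataStore Connector: move records from a
# data stream into permanent storage. Interface names are invented.

from typing import Iterable, Protocol

class DataStore(Protocol):
    def write(self, record: dict) -> None: ...

class InMemoryStore:
    """Stand-in for a real store like Elasticsearch or MySQL."""
    def __init__(self):
        self.rows: list[dict] = []

    def write(self, record: dict) -> None:
        self.rows.append(record)

def run_connector(stream: Iterable[dict], store: DataStore) -> None:
    """Drain every record from the stream into permanent storage."""
    for record in stream:
        store.write(record)

store = InMemoryStore()
run_connector([{"txn_id": 1, "is_error": False}], store)
print(store.rows)
```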
Flows serve as a dynamic visual representation, elucidating the intricate data flow within the system. They vividly illustrate the path data follows, originating from the source, traversing through collection agents, and concluding in permanent storage.
Flows empower you to visualize, adapt, and optimize the data’s journey from its origin to permanent storage, enhancing your understanding and control over the data processing pipeline.