Adding a New Data Source

Before you can start creating dashboards, you need to add your desired data source. Follow these steps:

  1. Click on the Add new data source button on the data sources landing page.
  2. In the search dialog, enter the name of the data source you want to add, or filter the list by data source type to narrow down the options.
  3. Click on the desired data source and configure it by following the instructions specific to that data source.

💡 Note: For specific instructions on configuring ElasticSearch, PostgreSQL, or HyperScale, refer to the respective sections of the user guide dedicated to each data source.

ElasticSearch

Elasticsearch serves as a powerful search and analytics engine, offering versatile query capabilities for visualizing logs and metrics stored in Elasticsearch. vuSmartMaps seamlessly integrates with Elasticsearch, providing built-in support for efficient data retrieval and visualization.

The following details the configuration options for the Elasticsearch data source in vuSmartMaps:

  1. Name – The data source name used for reference in panels and queries (e.g., elastic-1, elasticsearch_metrics).
    1. Default – Toggle to select as the default data source option. This becomes the default selected data source when creating panels or using Explore.
  2. URL – The URL of your Elasticsearch server (a connectivity sketch follows this list).
    1. For a local server, use http://localhost:9200.
    2. For a server within a network, specify the URL with the port where Elasticsearch is running (e.g., http://elasticsearch.example.org:9200).
  3. Authentication – Select one of the authentication methods available in the Authentication section.
  4. HTTP headers – Click + Add header to add one or more HTTP headers. HTTP headers pass additional context and metadata about the request/response.
    1. Header – Add a custom header. This allows custom headers to be passed based on the needs of your Elasticsearch instance.
    2. Value – The value of the header.
  5. Elasticsearch details – Specific settings for the Elasticsearch data source:
    1. Index name – Specify the index to query, using a time pattern (e.g., YYYY.MM.DD) or a wildcard (see the pattern-expansion sketch after this list).
      Here are some examples:
      1. Time Pattern Example (YYYY.MM.DD):
        If your indices follow a daily pattern, such as logs-2023.01.01, you can set the index name to logs-YYYY.MM.DD. This configuration tells vuSmartMaps to recognize the date in the index name and query only the indices that fall within the selected time range.
      2. Wildcard Example:
        If your indices have a consistent prefix followed by a date or version, you can use a wildcard. For instance, if your indices are named logs-v1, logs-v2, and so on, you can set the index name to logs-*. This allows vuSmartMaps to query all indices with the specified prefix.
    2. Pattern – Select the matching pattern for your index name (options include: no pattern, hourly, daily, weekly, monthly, or yearly).
    3. Time field name – Name of the time field (default: @timestamp).
    4. Max concurrent shard requests – Sets the number of shards queried simultaneously (default: 5).
    5. Min time interval – Defines a lower limit for the auto group-by time interval.
    6. X-Pack enabled – Toggle to enable X-Pack-specific features and options, providing additional aggregations such as Rate and Top Metrics in the query editor.
  6. Logs – Configure fields for log messages and log levels:
    1. Message field name – Name of the field containing the log message.
    2. Level field name – Name of the field with log level/severity information.
      1. When a level label is specified, its value determines the log level and the color of each log line. If the log doesn’t have a level label, vuSmartMaps checks whether its content matches any of the supported expressions; the first match determines the log level. If no log level can be inferred, the line is visualized with an unknown log level.
  7. Data links – Data links create a link from a specified field that can be accessed in Explore’s logs view. You can add multiple data links by clicking + Add. Each data link configuration consists of:
    1. Field – Sets the name of the field used by the data link.
    2. URL/query – Sets the full link URL if the link is external. If the link is internal, this input serves as a query for the target data source.
    3. URL Label (Optional) – Sets a custom display label for the link. The link label defaults to the full external URL or name of the linked internal data source and is overridden by this setting.
    4. Internal link – Toggle on to set an internal link. For an internal link, you can select the target data source with a data source selector. This supports only tracing data sources.
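
Before saving the data source, you can verify that the URL, authentication, and custom headers described above actually reach your cluster. The following is a minimal sketch using Python’s requests library; the host, credentials, and header are illustrative placeholders, not values prescribed by vuSmartMaps.

```python
import requests

ES_URL = "http://localhost:9200"          # same value you would enter in the URL field
AUTH = ("elastic", "changeme")            # placeholder Basic Auth credentials
HEADERS = {"X-Tenant": "team-a"}          # placeholder custom HTTP header

resp = requests.get(ES_URL, auth=AUTH, headers=HEADERS, timeout=5)
resp.raise_for_status()

# The root endpoint reports the cluster name and Elasticsearch version,
# which confirms the URL, credentials, and headers are accepted.
info = resp.json()
print(info["cluster_name"], info["version"]["number"])
```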
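
The index name examples above describe a naming convention rather than an API. As a purely illustrative sketch, the snippet below shows how a daily pattern such as logs-YYYY.MM.DD expands into the concrete index names covered by a query window; vuSmartMaps performs this expansion internally.

```python
from datetime import date, timedelta

def daily_indices(prefix: str, start: date, end: date) -> list[str]:
    """Expand a daily time pattern (logs-YYYY.MM.DD) into one index name per day."""
    days = (end - start).days
    return [f"{prefix}{(start + timedelta(days=d)).strftime('%Y.%m.%d')}"
            for d in range(days + 1)]

print(daily_indices("logs-", date(2023, 1, 1), date(2023, 1, 3)))
# ['logs-2023.01.01', 'logs-2023.01.02', 'logs-2023.01.03']
# A wildcard index name such as logs-* would instead match every index
# starting with logs- (logs-v1, logs-v2, ...) in a single query.
```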

PostgreSQL

vuSmartMaps comes with a built-in PostgreSQL data source plugin that empowers you to query and visualize data from a PostgreSQL-compatible database. PostgreSQL, known for its reliability and extensibility, is a robust choice for applications requiring structured data storage.

Below are the detailed configuration options for the PostgreSQL data source in vuSmartMaps:

  1. Name – The data source name used for reference in panels and queries.
    1. Default – Toggle to select as the default data source option. This becomes the default selected data source when creating panels or using Explore.
  2. PostgreSQL Connection – Configure connection details (a connection sketch follows this list):
    1. Host – IP address/hostname and optional port of your PostgreSQL instance (do not include the database name).
    2. Database – Name of your PostgreSQL database.
    3. User – Database user’s login/username.
    4. Password – Database user’s password.
    5. TLS/SSL Mode – Determines the priority of negotiating a secure TLS/SSL TCP/IP connection with the server.
  3. Connection limits – Set limits for connections:
    1. Max open – The maximum number of open connections to the database (default: 100).
    2. Max idle – The maximum number of connections in the idle connection pool (default: 100).
      1. Auto (max idle) – If enabled, the maximum number of idle connections is set to the maximum number of open connections (default: true).
    3. Max lifetime – The maximum amount of time in seconds a connection may be reused (default: 14400, i.e., 4 hours).
  4. PostgreSQL details – Additional details for PostgreSQL:
    1. Version – Determines available functions in the query builder.
    2. TimescaleDB – A time-series database built as a PostgreSQL extension. When enabled, vuSmartMaps displays TimescaleDB-specific aggregate functions in the query builder.
    3. Min time interval – A lower limit for the auto group-by time interval; recommended to be set to your write frequency (e.g., 1m if data is written every minute).
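
To confirm the connection details above before saving the data source, you can open a connection with the same Host, Database, User, Password, and TLS/SSL Mode values. The sketch below uses the psycopg2 driver; every connection value shown is a placeholder.

```python
import psycopg2  # assumes the psycopg2 (or psycopg2-binary) package is installed

conn = psycopg2.connect(
    host="postgres.example.org",   # Host field (hostname/IP only, no database name)
    port=5432,                     # optional port
    dbname="metrics",              # Database field
    user="vusmart_reader",         # User field
    password="secret",             # Password field
    sslmode="require",             # TLS/SSL Mode: disable, require, verify-ca, verify-full, ...
    connect_timeout=5,
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])       # e.g. "PostgreSQL 15.4 ..."
conn.close()
```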

HyperScale

vuSmartMaps comes with a built-in HyperScale data source plugin that provides robust support for HyperScale as a backend database. HyperScale is a columnar data store known for high-performance analytics on extensive datasets; its columnar storage and parallel processing capabilities make it a compelling choice for real-time analytics and reporting.

Below are detailed configuration options to set up the HyperScale data store seamlessly:

  1. Name – The data source name used for reference in panels and queries.
    • Default – Toggle to select as the default data source option. This becomes the default selected data source when creating panels or using Explore.
  2. Server – Connection details for the HyperScale DS server (an HTTP query sketch follows this list):
    • Server Address – The address of the HyperScale DS host.
    • Server Port – HyperScale DS server port (8123 for the HTTP protocol, 9000 for the native protocol).
    • Protocol – The server protocol, either Native or HTTP.
    • Secure Connection – Toggle on if the connection is secure.
    • HTTP URL Path – Additional URL path for HTTP requests.
  3. HTTP headers – Click + Add header to add one or more HTTP headers. HTTP headers pass additional context and metadata about the request/response.
    • Header – Add a custom header. This allows custom headers to be passed based on the needs of your HyperScale DS instance.
    • Value – The value of the header.
  4. TLS/SSL Settings
    • Skip TLS Verify – Toggle on to skip verification of the server’s TLS certificate.
    • TLS Client Auth – Toggle on to authenticate with a client certificate.
      1. Client Cert – Provide the client certificate.
      2. Client Key – Provide the RSA private key.
    • With CA Cert – Toggle on to validate the server certificate against a custom CA certificate.
      1. CA Cert – Provide the CA certificate.
  5. Credentials
    • Username – HyperScale DS username.
    • Password – HyperScale DS password.
  6. Additional settings – Optional settings that give you more control over your data source, including the default database, dial and query timeouts, SQL validation, and custom HyperScale DS settings.
    • Default DB and table
      1. Default database – The default database used by the query builder.
      2. Default table – The default table used by the query builder.
    • Query settings
      1. Dial Timeout (seconds) – Timeout in seconds for the connection.
      2. Query Timeout (seconds) – Timeout in seconds for read queries.
      3. Validate SQL – Validate SQL in the editor.
    • Logs configuration – (Optional) Default settings for log queries.
      1. Default log database – The default database used by the logs query builder.
      2. Default log table – The default table used by the logs query builder.
      3. Default columns – Default columns for log queries. Leave empty to disable.
        • Use OTel – Enables OpenTelemetry schema versioning.
        • Time column – Column for the log timestamp
        • Log Level column – Column for the log level
        • Log Message column – Column for log message
    • Traces configuration – (Optional) Default settings for trace queries.
      1. Default trace database – The default database used by the trace query builder.
      2. Default trace table – The default table used by the trace query builder.
      3. Default columns – Default columns for trace queries. Leave empty to disable.
        • Use OTel – Enables OpenTelemetry schema versioning.
        • Trace ID column – Column for the Trace ID
        • Span ID column – Column for the Span ID
        • Operation Name column – Column for the Operation Name
        • Parent Span ID column – Column for the Parent Span ID
        • Service Name column – Column for the Service Name
        • Duration Time column – Column for the duration time
        • Duration Unit – The unit of time used for duration time
        • Start Time column – Column for the start time
        • Tags column – Column for the trace tags
        • Service Tags column – Column for the service tags
    • Custom Settings – Click the + Add custom setting button and add Setting and Value.
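
To confirm the Server, Credentials, and TLS/SSL settings above before saving the data source, you can send a trivial query over the HTTP protocol. The sketch below assumes HyperScale DS accepts ClickHouse-style HTTP query parameters on port 8123 (an assumption based on the default ports listed above, not something this guide states); every host, certificate path, and credential shown is a placeholder.

```python
import requests

BASE_URL = "https://hyperscale.example.org:8123"   # Server Address + Server Port, Secure Connection on

resp = requests.post(
    BASE_URL,
    params={"database": "default", "query": "SELECT 1"},  # default database + a trivial query
    auth=("vusmart", "secret"),                            # Credentials: Username / Password
    cert=("client.crt", "client.key"),                     # TLS Client Auth: Client Cert / Client Key
    verify="ca.crt",                                       # With CA Cert (use False to mirror Skip TLS Verify)
    timeout=10,                                            # comparable to the Query Timeout setting
)
resp.raise_for_status()
print(resp.text.strip())                                   # expected output: 1
```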
