A demo video on how to build a pipeline in 10 minutes and get started.

Real-time enrichment of streaming data with lookups into static stores.

Advanced features such as pipeline integration, explained through use-case demos.

Quick Start

A quick tutorial on how to set up a workspace, create your first data pipeline for a sample use case, and monitor the running application.

Data Pipeline

Data Pipeline allows you to easily create a data pipeline using built-in operators in a drag-and-drop editor.

Pipeline Management

StreamAnalytix provides tools, such as Import/Export, Pipeline Integration, Versioning, Data Lineage, Pipeline Inspect, and Error Mining, to manage application pipelines at every stage of the application lifecycle.

Landing Dashboards

Landing Dashboards provide details about Pipelines, Metrics, SAX Web Health, Connections, Alerts, and License Summary.


Emitters

Emitters define the destination stage of a pipeline, which can be a NoSQL store, an indexer, a relational database, or a third-party BI tool.


Channels

Data access in StreamAnalytix is handled by Channels: built-in drag-and-drop operators that consume data from various data sources such as message queues, transactional databases, log files, and sensors for IoT data.


Processors

Processors are the built-in operators for processing streaming data by performing various transformation and analytical operations.

Know the Super User

The Super User can perform functions that are not available to Admin or Normal users. Only the Super User can manage Workspaces, start/stop the default Sub-Systems, and upload or upgrade Licenses.

User Roles

User Roles determine the level of permissions assigned to a user to perform a group of tasks.

Configure Alerts

Alerts notify you as soon as an unwanted activity takes place within the system, or every time the given criteria for an alert are satisfied. You can create rule-based alerts with your own criteria at run-time.
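As an illustration only (this is plain Python, not the StreamAnalytix API), a rule-based alert can be thought of as a predicate evaluated against each message; the field name and threshold below are hypothetical:

```python
# Sketch of a rule-based alert: the alert fires for every message
# that satisfies the rule's criteria.

def make_alert(field, op, threshold):
    """Build a predicate for a simple threshold rule, e.g. temperature > 90."""
    ops = {
        ">": lambda a, b: a > b,
        "<": lambda a, b: a < b,
        "==": lambda a, b: a == b,
    }
    return lambda msg: ops[op](msg[field], threshold)

# Hypothetical rule: alert when temperature exceeds 90.
high_temp = make_alert("temperature", ">", 90)

messages = [{"temperature": 72}, {"temperature": 95}]
fired = [m for m in messages if high_temp(m)]  # messages that trigger the alert
```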


Complex Event Processing

Complex Event Processing (CEP) is used in Operational Intelligence (OI) solutions to provide insight into business operations by running query analysis against live feeds and event data.

Persistence & Indexing

Persistence allows you to persist the data in any NoSQL store like HBase or Cassandra, and Indexing allows you to index it in Solr or Elasticsearch.

Streaming Configuration and Filtering

StreamAnalytix provides the functionality to process multiple streams of data in parallel and filter the message data at run-time.
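Conceptually (in plain Python, not the product's operators), per-stream run-time filtering amounts to applying a predicate to each message of each logical stream; the stream names and fields below are made up:

```python
# Two hypothetical logical streams with their own messages.
streams = {
    "orders": [{"amount": 10}, {"amount": 250}],
    "clicks": [{"page": "/home"}, {"page": "/checkout"}],
}

# Run-time filter criteria, one predicate per stream.
filters = {
    "orders": lambda m: m["amount"] > 100,        # keep only large orders
    "clicks": lambda m: m["page"] == "/checkout", # keep only checkout clicks
}

# Apply each stream's filter to its messages.
filtered = {
    name: [m for m in msgs if filters[name](m)]
    for name, msgs in streams.items()
}
```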

Register Entities

Register Entities allows you to register custom components, i.e., custom parsers, channels, and processors, to be used in pipelines.

Pipeline Inspect

Pipeline inspection is a mechanism through which the processing components of a pipeline can be investigated.

Data Lineage

Data Lineage provides a complete audit trail of the data from its source to its destination.

Scope Variables

Variables allow you to create and use variables in data pipelines according to their scope. If you have a running pipeline and want to update a variable's value at runtime, you can edit it and continue.


Transformation Variables

A transformation rule defines how the value of a field in a message should be transformed before it is used in a predictive model. A transformation variable is associated with a message and can be defined on any field in the message. Once defined, it can be used like any other message field when defining the model.
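As a minimal sketch (the field name and the log transform are illustrative, not taken from the product), a transformation rule can be viewed as a function that derives a model-ready value from a raw message field:

```python
import math

def log_amount(message):
    """Hypothetical transformation rule: log-scale the raw 'amount' field.

    The result behaves like any other message field when building the model.
    """
    return math.log1p(message["amount"])  # log(1 + amount), stable near zero

message = {"amount": 99.0}
feature = log_amount(message)  # usable in the predictive model like a field
```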

Pipeline Versioning

Versioning of Pipelines enables you to create different versions of the same pipeline and roll back to a previous version for testing purposes.


Monitoring

StreamAnalytix allows you to monitor each component's performance graphically and define alerts based on graph values. You can monitor both Application and System data.

RT Dashboards

Dashboards provide a powerful means to explore and analyze real-time data using charts and graphs. A dashboard displays the status of metrics and key performance indicators for a pipeline. Through a real-time (RT) dashboard, you can bring everything you need to keep track of, however disparate, onto a single screen.

Dynamic CEP

Dynamic CEP allows registration of queries with pre-defined actions ("PUBLISH_TO_RABBITMQ", "INVOKE_WEBSERVICE_CALL", and "CUSTOM_ACTION") applied to the running pipeline.
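To make the idea concrete, registering a query with an action can be sketched as below. The action names come from the list above; the registry and function are hypothetical stand-ins, not the actual API:

```python
# Actions named in the documentation above.
ALLOWED_ACTIONS = {"PUBLISH_TO_RABBITMQ", "INVOKE_WEBSERVICE_CALL", "CUSTOM_ACTION"}

registered = []  # (query, action) pairs applied to the running pipeline

def register_query(query, action):
    """Hypothetical registration call: pair a CEP query with one action."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {action}")
    registered.append((query, action))

register_query(
    "SELECT * FROM events WHERE level = 'ERROR'",
    "INVOKE_WEBSERVICE_CALL",
)
```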

Read Data from File

Log Agent reads file data from remote data sources and ingests it into Kafka or RabbitMQ channels.

Custom Processor

Custom Processor allows you to implement your own logic in a pipeline. You can write custom code to ingest data from any data source and build it as a custom channel, then use it in your pipelines or even share it with other workspace users.
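A custom processor's logic can be sketched as a class with a per-message method; the interface below is a guess at the general shape, not the product's actual contract, and the email-masking logic is a made-up example:

```python
class MaskEmailProcessor:
    """Hypothetical custom processor: redact the 'email' field in each message
    before it reaches downstream pipeline stages."""

    def process(self, message):
        out = dict(message)  # never mutate the incoming message in place
        if "email" in out:
            user, _, domain = out["email"].partition("@")
            out["email"] = user[0] + "***@" + domain
        return out

processor = MaskEmailProcessor()
result = processor.process({"email": "alice@example.com", "id": 1})
```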

More on Data Types

While configuring a new message, you need to define its Message Parser Type. Supported parser types are Delimited, JSON, AVRO, Regular Expression, and Custom.
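To illustrate two of these parser types, Delimited and JSON, here is what parsing the same record looks like using only the Python standard library (the product's own parsers are configured in the UI, not written like this):

```python
import csv
import io
import json

# Delimited: a comma-separated record split into its fields.
delimited_record = "42,alice,NY"
fields = next(csv.reader(io.StringIO(delimited_record)))

# JSON: the same record as a JSON object, parsed into a dict.
json_record = '{"id": 42, "name": "alice", "state": "NY"}'
parsed = json.loads(json_record)
```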