Designer offers an intuitive collection of components that mask the complexity of big data application development.


What Designer Does

It enables application developers to build and test data processing applications using a pre-built set of components.

The Designer provides the following features for quick and easy application development.



Data Processing Components

No matter where or how the data is stored on Hadoop, the components can consume data from any Hadoop ecosystem source such as HIVE, HDFS, HBASE, or PARQUET, or from external RDBMS, NoSQL, or cloud systems, with a simple drag-and-drop approach. This makes it ideal for Extract-Transform-Load (ETL) data pipelines and iterative data processing.
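The extract-transform-load flow described above can be pictured as a chain of stages, each corresponding to one drag-and-drop component. The following is a minimal sketch in plain Python; the data, function names, and filter logic are illustrative stand-ins, not hTRUNK APIs:

```python
# Minimal ETL sketch: each function mirrors one pipeline component.
# The records and helper names here are hypothetical examples.

def extract():
    # Stand-in for a source component (e.g. a HIVE or RDBMS reader).
    return [
        {"id": 1, "amount": "120.50", "country": "DE"},
        {"id": 2, "amount": "75.00",  "country": "US"},
        {"id": 3, "amount": "310.25", "country": "DE"},
    ]

def transform(rows):
    # Stand-in for a transformation component: cast types, filter rows.
    return [
        {**r, "amount": float(r["amount"])}
        for r in rows
        if r["country"] == "DE"
    ]

def load(rows):
    # Stand-in for a sink component (e.g. a PARQUET or RDBMS writer).
    return {"written": len(rows), "total": sum(r["amount"] for r in rows)}

result = load(transform(extract()))
```

Chaining the stages as plain function calls keeps each step independently testable, which is the same property the component model gives a visual pipeline.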

The data can be processed in-memory on the fly without being stored on Hadoop, and the Designer also provides components to integrate data between Hadoop and any external system.

The Designer provides components to convert unstructured data sources such as Word documents, PDFs, ODFs, RTFs, and log files into a structured format that can be used for further processing.
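For the log-file case, the unstructured-to-structured conversion amounts to parsing raw lines into named fields. A minimal sketch, assuming a simple timestamp/level/message line format (the pattern and sample lines are hypothetical):

```python
import re

# Sketch: turn raw log lines (unstructured text) into structured
# records with named fields, ready for downstream processing.
LOG_PATTERN = re.compile(
    r"(?P<ts>\S+ \S+) (?P<level>[A-Z]+) (?P<message>.*)"
)

raw_logs = [
    "2018-03-14 14:03:43 ERROR disk quota exceeded",
    "2018-03-14 14:03:44 INFO job started",
]

structured = [LOG_PATTERN.match(line).groupdict() for line in raw_logs]
```

Once each line is a dictionary of named fields, it can flow through the same transformation components as data that was structured to begin with.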

Data Transfer Components

Moves a subset or all of the data between external systems, EDWs, and Hadoop to optimize the cost-effectiveness of combined data storage and processing.

Pre-built components perform CRUD operations on any database. The components enable high-performance data transfer to leverage Apache Hadoop capabilities and complement existing systems without any changes to the existing architecture.
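The CRUD operations such database components perform can be sketched with standard SQL; here SQLite serves as a stand-in for whatever RDBMS a component would connect to (table and column names are illustrative):

```python
import sqlite3

# Sketch of the four CRUD operations a database component performs,
# using an in-memory SQLite database as a stand-in for any RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")

# Create
conn.execute("INSERT INTO accounts (id, balance) VALUES (?, ?)", (1, 100.0))
# Read
balance = conn.execute(
    "SELECT balance FROM accounts WHERE id = 1"
).fetchone()[0]
# Update
conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 1")
# Delete
conn.execute("DELETE FROM accounts WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
```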




The Designer provides a quick and easy way to create and implement business logic and to define the data flow. It offers a simple way to manage private and shared repositories for powerful, conflict-free application development. Designer includes a version control system to efficiently track project history over time and to collaborate easily with a co-located team or a community of distributed developers.


Job Execution

The Designer enables quick, one-click job execution on a smaller data set as part of development or application testing. This enables the team to test and validate before the code is deployed on a fully distributed environment. The quick execution runs the job locally, simulating distributed execution.
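Run-on-a-sample testing of this kind can be pictured as applying the same job logic to a small slice of the input before it ever touches the cluster. A minimal sketch; the job function, data, and sample size are assumptions for illustration:

```python
# Illustrative sketch of "run locally on a sample": the identical
# job function is exercised on a small local slice of the input
# before being deployed against the full distributed data set.

def job(records):
    # A trivial stand-in job: count records per key.
    counts = {}
    for r in records:
        counts[r["key"]] = counts.get(r["key"], 0) + 1
    return counts

full_data = [{"key": "a"}] * 1000 + [{"key": "b"}] * 500

def run_local(job_fn, data, sample_size=10):
    # Simulate the distributed run on a small local sample.
    return job_fn(data[:sample_size])

preview = run_local(job, full_data)
```

Because the same `job` function runs in both modes, a bug caught in the local preview is a bug avoided in the cluster run.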



Hadoop’s “Schema on Read” architecture makes a Hadoop cluster a perfect storage platform for heterogeneous data (both structured and unstructured), but that flexibility adds complexity for consumers of the data. Designer presents a relational view of the data that can be used across the hTRUNK™ components.
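The schema-on-read idea, and the relational view layered on top of it, can be sketched as follows: raw records are stored as-is, and a fixed set of columns is projected onto them only at read time, with missing fields surfacing as nulls. The column names and sample records here are hypothetical:

```python
import json

# Sketch of "schema on read": heterogeneous records are stored as
# raw JSON strings and projected onto a relational schema only when
# read; fields absent from a record come back as None (SQL NULL).
raw = [
    '{"id": 1, "name": "alice", "city": "Berlin"}',
    '{"id": 2, "name": "bob"}',  # no "city" field in this record
]

COLUMNS = ("id", "name", "city")

def relational_view(lines):
    # Apply the schema at read time, one row tuple per raw record.
    for line in lines:
        record = json.loads(line)
        yield tuple(record.get(col) for col in COLUMNS)

rows = list(relational_view(raw))
```

The storage layer never had to agree on a schema; the uniform, table-like rows exist only in the view, which is what lets downstream components treat heterogeneous data relationally.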