IBM Cognos Data Manager


Pentaho Data Integration (PDI), a.k.a. Kettle, is one of the most popular open source business intelligence data integration products available for working with analytical databases. It takes a metadata-driven approach and has a strong, very easy-to-use GUI. Pentaho has a community of 13,500 registered users. Informatica PowerCenter is the market share leader among commercial data integration suites; the company's sole focus is data integration. It is expensive and requires training, but it is considered very fast. IBM Cognos Data Manager creates data warehouses and data repositories for reporting, analysis, and performance management. Data Manager can transfer data into single database tables and dimension tables. This paper explores some of the facets of each tool to help the reader understand the differences an ETL tool can offer. Extraction, transformation, and load tools are among the most powerful data integration technologies in use today.


Introduction

Objectives of the paper: The objective of this paper is to inform the reader about three different Extraction, Transformation and Load (ETL) tools. Using the guidelines provided by the Center for Data Insight and Dr. Kweku Bryson-Osei, it provides a tool evaluation and a discovery of the differences among the three tools presented:

Pentaho Data Integration (Kettle)

IBM Cognos Data Manager

Informatica PowerCenter

Limitations of the paper: The paper is limited by the lack of a reference scenario to guide the selection process. Normally, platforms, architectures, and business use cases all help determine the weights of the criteria most important to an organization's choice of tool; in that sense, a definitive evaluation cannot be made here. Also, a paper is strictly limited to what the reader can determine through the written word, and, unfortunately, most of what is being critiqued is without real hands-on experience.

Overview of ETL Process:

The ETL process comprises the extract, transform, and load functions used to read data from one or more source systems and move it into the data warehouse. Extraction reads the data from the source; transformation converts the data to the data warehouse format, cleanses it, and transports it to the target system for further processing; loading then writes the data to the target database.
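To make the three stages concrete, here is a minimal, generic sketch in Java using plain JDBC. It is not the implementation used by any of the tools discussed below; the connection URLs, credentials, and table names (orders, fact_orders) are hypothetical.

```java
import java.sql.*;

/**
 * A minimal, generic ETL sketch: extract rows from a source table,
 * apply a simple cleansing transformation, and load the result into
 * a warehouse table. Connections and table names are hypothetical.
 */
public class SimpleEtl {
    public static void main(String[] args) throws SQLException {
        try (Connection src = DriverManager.getConnection("jdbc:postgresql://src-host/sales", "etl", "secret");
             Connection dwh = DriverManager.getConnection("jdbc:postgresql://dwh-host/warehouse", "etl", "secret")) {

            dwh.setAutoCommit(false);  // load in a single transaction

            try (Statement extract = src.createStatement();
                 ResultSet rows = extract.executeQuery(
                         "SELECT customer_id, customer_name, amount FROM orders");
                 PreparedStatement load = dwh.prepareStatement(
                         "INSERT INTO fact_orders (customer_id, customer_name, amount) VALUES (?, ?, ?)")) {

                while (rows.next()) {
                    // Transform: trim and upper-case the name; getDouble() maps NULL amounts to 0.
                    String name = rows.getString("customer_name");
                    name = (name == null) ? "UNKNOWN" : name.trim().toUpperCase();

                    load.setInt(1, rows.getInt("customer_id"));
                    load.setString(2, name);
                    load.setDouble(3, rows.getDouble("amount"));
                    load.addBatch();          // batch rows for efficient loading
                }
                load.executeBatch();
                dwh.commit();
            }
        }
    }
}
```

Real ETL tools wrap this same extract-transform-load loop in reusable, metadata-driven components rather than hand-written code.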

Evaluation Criteria Definitions:

Connectivity/Adapter – refers to a program or device's ability to link with other programs and devices.

Data Profiling – technology for discovering and investigating data quality issues, such as duplication, lack of consistency, and lack of accuracy and completeness. This is accomplished by analyzing one or multiple data sources and collecting metadata that shows the condition of the data and enables the data steward to investigate the origin of data errors. (Gartner, 2012)

Data Cleaning – the process of detecting and correcting or removing corrupt or inaccurate records from a record set, table, or database. (Wikipedia, 2013)

Data Transformation – converts data from a source data format into the format of a destination data target.

Data Load – to copy data into the memory or storage of a computing device so that it can later be used for processing. (Gartner, 2012)

Real-Time ETL – describes a system that responds to an external event within a short and predictable time frame.

Parallel Processing – the solution of a single problem across more than one processor. Little parallel processing is done today outside of research laboratories, because it is difficult to decompose tasks into independent parts, and the compiler technology does not yet exist that will extensively parallelize application code. (Gartner, 2012)

Platform Variety – an underlying computer system on which application programs can run, or any base of technologies on which other technologies or processes are built.

Data Size Scalability – the measure of a system's ability to increase or decrease in performance and cost in response to changes in application and system processing demands. (Gartner, 2012)

Efficiency – the ratio of actual operating time to scheduled operating time of a computer. (Farlex, 2003)

Robustness – a system that does not break down easily or is not wholly affected by a single application failure. (Webopedia, 2013)

Metadata Repository – physically stores and catalogues metadata. The metadata that is stored should be generic, integrated, current, and historical. Metadata repositories used to be referred to as a data dictionary. (Marco, 2004)

Data Lineage – a search that seeks to identify the tables, columns, and transformations that have an impact on a selected table or column. (SAS Data Integration, 2013)

Impact Analysis Report – a technique designed to analyze the "unexpected" negative effects of a change on a system.

User-Friendly Features – anything that makes it easier for novices to use a computer or application, e.g. GUIs.

Design Architecture – the overall design of a computing system and the logical and physical interrelationships between its components. The architecture specifies the hardware, software, access methods and protocols used throughout the system. (Gartner, 2012)

Error Report and Recovery – a list produced by a computer showing the error conditions, such as overflows and errors resulting from incorrect or unmatched data, that are generated during program execution.

Scheduling – the method by which threads, processes or data flows are given access to system resources. This is usually done to load balance a system effectively or achieve a target quality of service. The need for a scheduling algorithm arises from the requirement of most modern systems to perform multitasking (execute more than one process at a time) and multiplexing (transmit multiple flows simultaneously). (Wikipedia, 2013)

Reports – formatted results of database queries that contain useful data for decision making and analysis. (Janalta Interactive Inc., 2010)

Security – a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible. (Janalta Interactive Inc., 2010)

Pentaho Data Integration

ETL Functionality

Pentaho Data Integration (PDI, also called Kettle) is the component of Pentaho responsible for the Extract, Transform and Load (ETL) processes. Pentaho is cost-effective open source software, meaning it is distributed under a license that gives you the right to use and modify its source code freely.

Pentaho Data Integration Suite, or PDI, can be used as a standalone application or as part of the larger Pentaho Suite. As an ETL tool, it is the most popular open source tool available. PDI supports many input and output formats, including text files, data sheets, and database engines. The transformation capabilities of PDI allow you to manipulate data with very few limitations. Spoon is the graphical transformation and job designer associated with the Pentaho Data Integration suite, also known as the Kettle project. Kettle is an acronym for "Kettle E.T.T.L. Environment"; it is designed for ETTL needs, which include the Extraction, Transformation, Transportation and Loading of data. Spoon is a graphical user interface that allows you to design transformations and jobs that can be run with the Kettle tools Pan and Kitchen. Pan is a data transformation engine that performs functions such as reading, manipulating, and writing data to and from various data sources. Kitchen is a program that executes jobs designed in Spoon and stored as XML or in a database repository. (Baker, 2011)

Pentaho's connectivity is modern and extensible: the tool is 100% Java for cross-platform deployment, with a pluggable architecture for adding connectors, transformations, and user-defined expressions, allowing integration of all data in one platform with maximum scalability. It has connectivity to 40 databases via native JDBC, as well as to flat files, XML files, Excel files, and web services. (Pentaho Corporation, 2010-2013)

Data Cleaner is fully integrated within Pentaho Kettle / PDI and you can profile your data directly within Spoon.

The primary focus of Data Cleaner is analysis; during such analysis you will often find yourself actually improving the data by applying transformers and filters to it. Data transformation is handled by transformers, which are components used to modify the data before analyzing it. Sometimes it is necessary to extract parts of a value or combine two values to correctly get an idea about a particular measure. (Scribd, 2013)

Transformations and Jobs can describe themselves using an XML file or can be put in a Kettle database repository. Pan or Kitchen can then read the data to execute the steps described in the transformation or to run the job. In summary, Pentaho Data Integration makes data warehouses easier to build, update, and maintain.

A Pentaho Data Integration Transformation can be executed in many ways:

From the "Spoon" GUI ETL interface by running locally

Published to the ETL server "Carte" and run remotely from Spoon

From command line processes "Kitchen" and "Pan"

From the Pentaho BI Server via Pentaho Action Sequence (.xaction file) (Reference Documentation 3.1.2., 2008-2011)
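A transformation can also be embedded in a custom Java program through the Kettle API. The sketch below is based on the commonly documented embedding pattern for the org.pentaho.di classes (Kettle 4.x/5.x era); exact class names and signatures may differ between PDI versions, and the file name sample.ktr is hypothetical.

```java
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

/** Runs a Kettle transformation (.ktr) from Java, outside of Spoon or Pan. */
public class RunTransformation {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();                       // initialize plugins, logging, etc.
        TransMeta meta = new TransMeta("sample.ktr");   // parse the transformation definition
        Trans trans = new Trans(meta);
        trans.execute(null);                            // no extra command-line parameters
        trans.waitUntilFinished();                      // block until all steps complete
        if (trans.getErrors() > 0) {
            throw new RuntimeException("Transformation finished with errors");
        }
    }
}
```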

In addition to using JDBC and SQL, PDI can source data in real time from SOAP (WSDL) and REST (HTTP) based web services through its data integration layer. For example, an XML document or a single line of data can be sent to the Java Message Service; the platform then calls PDI, which performs the transformation and inserts the result into the database. (Pentaho Community)

Usually it's sufficient to have nightly jobs in place to satisfy your requirements. In fact, the vast majority of all Kettle jobs and transformations are nightly batch jobs. However, there are exceptions for those types of jobs that need to get source data in the hands of users quicker. When you make the interval between batches smaller, usually between a minute and an hour, the jobs are referred to as micro-batches or small periodic batches. If you make the interval between batches even smaller, you can speak of near real-time data integration. (Casters, 2010) It is deemed an excellent tool for real time data integration.
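The difference between a nightly batch and a micro-batch is mostly the scheduling interval. The hedged sketch below uses a plain Java ScheduledExecutorService to run a placeholder runBatch() method once per minute; in a real deployment the scheduled work would be the Kettle job or transformation itself.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Illustration of micro-batching: the same batch job, run on a short, fixed interval. */
public class MicroBatchRunner {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // A nightly batch would use a 24-hour period; shrinking the period to one
        // minute turns the same job into a micro-batch (near real-time) pipeline.
        scheduler.scheduleAtFixedRate(MicroBatchRunner::runBatch, 0, 1, TimeUnit.MINUTES);
    }

    private static void runBatch() {
        // Hypothetical placeholder for the actual extract-transform-load work,
        // e.g. invoking a Kettle job via Kitchen or the Kettle Java API.
        System.out.println("Running micro-batch at " + java.time.Instant.now());
    }
}
```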

Performance

The reason for partitioning data is linked to parallel processing, since partitioning makes it possible to execute certain tasks in parallel where this would otherwise be impossible. Kettle supports parallel execution of job entries but has no direct support for synchronizing them, which simply means that Kettle has no built-in support to wait for specific threads to finish, or to detect whether all threads have finished. (Chodnicki, 2010)
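To illustrate what such synchronization looks like when you build it yourself, the generic Java sketch below runs several hypothetical partition-load tasks in parallel and then explicitly blocks until all of them have finished; this is the wait-for-all step that Kettle leaves to the designer of the solution.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Runs independent partition tasks in parallel and waits for all of them to finish. */
public class ParallelPartitions {
    public static void main(String[] args) throws Exception {
        int partitions = 4;
        ExecutorService pool = Executors.newFixedThreadPool(partitions);
        List<Future<String>> results = new ArrayList<>();

        for (int p = 0; p < partitions; p++) {
            final int partition = p;
            Callable<String> task = () -> {
                // Hypothetical work: each task would load one data partition.
                Thread.sleep(500);
                return "partition " + partition + " loaded";
            };
            results.add(pool.submit(task));
        }

        // Synchronization point: block until every partition task has completed.
        for (Future<String> result : results) {
            System.out.println(result.get());
        }
        pool.shutdown();
    }
}
```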

Pentaho provides integration of all data - Hadoop, NoSQL and relational - in one platform. Building a scalable and recoverable solution with Pentaho Data Integration can involve a number of different parts. It is not a check box that you simply toggle when you want to enable or disable it. It involves careful design and planning to prepare for and anticipate the events that may occur during an ETL process. PDI includes a variety of components, including complete ETL logging, web services, and variables, that can be used to build recoverability, availability, and scalability scenarios into your PDI ETL solution. (Pentaho, 2011)

Pentaho acknowledges the contributions of members of its community forums, whose efforts have allowed Pentaho to respond efficiently to the changing needs of users. Kettle's strength comes from its support for shell scripting, efficient options for writing custom code, JavaScript, user-defined Java classes, custom-programmed plug-ins, and the ability to modify the source code to meet the needs of the project. Loading is efficient, with many bulk-loading options for the major database servers. (Grecik)

Metadata management

Pentaho Metadata is a feature of the Pentaho BI Platform designed to make it easier for users to access information in business terms.

With Pentaho's open source metadata capabilities, administrators can define a layer of abstraction that presents database information to business users in familiar business terms. Administrators identify relationships between tables in the database, create business-language definitions for complex or cryptic database tables and columns, set security parameters to limit data access to appropriate users, specify default formatting for data fields, and provide additional translations of business terms for multi-lingual deployments. Business users can then use Pentaho's new ad hoc query capabilities to choose the specific elements they would like to include in a given report, such as order quantities and total spending by customer grouped by region. The SQL required to retrieve the data is generated automatically. (Pentaho, 2010) Pentaho DI does not have a metadata repository; if you want a data pump and governance is not important, then Pentaho is generally fine.
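As a rough, purely illustrative picture of what such a business-term abstraction layer does, the hypothetical sketch below maps business terms to physical columns and generates the SQL on the user's behalf. Pentaho Metadata implements this through its own metadata model and query engine, not through code like this.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Toy illustration of a metadata layer that turns business terms into SQL. */
public class BusinessTermQuery {

    // Hypothetical mapping from business terms to physical columns/expressions.
    private static final Map<String, String> TERMS = new LinkedHashMap<>();
    static {
        TERMS.put("Customer Region", "customers.region");
        TERMS.put("Order Quantity", "SUM(orders.quantity)");
        TERMS.put("Total Spending", "SUM(orders.amount)");
    }

    /** Builds a grouped query from the business terms the user picked in a report. */
    static String buildSql(List<String> selectedTerms, String groupByTerm) {
        String selectList = selectedTerms.stream()
                .map(TERMS::get)
                .collect(Collectors.joining(", "));
        return "SELECT " + selectList
                + " FROM customers JOIN orders ON orders.customer_id = customers.id"
                + " GROUP BY " + TERMS.get(groupByTerm);
    }

    public static void main(String[] args) {
        // e.g. "total spending by customer region"
        System.out.println(buildSql(List.of("Customer Region", "Total Spending"), "Customer Region"));
    }
}
```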

Kettle makes it easy to track down sources and targets in the user interface. To address this at the transformation level, Kettle reports the name of the transformation and the step, as well as the database and table names used.

In the context of a transformation, lineage means that you want to learn where information is coming from, in which steps it is being added or modified, or in which database table it ends up. In a Kettle transformation, new fields are added to the input of a step in a way that is designed to minimize the mapping effort. The rule of thumb is that if a field is not changed or used, it doesn't need to be specified in a step. This minimizes the maintenance cost of adding or renaming fields. The row metadata architecture that the developers put in place not only allows you to see which fields are entering a step and what the output looks like, but it can also show you where a field was last modified or created. (http://my.safaribooksonline.com/book/databases/business-intelligence/9780470635179, 2010)

An impact analysis report can be created from the Transformation menu: Impact is a report feature that determines how the data sources will be affected by the transformation if it completes successfully.

Design and Development

Spoon is the design interface for building ETL jobs and transformations. Spoon provides a drag-and-drop interface allowing you to graphically describe what you want to take place in your transformations, which can then be executed locally within Spoon, on a dedicated Data Integration Server, or on a cluster of servers. (Pentaho Corporation, 2010-2013)

The Data Integration Server is a dedicated ETL server whose primary functions are:

"Executing ETL jobs and transformations using the Pentaho Data Integration engine.

Allows you to manage users and roles (default security) or integrate security with your existing security provider such as LDAP or Active Directory.

Provides a centralized repository that allows you to manage ETL jobs and transformations.

Provides the services allowing you to schedule and monitor activities on the Data Integration Server from within the Spoon design environment." (Pentaho Corp, 2005 - 2012)

Administrative Utilities

For error handling, an error hop is defined in the Table Output step: right-click, choose Define Error handling, and a dialog appears that allows configuration of the error handling. You can not only specify the target step to which you want to direct the rows that caused the error, but also include extra information in the error rows so that you know what went wrong. (Casters, Matt Casters on data integration, 2013)
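The same idea can be sketched generically: rows that fail are diverted to an error target together with the raw value and a reason, instead of aborting the whole load. The table and column names below are hypothetical, and this is not the Kettle implementation itself.

```java
import java.sql.*;

/** Generic sketch of an "error hop": failed rows are diverted to an error table with a reason. */
public class ErrorRouting {
    public static void loadWithErrorHop(Connection dwh, ResultSet rows) throws SQLException {
        try (PreparedStatement target = dwh.prepareStatement(
                     "INSERT INTO fact_orders (order_id, amount) VALUES (?, ?)");
             PreparedStatement errors = dwh.prepareStatement(
                     "INSERT INTO fact_orders_errors (order_id, raw_amount, error_reason) VALUES (?, ?, ?)")) {

            while (rows.next()) {
                int orderId = rows.getInt("order_id");
                String rawAmount = rows.getString("amount");
                try {
                    if (rawAmount == null) {
                        throw new NumberFormatException("amount is missing");
                    }
                    // Normal path: parse and insert into the target table.
                    target.setInt(1, orderId);
                    target.setDouble(2, Double.parseDouble(rawAmount));
                    target.executeUpdate();
                } catch (NumberFormatException | SQLException e) {
                    // Error hop: keep the offending row plus extra diagnostic information.
                    errors.setInt(1, orderId);
                    errors.setString(2, rawAmount);
                    errors.setString(3, e.getMessage());
                    errors.executeUpdate();
                }
            }
        }
    }
}
```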

The Pentaho BI platform currently employs Quartz as its scheduler. Access to the scheduler is through the org.pentaho.plugin.quartz Job Scheduler Component, by implementing an Action Sequence. (Pentaho Community)
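Quartz itself is a standard open source Java scheduling library. A minimal example of scheduling a nightly job with the Quartz 2.x API is shown below; the job class and cron expression are illustrative, and the Pentaho platform wires Quartz in through its own action sequences rather than direct calls like these.

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

/** Minimal Quartz 2.x example: run a job every night at 02:00. */
public class NightlyEtlSchedule {

    /** Illustrative job; in the BI platform the scheduled work is an action sequence. */
    public static class EtlJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("Nightly ETL triggered");
        }
    }

    public static void main(String[] args) throws SchedulerException {
        JobDetail job = JobBuilder.newJob(EtlJob.class)
                .withIdentity("nightlyEtl", "etl")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("nightlyTrigger", "etl")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 2 * * ?"))  // 02:00 every day
                .build();

        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        scheduler.scheduleJob(job, trigger);
    }
}
```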

This suite of open-source reporting tools allows you to create relational and analytical reports from a wide range of data sources, with output types including PDF, Excel, HTML, text, Rich Text File, XML, and CSV. (Pentaho)

IBM Cognos Data Manager

ETL Functionality

IBM Cognos Data Manager creates data warehouses and data repositories for reporting, analysis, and performance management. It works by extracting operational data from multiple sources, transforming and merging the data to facilitate enterprise-wide reporting and analysis, and delivering the transformed data to coordinated data marts. It provides dimensional ETL capabilities.

IBM Cognos Business Intelligence Data Manager also supports relational databases through JDBC connectivity. (IBM, 2012)

The Data Manager engine consists of a number of programs that you can run from either Data Manager Designer (on Windows) or directly on the command line (on Windows, UNIX, or Linux). For most applications, you can design and prototype using Data Manager Designer on a Windows computer. You can then deploy your builds to a Windows, UNIX, or Linux computer that has the Data Manager engine. The Data Manager engine can be installed on Windows, UNIX, or Linux computers.

Use Cognos Data Manager Network Services to execute builds and job streams on remote computers from a Data Manager design environment computer. For example, if you installed the Data Manager engine on a UNIX or Linux computer, you can also install the Data Manager Network Services server so that you can execute builds and job streams on that server from Data Manager Designer. Data Manager Network Services includes a server component that must be installed with Data Manager Engine. The server enables communication, either directly through a socket connection, or through an application server, between Data Manager Designer and the Data Manager engine. (IBMDM, 2012)

Data Manager has capabilities allowing the data to be analyzed throughout the development process. These include:

The ability to explore hierarchies.

The ability to execute both the Data Source and the Data Stream.

In addition, when builds are executed, data that fails referential integrity checks will be delivered to reject tables for re-processing. (IBM, 2005)

Extracting source data is the first step. This requires analysis and understanding of source systems and moving the data to the staging area for use. It may also involve some merging of data. The data staging area of the data warehouse is where data is merged, cleansed, and transformed. It is everything in between the source system and the presentation layer. IBM Cognos Data Manager performs the necessary extraction, merging, cleaning, and transformation of data for this phase in developing the data warehouse. (IBM, 2009)

With the data merging, aggregation, and transformation capabilities of IBM Cognos Data Manager, data can be merged from multiple sources including traditional legacy files, purchased data, and ERP data sources. The graphical interface of the design environment makes defining and implementing transformation processes straightforward. The transformation engine handles large volumes of data and provides support for all major relational databases.

The graphical design environment in IBM Cognos Data Manager displays the data flows of a build and allows for direct access to build object properties, enabling rapid deployment. The transformation process includes error correction and warning systems to ensure data integrity. Once established, the ETL process can run automatically according to the desired schedule. Once the source data has been transformed, IBM Cognos data integration software loads it into the target database. IBM Cognos Data Manager supports delivery of dimensional information to any appropriate storage or access platform.

Organizations can partition information between databases and access tools according to specific requirements. Flexible partitioning also lets the organization send data to multiple tables or targets at the same time within one fact build, when different departments of an organization need to be provided with different data summaries. (IBM, 2005) By tailoring the data loading process to the data, information is updated more quickly, and with less demand on the source system. Tables defined as static contain data that changes infrequently. Therefore, they require refreshing on an ad hoc basis only. With this flexibility, data updates can mean a complete refresh, updates, or maintenance of a slowly changing dimension. (Cognos Incorporated, 2005)
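To make the update-versus-refresh distinction concrete, the hypothetical sketch below performs a simple Type 1 style dimension update: a changed attribute overwrites the existing row and new members are inserted, instead of reloading the whole table. Table and column names are illustrative, and Data Manager expresses this logic through its build definitions rather than hand-written code.

```java
import java.sql.*;

/** Hypothetical Type 1 dimension maintenance: overwrite changed members, insert new ones. */
public class DimensionUpsert {
    public static void upsertCustomer(Connection dwh, int customerId, String city) throws SQLException {
        try (PreparedStatement update = dwh.prepareStatement(
                     "UPDATE dim_customer SET city = ? WHERE customer_id = ?")) {
            update.setString(1, city);
            update.setInt(2, customerId);
            int changed = update.executeUpdate();

            if (changed == 0) {
                // Member not present yet: insert it instead of refreshing the whole table.
                try (PreparedStatement insert = dwh.prepareStatement(
                        "INSERT INTO dim_customer (customer_id, city) VALUES (?, ?)")) {
                    insert.setInt(1, customerId);
                    insert.setString(2, city);
                    insert.executeUpdate();
                }
            }
        }
    }
}
```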

A Job Stream has nodes to automate data extraction, data transformation, data loading, exception/error handling, and logging/notification. These tasks include coordinating fact and dimension builds, data staging, cleaning data prior to data mart creation, pre- and post-processing SQL, different arrival rates of source data, and partitioning tasks to use multiple CPUs. These tasks, or job nodes, can be performed in sequence or parallel. Conditional nodes can dictate the next step in the process. Notifications can be sent out via e-mail or notes can be written to the log about the status of each job node (IBM, 2009)

IBM Cognos Data Manager makes it possible to support a "real-time" data warehouse through its strong data warehouse orientation.

Performance

When data sources are read in parallel, IBM Cognos Data Manager orders the data sources using the order of the dimensions that you specified in the Fact Build Properties window. If there are multiple data sources, processing in parallel forces the ordering to apply to all the dimensions. This affects performance, because Data Manager must compare the dimension values from the current row of each data source to determine which is next in order. (IBM Parallel)

Data Manager uses a dimensional model to coordinate the management of data marts of all shapes and sizes and across many different platforms. Data Manager can perform large-scale data merging and aggregation in a fraction of the time possible with hand-coded solutions or with SQL-based data warehouse loaders. (IBM, 2005)

Metadata Management

All metadata is exported directly to a component of IBM Cognos called Framework Manager, by using the metadata export utility in Data Manager rather than deliveries in fact builds.

The business view displays high-level information that describes the data item and the package from which it comes. This information is taken from IBM Cognos Connection and the IBM Cognos Framework Manager model. The Framework Manager model is the metadata model used for report authoring, without the need to focus on SQL commands or on the physical structure of the data. The technical view is a graphical representation of the lineage of the selected data item. The lineage traces the data item from the package to the data sources used by the package. You can also view lineage information in IBM Cognos Viewer after you run a report. (IBM, 2012)

Design and Development

In IBM Cognos Data Manager Designer*, you use builds to specify a set of processing rules that determine how Cognos Data Manager acquires data from the source databases, transforms the data, and delivers it to the target database. This information is stored in a Cognos Data Manager catalog. (IBM) Data Manager Designer can be installed only on Windows computers.

Administrative

The Cognos user has the ability to configure IBM Cognos Data Manager to report both the error reason and the failed row of data when a relational delivery exception is encountered. The failed row of data can be delivered to a file or to a relational table.

The IBM Cognos Data Movement Service* allows users to run and schedule builds and Job Streams on remote computers using IBM Cognos Connection, the user interface for IBM Cognos Business Intelligence. To use the Cognos Data Movement Service*, Cognos BI must be installed in your environment, and the IBM Cognos Data Manager engine must be installed in the same location as the Cognos BI server components. (IBM)

IBM Cognos Business Intelligence provides reports, analysis, dashboards and scorecards to help support the way people think and work when they are trying to understand business performance. To use the Cognos reporting feature, Cognos BI must be installed in your environment.

Data Manager Network Services* provides various options for ensuring the security of the Data Manager environment.

By default, the Data Manager Network Services processes are run using a privileged user account. (IBM, 2010) Data Manager Network Services supports an enhanced security model that allows only specific Data Manager Network Services client computers to access a server. This requires that each client installation must be set up with a known service access password that is set for each server. Without this password, the server will not accept any request from a Data Manager Network Services client, even if a valid catalog database connection is provided.

By default, Data Manager Network Services uses hypertext transfer protocol (HTTP) when transmitting using SOAP service protocol. However, security can be enhanced by using secure hypertext transfer protocol (HTTPS).

* Cognos Data Manager Components

Informatica PowerCenter

ETL Functionality

Informatica PowerCenter comes in different editions and is highly scalable, from the Standard Edition to the Data Virtualization Edition. It accesses and integrates data from virtually any system in any format and delivers that data. Informatica PowerCenter has broad data source and delivery options: sources can be JDBC/ODBC, SOAP/WSDL, LDAP, HTML, and flat files, while delivery options are JDBC, ODBC, and SOAP.

Informatica's data profiling solution, Data Explorer, employs powerful data profiling capabilities to scan data records, from any source, to find anomalies and hidden relationships. Informatica's data profiling solution gives a complete and accurate picture of data.

This allows business analysts and data stewards to determine data quality themselves and to support data governance procedures. (Informatica, 2012)

Informatica B2B Data Transformation provides access to complex file and message formats through a comprehensive, enterprise-class solution to transformation challenges. "It features the best technology for extracting data from any file, document, or message—regardless of format, complexity, or size—and transforming it into a usable form." (Informatica, 2012)

It helps organizations comply with industry standards and regulatory requirements in real time, avoiding penalties and loss of data. (Informatica, 2012)

Informatica PowerCenter provides "any to any" data transformation by supporting a broad range of file and message formats. Informatica PowerCenter also handles large batch files and real-time messaging, and supports XML, HIPAA, HL7, NCPDP, ACORD, DRCC, MVR, EDI, EDIFACT, SWIFT, FIX, NACHA, and Telekurs.

The Informatica Server loads the transformed data into the mapping targets. The platforms supported are Windows NT/2000, UNIX/Linux, and Solaris. Informatica customers are building data warehouses to be operational, and Informatica PowerCenter offers IT operations a real-time data integration environment. (Informatica, 2012)

Performance

Informatica PowerCenter automatically aligns partitions with database table partitions to improve performance, and it optimizes jobs for parallel processing at run time based on data-driven, key-driven, or database-supplied partitioning schemes, which Informatica claims increases PowerCenter's performance. The Partitioning Option, or dynamically parallel data processing, has been instrumental in establishing PowerCenter's industry performance leadership. (Informatica, 2012)
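As a generic illustration of key-based partitioning (not Informatica's internal implementation), the sketch below assigns rows to partitions by hashing a key and processes each partition on its own worker thread.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Generic key-hash partitioning: rows are split by key and each partition is processed in parallel. */
public class KeyHashPartitioning {
    public static void main(String[] args) throws InterruptedException {
        List<String> customerKeys = List.of("A-100", "B-200", "C-300", "D-400", "E-500");
        int partitionCount = 3;

        // Build one bucket of keys per partition.
        List<List<String>> partitions = new ArrayList<>();
        for (int i = 0; i < partitionCount; i++) {
            partitions.add(new ArrayList<>());
        }
        for (String key : customerKeys) {
            int partition = Math.floorMod(key.hashCode(), partitionCount);
            partitions.get(partition).add(key);
        }

        // Process each partition on its own worker thread.
        ExecutorService pool = Executors.newFixedThreadPool(partitionCount);
        for (int i = 0; i < partitionCount; i++) {
            final int id = i;
            pool.submit(() -> System.out.println("partition " + id + " -> " + partitions.get(id)));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```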

Designed to promote standardization and reuse, the Informatica Platform is particularly well suited for IT organizations that are being tasked to "do more with less." Informatica Persistent Data Masking creates and centrally manages masking processes from a single environment that handles large data volumes. The scalability and robustness of the Informatica Platform and its enterprise-wide connectivity allow it to mask sensitive data regardless of database (e.g., Oracle, DB2, SQL Server, Sybase, Teradata), platform (e.g., Windows, UNIX/Linux, z/OS), or location. (Informatica, 2012)

Metadata Management

Informatica PowerCenter stores all the information about mappings, sessions, transformations, workflows, etc. in a set of database tables called metadata tables. While these tables are used internally by Informatica, one can get useful information by accessing them separately.

The term "metadata" is often used for the purpose of denoting "data about data". Although this definition does not apply strictly for Informatica PowerCenter, a better suggestion can be "structural metadata" which specifically apply to the data about the structures in Informatica. Informatica stores the data transformation logic in the form of PowerCenter Designer Mapping and the physical connection details etc. in the form of PowerCenter Manager Session. Informatica PowerCenter also stores the information about Workflows, Repositories, and folders etc. All this information is collectively called Informatica Metadata and is stored in a structured data model called Informatica Repository. (DWBI Concepts, 2011)

Design and Development

Informatica's Designer is the application used to create and manage sources, targets, and the mappings between them. The Designer consists of the following windows: the Navigator, to connect to and work in repositories; the Workspace, to view or edit sources, targets, transformations, and mappings; and the Output window, which provides details when you perform certain tasks such as saving your work.

Administrative Utilities

Informatica supports various error-handling reports; errors are captured at the following stages: business rule checks, error logging, and errors encountered when loading data.

Informatica allows scheduling to run continuously, or workflows can be started manually. Each schedule is a repository object that can be edited or reused. The Integration Service does have limitations on running a workflow under certain conditions, such as when a prior workflow run fails, the workflow is removed, or the service is running in safe mode. All scheduling is executed from the Workflow Designer. (Informatica Tutorial, 2012)

Informatica allows users to set security policies from a unified console with a view across the enterprise. The policies are enforced using Persistent Data Masking and Dynamic Data Masking. Informatica does not encrypt data, change application source code, or change the database, and it does not cause performance overhead, making it an efficient security analyzer.
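As a minimal illustration of the persistent masking idea, the hypothetical routine below overwrites a sensitive column in place with a masked value, leaving the application and schema untouched; the table and column names are illustrative and this is not Informatica's implementation.

```java
import java.sql.*;

/** Hypothetical persistent-masking pass: overwrite a sensitive column with masked values. */
public class MaskNationalIds {
    public static void maskColumn(Connection db) throws SQLException {
        try (Statement select = db.createStatement();
             ResultSet rows = select.executeQuery("SELECT customer_id, national_id FROM customers");
             PreparedStatement update = db.prepareStatement(
                     "UPDATE customers SET national_id = ? WHERE customer_id = ?")) {

            while (rows.next()) {
                String id = rows.getString("national_id");
                if (id == null || id.length() < 4) {
                    continue;  // nothing meaningful to mask
                }
                // Keep only the last four characters; replace the rest with '*'.
                String masked = "*".repeat(id.length() - 4) + id.substring(id.length() - 4);
                update.setString(1, masked);
                update.setInt(2, rows.getInt("customer_id"));
                update.executeUpdate();
            }
        }
    }
}
```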

Comparative Analysis

The ETL tools compared here are Informatica PowerCenter, IBM Cognos Data Manager, and Pentaho Kettle. Informatica PowerCenter is a very good commercial data integration suite. It is very expensive and requires training for staff to use it, and it is more appropriate for larger systems that can be optimized by letting the source database do the transforming. The company's sole focus is data integration. Informatica's integrated profiling and cleansing and its open architectural approach (the repository, the collection of all objects, is housed in a standard relational database) may keep it the commercial front runner.

IBM Cognos Data Manager is optimized to produce 'star schema' dimensional data marts. It is a tool well suited to an environment with many different data sources. Data transformation can be highly automated by job streams, which provide detailed logging.

Pentaho Data Integration is a very intuitive tool; its basic concepts are simple yet powerful. PDI is a transformation engine and can be a bit slower, but Pentaho's GUI is user-friendly and Kettle can be placed on many different servers. Pentaho does not require huge costs, and it is very powerful for writing transformation tasks.

These are three very different ETL tools that would meet the needs of very different enterprises. Business use cases and methodologies are useful for comparing and selecting tools; however, fitting the needs of the enterprise can be achieved with proper consideration of these criteria.

Criteria                            Weight   Pentaho DI   Cognos Data Manager   Informatica PowerCenter

ETL Functionality
  Connectivity/Adapter               .05        .50              .30                     -
  Data Profiling                     .05        .30              .40                     -
  Data Cleaning                      .05        .30              .35                     -
  Data Transformation                .05        .75              .45                     -
  Data Loading                       .05       1.00              .40                     -
  Real-time ETL                      .05        .75              .25                     -
  ETL Functionality Score                      3.6              2.15                   3.7

Performance
  Parallel Processing                .05        .15              .25                     -
  Platform Variety                   .05        .75              .35                     -
  Data Size Scalability              .05        .50              .50                     -
  Efficiency                         .05        .30              .50                     -
  Robustness                         .05        .50              .50                     -
  Performance Score                            2.2              2.1                    3.75

Metadata Management
  Metadata Repository                .05        .05              .40                     -
  Data Lineage                       .05        .50              .50                     -
  Impact Analysis Report             .05        .15              .15                     -
  Metadata Management Score                    0.7               -                     1.55

Design and Development
  User Friendly Features             .05        .75              .45                     -
  Design Architecture                .05        .35              .45                     -
  Design and Development Score                 1.1               -                     1.25

Administrative Utilities
  Error Report and Recovery          .05        .40              .30                     -
  Scheduling                         .05        .15              .50                     -
  Reports                            .05        .50              .75                     -
  Security                           .05        .25              .50                     -
  Administrative Utilities Score               1.3              2.05                   2.7


