Secret #2 to Maximize Pervasive Data Integrator

What
When using an Excel document as a source, the header row is used to determine the names of the fields for the source connector. To keep those field names consistent, a known header row can be inserted at the beginning of each document before it is processed. For example, today our client sent us an Excel document whose column A header was “Account Number”, but in yesterday’s document the value in column A was “Acct Num”.

Why
Dynamically inserting a static header row into an Excel document allows the document to be processed regardless of whether the client supplies a consistent header row.

When
This should be done when you are asked to process an Excel document that is missing a header row or does not have a consistent header row.

Who
The use of consistent column headers is beneficial to the:

  • Developer – Implements the code to add a header row to the Excel document prior to processing.
  • End User – Is able to review and utilize the new data loaded into the system.

Where
Inserting a header row into an Excel document is implemented using a RIFL step within Pervasive Data Integrator Process Designer.

How
Before the Excel document is processed, use a RIFL step to open the document and insert a static header row at the beginning of the document that matches the column names identified in the Map’s source schema. If the file may come in with or without a header, you can add a Source Filter to your Map that processes the Excel document. The filter can validate each row to filter out any extra or unwanted header rows and rows that only contain whitespace, which will allow you to process the data in the document successfully.
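
The real implementation is a RIFL step inside Process Designer, but the logic translates directly. As a rough sketch only (not the actual RIFL code), the Python snippet below uses the openpyxl library to drop a stray client header and insert the static header row the Map’s source schema expects; the file name, column names, and known header aliases are hypothetical.

    # Sketch of the header-normalization logic, assuming openpyxl and a
    # hypothetical file layout; in Pervasive this work is done by a RIFL step.
    from openpyxl import load_workbook

    EXPECTED_HEADER = ["Account Number", "Account Name", "Balance"]
    KNOWN_ALIASES = {"Acct Num", "Acct Name", "Bal"}  # stray client headers to discard

    def insert_static_header(path):
        wb = load_workbook(path)
        ws = wb.active

        # If the client already sent a header row under one of its known names,
        # remove it so the document does not end up with two header rows.
        first_row = [str(c.value).strip() if c.value is not None else "" for c in ws[1]]
        if first_row and first_row[0] in (KNOWN_ALIASES | {EXPECTED_HEADER[0]}):
            ws.delete_rows(1)

        # Insert the static header that matches the Map's source schema.
        ws.insert_rows(1)
        for col, name in enumerate(EXPECTED_HEADER, start=1):
            ws.cell(row=1, column=col, value=name)

        wb.save(path)

    insert_static_header("daily_accounts.xlsx")

The Source Filter plays the complementary role at read time: any row that is entirely whitespace, or that matches one of the old header aliases, is skipped so it never reaches the target.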

More
Subscribe to the Emprise Technologies YouTube channel to access our library of video demos.

Secret #1 to Maximize Pervasive Data Integrator

What
In most cases, Pervasive Data Integrator users want to react to and handle errors as they occur. By default, Pervasive will exit a Process as soon as an error is encountered. Alternatively, a Process can be configured to allow errors, which lets the Process continue executing through completion regardless of any errors that occur. What we have found is that neither option is practical with Pervasive Data Integrator.

Responsive error handling can be introduced to ensure that the business needs are met and all errors are handled appropriately as they occur. For example, if a step in a Process cannot find a file, the Process should not abort, nor should it continue execution as if the file exists. Instead, use responsive error handling to manage the missing file exception and send an email to the appropriate recipients.
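
To make that example concrete, here is a minimal sketch of the same business rule outside of Pervasive: check for the file, and if it is missing, notify the right people instead of aborting or pretending the file exists. The file path, addresses, and mail server are hypothetical, and the downstream load is a stub.

    # Minimal sketch of the missing-file rule described above (hypothetical
    # paths, addresses, and mail server; in Pervasive this lives in a RIFL step).
    import os
    import smtplib
    from email.message import EmailMessage

    SOURCE_FILE = "/inbound/daily_accounts.xlsx"

    def notify(subject, body):
        msg = EmailMessage()
        msg["Subject"] = subject
        msg["From"] = "integration@example.com"
        msg["To"] = "support@example.com"
        msg.set_content(body)
        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)

    def load_file(path):
        print(f"loading {path} ...")  # stand-in for the real Map step

    if os.path.exists(SOURCE_FILE):
        load_file(SOURCE_FILE)
    else:
        # Neither abort the whole Process nor continue as if the file exists:
        # handle the exception and tell the appropriate recipients.
        notify("Source file missing",
               f"{SOURCE_FILE} was not found; skipping today's load.")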

Why
With Pervasive Data Integrator, custom error handling should be used to ensure that a Process is behaving as intended and as defined by business rules. By introducing responsive error handling, you can gather and react to errors as they occur, versus digging through error logs to identify why a Process aborted.

When
Data Integrator Processes that contain steps dependent upon the success of a previous step are ideal candidates for implementing responsive error handling.

Who
The use of responsive error handling is beneficial to the:

  • Developer – Implements the code for responsive error handling.
  • End User – Defines business rules and reviews results of responsive error handling.

Where
Responsive error handling is implemented within Pervasive Data Integrator Process Designer.

How
During the execution of a Process, Pervasive stores metadata about the Process, its steps, and its session objects. This metadata can be accessed via RIFL Script in both RIFL steps and Event Handlers. One approach is to create a RIFL Script step immediately after each step whose errors are ignored, which ensures that errors are caught and handled as soon as they occur. To implement responsive error handling, the Process must not be configured to “Break on First Error”, and all steps should be set to “Ignore Error”. Then, insert RIFL steps to inspect and handle any errors appropriately.
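
As a language-neutral sketch of that pattern (plain Python rather than RIFL, with hypothetical step names), each step runs with “Ignore Error” semantics and is immediately followed by a handler that inspects the outcome and applies the business rule:

    # Pattern sketch only: "ignore the error, then handle it in the very next
    # step", written in plain Python rather than RIFL. Step names are hypothetical.
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("process")

    def run_step(name, step_fn):
        """Run one step with 'Ignore Error' semantics: capture the error
        instead of aborting, and return it so the next step can react."""
        try:
            step_fn()
            return None
        except Exception as exc:
            log.warning("step %s raised: %s", name, exc)
            return exc

    def handle_error(name, error):
        """The RIFL-step equivalent: inspect the previous step's outcome
        immediately and apply the business rule for it."""
        if error is None:
            return True
        if isinstance(error, FileNotFoundError):
            log.error("source file missing in %s; notifying support, skipping load", name)
            return False              # stop this branch, not the whole Process
        raise error                   # unexpected error: stop the Process

    def fetch_source():
        raise FileNotFoundError("daily_accounts.xlsx")

    def load_target():
        log.info("loading target ...")

    # "Break on First Error" is off: every step is wrapped, and each is followed
    # by its own handler so errors are dealt with as soon as they occur.
    if handle_error("fetch_source", run_step("fetch_source", fetch_source)):
        handle_error("load_target", run_step("load_target", load_target))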

More
Subscribe to the Emprise Technologies YouTube channel to access our library of video demos.

David Linthicum’s Data Integration Predictions for 2013

David Linthicum recently made 3 Data Integration Predictions for 2013 in his post on the Pervasive Data Integration blog. As a CEO whose business it is to help IT organizations get the most out of their data, I concur with Linthicum’s predictions.

Whether you’re tired of hearing about “Big Data” or not, it’s here to stay. And the data will only get bigger and more complex. That means companies have to create business processes, as well as IT processes, that enable them to manage and integrate that data as it grows. Otherwise, far too much of the organization’s resources will be spent trying to manage the unmanageable and not on its core business.

With government requirements for healthcare organizations to convert their data from flat files to 837 EDI files, and corporations looking to increase their “Business Intelligence” (BI) for better decision-making, the Cloud will continue to drive IT teams to integrate on-premise data with the cloud. We see many customers moving to a hybrid combination of Cloud and on-premise. As Linthicum says, “It’s a certainty that data integration will become more important as the years progress.”