3 Ways to Improve Conversion of PDF and Image Files

It can be argued that a large percentage of hospitals and medical facilities are running technology decades behind present-day patient and billing management systems. Agency owners are reminded of that debate every month when their new business file is once again delivered as a PDF or, even worse, a scanned image. The painstaking process of manually entering every account and patient record resumes. Inefficiencies aside, the element of human error is ever-present: it does not take much for someone to mistype a patient account number, procedure code, patient balance, or some other critical data element. When you ask your client for an electronically formatted file, you are left with a response like, “we are working on it” or simply “no.” What else can be done?

A simple Internet search will return a seemingly endless list of options. It is unlikely you will immediately find a solution that works, or more importantly, one that works for your use case. Essentially, the options fall into three categories, which are listed and detailed below. If you are not lucky enough to stumble upon the perfect solution, consult with your end users and IT staff or vendor to understand the process of getting the data from the unstructured format into the collection software. That conversation will help direct you to the most logical path.

First

The ideal option is working directly with the PDF or image file to electronically extract the data. This method does not mean the task can be completed without additional software; rather, it means working with the raw file, the source of the data. One positive attribute of the PDF or image file is that it has structure. This is obvious because, in most cases, you can see it: the picture is clearly organized into columns and rows, usually with headings and summaries as well. Copying and pasting is a common mistake. Any structure that was there is immediately corrupted, which makes it difficult to make sense of the data, and you may have unknowingly lost some of it in the transition.

Second

The first option assumes you discovered some out-of-the-box software that accurately reads the image file you are working with. The second option is very much like the first, but it involves a completely custom software application to understand and parse the image file. This will require a skilled IT staff or services from an outside vendor. With careful analysis of the image file to identify regular patterns, splits, and trends, technologies such as Java or Python can be used to identify and accurately extract the data fields into a more structured and workable format. Many custom-developed applications output to HTML or XML, which is often easier than trying to move data directly from the image file to Excel or CSV. This output becomes the new source for further processing or for electronically loading into your collection software.
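As an illustration of the custom-parsing approach, here is a minimal Python sketch. The report layout, field names, and regular expression are all assumptions for illustration; a real placement file would require its own analysis to find the patterns.

```python
import csv
import io
import re

# Hypothetical line layout from a text-extracted placement report:
# account number, patient name, procedure code, balance.
# This pattern is an assumption -- every client file differs,
# so it must be tuned to the actual layout.
LINE_PATTERN = re.compile(
    r"^(?P<account>\d{6,10})\s+"
    r"(?P<patient>[A-Z][A-Z'\-]+,\s*[A-Z][A-Z'\-]+)\s+"
    r"(?P<code>\d{5})\s+"
    r"(?P<balance>\d+\.\d{2})$"
)

def parse_report(text):
    """Extract structured records from raw report text."""
    records = []
    for line in text.splitlines():
        match = LINE_PATTERN.match(line.strip())
        if match:  # headings and summary lines simply won't match
            records.append(match.groupdict())
    return records

def to_csv(records):
    """Write the parsed records to CSV for loading downstream."""
    buffer = io.StringIO()
    writer = csv.DictWriter(
        buffer, fieldnames=["account", "patient", "code", "balance"]
    )
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

sample = """\
PATIENT PLACEMENT REPORT            PAGE 1
1234567    SMITH, JOHN      99213   150.00
7654321    DOE, JANE        99214   275.50
"""
records = parse_report(sample)
```

The same parsed records could just as easily be emitted as XML with the standard library's `xml.etree.ElementTree`; the point is that once the pattern is captured, the output format is a detail.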

Third

A more extreme option involves utilizing features of next-generation tools that may otherwise be of no use to you. (If these tools are of use to you, it is likely in another division of the business, so if you are part of a large organization, check with other departments; they may have something you can utilize.) Big Data technology like Hadoop is designed for extremely large datasets and robust environments, uncommon in most agencies, but may offer features for reading image files to understand and extract the data fields. Similarly, some Business Intelligence (BI) tools have features for reading PDF or image file sources, identifying the structure, and accurately extracting the data; for example, one of the latest releases of Tableau contains such features. If you opt for this route, there is a good chance you will find other useful features and potential use cases for the Big Data or BI technology.

Finding a way to electronically manage PDF and image files is critical. It greatly improves operational efficiency and reduces the risk of human error. An element I have not even touched on is culture and the work environment. In nearly all cases, the people manually keying new account and patient information into the system are not doing the work they were hired to do, nor work that is rewarding, either personally or for the company. I hope you make it a priority to create a more exciting and productive environment through the elimination of manual processes.

* This article is also published by Collection Advisor


Synchronizing Collection Software with Hospital Data

As a servicer of medical accounts, agencies often face additional technology concerns. Every vertical brings its own technological challenges. Most of these challenges are shared across verticals, but the healthcare vertical adds complexity when considering items such as system of record and the detailed nature of industry-accepted data standards.

Medical collection software exists for any serious agency in the healthcare space. However, I see and hear of agencies simultaneously working in two different systems: their own collection software and the hospital’s system. (Throughout this column, I will repeatedly reference the “hospital’s system,” but the terms “medical facility’s system,” “insurance company’s system,” “claims management system,” and so on could easily take its place.) The questions I will address in this column are: Why are there disparate systems, and how can these agencies work out of one system? Most often the “why” is one of three scenarios. One, the hospital requires the agency to work directly in its system. Two, the agency software does not have the same data points as the hospital’s system. Three, there is no seamless integration between the two systems, so real-time updates are not exchanged between the two pieces of software. In any case, system of record becomes a question. When working in two separate systems, which one wins when there is a conflict? There is no wrong answer to that question; the better question is how the conflicts can be avoided. Unless you have a compelling case, the hospital will insist your reps work directly in its system and will not be convinced your receivables management software is sufficient to allow otherwise. You can utilize your technology to build this compelling case by addressing the integration scenario previously presented.

Before detailing a seamless integration between systems, there are a few prerequisites. Ideally, the following statements are all true:

  • Both the agency and the client have technical teams available to implement and test the integration.
  • There are application programming interfaces or web services available for both the source and target systems.
  • The source and the target system include functionality to track data changes and events that occur as users are working.

Consider your collection software as the source. This is the system you are familiar with and the system you prefer to work in. A data push can be developed using the system’s built-in data integration tools or an external data integration tool with connectors to the system database. This push process will need to capture data-level events as they occur so they can be shared with the hospital’s system. Simultaneously, a pull or retrieve process will need to be in place for the hospital. This process will grab the real-time data events being provided by the agency’s system. After some analytic, transformation, or validation routines are performed, the data from the agency’s system will ideally auto-update in the hospital’s system. Additionally, reverse processes will need to be engineered for the hospital to automatically share real-time data events with the agency’s system. Achieving this sort of real-time integration will allow your representatives to work exclusively in your system while keeping the hospital’s system in sync, and vice versa.
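The push/pull pattern above can be sketched in a few lines. The classes below are stand-ins, not real systems: in practice the push would call the hospital’s web service and the pull would read your software’s change-tracking tables, but the event flow is the same.

```python
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    """One data-level event captured as a user works an account."""
    account: str
    field_name: str
    new_value: object

class SourceSystem:
    """Your collection software: records changes as they happen."""
    def __init__(self):
        self.accounts = {}
        self.pending_events = []

    def update(self, account, field_name, new_value):
        self.accounts.setdefault(account, {})[field_name] = new_value
        self.pending_events.append(ChangeEvent(account, field_name, new_value))

class TargetSystem:
    """The hospital's system: applies validated events."""
    def __init__(self):
        self.accounts = {}

    def apply(self, event):
        # Transformation and validation routines would run here.
        if event.new_value is None:
            raise ValueError("rejecting empty value")
        self.accounts.setdefault(event.account, {})[event.field_name] = event.new_value

def push_events(source, target):
    """Push pending data-level events from source to target in order."""
    while source.pending_events:
        target.apply(source.pending_events.pop(0))

agency = SourceSystem()
hospital = TargetSystem()
agency.update("ACCT-1001", "balance", 250.00)
agency.update("ACCT-1001", "status", "in_collection")
push_events(agency, hospital)
```

The reverse direction described above is the mirror image: the hospital’s system accumulates its own events and your side runs the pull-and-apply loop.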

Prior to engineering the solution described above, there will need to be requirements-gathering sessions with your client to identify the data elements that are important to share and able to be shared between the two systems. A significant takeaway from these sessions needs to be a requirements document. At a minimum, the following should be detailed in this documentation:

  • A very detailed listing of the data fields to exchange.
  • Any business rules related to data transformations, data validations, and data extract/load procedures.
  • Instructions for using any web service technology.
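One way to make the requirements document actionable is to capture the field listing and business rules as data rather than prose. The field names and transformation rules below are illustrative assumptions, not a real hospital specification.

```python
# Maps each agency field to a hypothetical hospital field name
# plus the business rule (transformation) agreed in requirements.
FIELD_MAP = {
    # agency field       (hospital field, transformation rule)
    "patient_balance": ("PT_BAL", lambda v: round(float(v), 2)),
    "account_number":  ("ACCT_NO", str.strip),
    "procedure_code":  ("PROC_CD", str.upper),
}

def transform_record(agency_record):
    """Apply the field mapping and business rules to one record."""
    hospital_record = {}
    for agency_field, (hospital_field, rule) in FIELD_MAP.items():
        if agency_field in agency_record:
            hospital_record[hospital_field] = rule(agency_record[agency_field])
    return hospital_record

out = transform_record(
    {"patient_balance": "150.004",
     "account_number": " 1234567 ",
     "procedure_code": "a0428"}
)
```

Keeping the mapping in one table like this means that when the requirements change, the integration changes in one place.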

Utilizing existing data standards is almost essential in the healthcare space, and these standards are also helpful in achieving real-time system integration. Your agency is likely already using data standards like Health Level Seven (HL7), HIPAA formats, or something EDI-related in order to exchange data with clients. Transaction and code standards like these provide a uniform method for sending and receiving healthcare-related data in near real time. It is important to understand these standards not only because your clients will expect it but also because it displays a higher level of sophistication regarding your operation. At a minimum, clients expect you to stay up to date with the ever-changing healthcare standards. The October 1, 2015 countdown to ICD-10 (the 10th revision of the International Statistical Classification of Diseases and Related Health Problems – Clinical Modification/Procedure Coding System) is on the horizon. Are you ready?
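To see why standards like HL7 v2 make exchange uniform, consider this toy parser: every segment is a pipe-delimited record whose first field names the segment type. The sample message is fabricated, and the parser ignores details (encoding characters, repetition, escapes) that a real HL7 library handles.

```python
# Fabricated HL7 v2-style message: a message header (MSH),
# patient identification (PID), and a financial transaction (FT1).
# Segments are separated by carriage returns per the standard.
SAMPLE_HL7 = (
    "MSH|^~\\&|AGENCY|SITE|HOSPITAL|SITE|20150301||DFT^P03|MSG0001|P|2.3\r"
    "PID|1||123456||SMITH^JOHN\r"
    "FT1|1|||20150301||CG|99213|||1|150.00"
)

def parse_segments(message):
    """Split an HL7 v2 message into (segment_name, fields) pairs."""
    segments = []
    for raw in message.split("\r"):
        fields = raw.split("|")
        segments.append((fields[0], fields[1:]))
    return segments

segments = parse_segments(SAMPLE_HL7)
names = [name for name, _ in segments]
```

Because the structure is fixed by the standard, both sides of the exchange can locate the same field (say, the patient identifier in PID) without a custom specification for every client.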

* This article is also published by Collection Advisor

The Data Flow Architecture Two-Step

Robin Bloor’s two-step data process.

In his latest post on the Actian-hosted Data Integration blog, data management industry analyst Robin Bloor laid out his vision of data flow architecture. He wrote, “We organize software within networks of computers to run applications (i.e., provide capability) for the benefit of users and the organization as a whole. Exactly how we do this is determined by the workloads and the service levels we try to meet. Different applications have different workloads. This whole activity is complicated by the fact that, nowadays, most of these applications pass information or even commands to each other. For that reason, even though the computer hardware needed for most applications is not particularly expensive, we cannot build applications within silos in the way that we once did. It’s now about networks and grids of computers.”

Bloor said, “The natural outcome of successfully analyzing a collection of event data is the discovery of actionable knowledge.” He went on to say, “Data analysis is thus a two-step activity. The first step is knowledge discovery, which involves iterative analysis on mountains of data to discover useful knowledge. The second step is knowledge implementation, which may also involve on-going analytical activity on critical data but also involves the implementation of the knowledge.”

The Power of Crystal Clear Decision Making

Clarity of goals is key to crystal clear decision making.

Are you the type of person who easily assesses all angles of a decision and calmly arrives at the point of clarity? Or are you the type of person who is overwhelmed by all of the information you need to consider, becoming frozen by indecision, as if you are a deer in the headlights? Does how well you navigate decision-making depend on the type of decision you need to make? Maybe you find making big decisions easy, but smaller ones, like what to order for dinner, leave you stymied.

Effective decision-making requires much more than just the ability to gather and process information. It requires focusing on the very core of the decision, rather than getting mired in the details that can so often derail good decision-making.

Jill Johnson, MBA, is an award-winning management consultant who has influenced nearly $2.5 billion worth of business decisions. She spoke on this topic at ACA International’s 74th Annual Convention & Expo in San Diego, CA last week.

What impact does clear decision-making have on companies in the collections business? Let’s start with the decision of which collection software to use. Artiva, DAKCS, CollectOne, WinDebt, Titanium ORE (DM9), and FACS are some of the most frequently used credit and collections software products in the industry. Which one is best for your company? Let’s answer that with a question. What is the single most important thing your business needs this software to do? Is it:

1. Compliance
2. Process automation
3. Vendor integrations
4. User friendliness

The key to making the right technology decision is to focus on the mission-critical business outcome.

Once you’ve identified the primary business goal for purchasing collections software, evaluate each product’s ability to achieve that goal. Software bells and whistles that don’t help your company achieve the primary outcome are extraneous details that should be tossed out. Next, look at other key factors that will affect your company’s ability to execute on your core business. What resources does your company have available to integrate, implement, and maintain the software? Which software syncs most closely with your team’s capabilities?

Your company may have a few other key factors to include in the software selection process. Prioritize them and then score each software solution for effectiveness with those factors.
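The prioritize-and-score step can be made concrete with a simple weighted-score calculation. The factors, weights, products, and scores below are hypothetical inputs; only the method is the point: weight each factor by priority, score each product, and compare the weighted totals.

```python
# Hypothetical priorities, summing to 1.0 so totals are comparable.
WEIGHTS = {"compliance": 0.4, "automation": 0.3,
           "integrations": 0.2, "usability": 0.1}

# Hypothetical 1-5 effectiveness scores from your evaluation.
CANDIDATE_SCORES = {
    "Product A": {"compliance": 5, "automation": 3,
                  "integrations": 4, "usability": 2},
    "Product B": {"compliance": 3, "automation": 5,
                  "integrations": 3, "usability": 5},
}

def weighted_total(scores):
    """Combine factor scores using the priority weights."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

# Rank candidates from best to worst weighted total.
ranked = sorted(
    CANDIDATE_SCORES,
    key=lambda name: weighted_total(CANDIDATE_SCORES[name]),
    reverse=True,
)
```

Notice that Product B scores higher on more factors, yet Product A wins because it dominates the factor you weighted most heavily; that is exactly the discipline of focusing on the mission-critical outcome.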

Finally, there’s budget. It’s last because addressing the primary goal and key factors is mission critical to a clear decision-making process. Without information about the implementation and the resources required to maintain the new software, total cost of ownership (TCO) cannot be determined. The TCO of software is a far more accurate measure than the purchase price alone. Focusing on gathering the best information about the primary goals and key factors will provide the path to crystal clear decision-making.
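A back-of-the-envelope TCO calculation shows why purchase price alone misleads. All figures below are hypothetical.

```python
def total_cost_of_ownership(purchase, implementation, annual_maintenance, years):
    """Fold one-time and recurring costs into a planning-horizon total."""
    return purchase + implementation + annual_maintenance * years

# The product that is cheaper to buy...
cheap_to_buy = total_cost_of_ownership(
    purchase=50_000, implementation=40_000, annual_maintenance=15_000, years=5
)
# ...versus the product with the higher sticker price.
pricier_to_buy = total_cost_of_ownership(
    purchase=80_000, implementation=10_000, annual_maintenance=8_000, years=5
)
```

Over a five-year horizon, the lower sticker price loses: heavier implementation and maintenance costs more than erase the purchase savings, which is exactly why TCO, not price, belongs in the final comparison.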