
Strategies Involved in Data Cleansing

Posted Apr 06 2013 5:13am

Over the years, a business or organization accumulates large volumes of information and data. From user records to client updates and other details, everything needs to be stored consistently so that it can be referred to later. Storing such volumes of data requires a systematic, well-maintained database that ensures data quality. In addition, timely data cleansing is needed to eliminate corrupt or invalid data and to incorporate new data so that records remain consistent. Duplication of data is also one of the most common problems in data storage for most companies. The only way to ensure data accuracy is maintained is to prevent corruption in the first place. It is also wise to have a plan for dealing with data problems before they arise, so that corrective measures can be deployed quickly.

 

Data Duplication

Duplication is one of the most frustrating problems to handle, and data auditors spend much of their time reconciling duplicate records. Even front-end data-entry screens will fail to catch some types of duplication; for example, if the address table contains entries for both 12 Park Street and 12 Park St., they will appear as two different addresses even though they refer to the same one.

This problem can be resolved using either of two methods: data correction or data removal. Data correction updates all instances of the duplicate data to a single agreed-upon value. Data removal, on the other hand, deletes all but one instance of the duplicate set. Software packages are available for these tasks; they run algorithms that identify outliers based on clustering, standard deviation, or other defined criteria. The outliers are then evaluated by an expert, who determines what should be done with each unanticipated entry.
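As a rough illustration of the correction and removal steps, the sketch below normalizes addresses so variants like "12 Park Street" and "12 Park St." collapse to one value, then sets duplicates aside for expert review. It assumes the pandas library and a hypothetical customers.csv file with an address column; the file and column names are illustrative, not taken from any particular tool.

import pandas as pd

# Abbreviation map used for data correction (an assumed, simplified list).
ABBREVIATIONS = {" st.": " street", " st": " street", " rd.": " road", " rd": " road"}

def normalize_address(addr: str) -> str:
    # Lower-case the address and expand common abbreviations so that
    # "12 Park Street" and "12 Park St." become the same value.
    addr = addr.strip().lower()
    for short, full in ABBREVIATIONS.items():
        if addr.endswith(short):
            addr = addr[: -len(short)] + full
    return addr

df = pd.read_csv("customers.csv")
df["address_clean"] = df["address"].astype(str).map(normalize_address)

# Data removal: keep one row per normalized address and route the rest
# to a review file instead of silently deleting them.
duplicates = df[df.duplicated(subset="address_clean", keep="first")]
deduplicated = df.drop_duplicates(subset="address_clean", keep="first")

duplicates.to_csv("duplicates_for_review.csv", index=False)
deduplicated.to_csv("customers_deduplicated.csv", index=False)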

ETL Method

The Extract, Transform, Load (ETL) method is commonly used for moving and cleaning data. While little manual cleaning is required, several automated tasks are involved in the transform phase. For instance, when the source table stores gender as “F” and “M” but the destination table stores “female” and “male”, a script should run to translate the data to its new values.
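A minimal sketch of that translation step is shown below, assuming pandas and a hypothetical source_extract.csv export; the file and column names are assumptions for illustration only.

import pandas as pd

# Transform step: map the source codes to the values the destination expects.
GENDER_MAP = {"F": "female", "M": "male"}

source = pd.read_csv("source_extract.csv")                         # extract
source["gender"] = source["gender"].str.upper().map(GENDER_MAP)    # transform

# Anything that was not "F" or "M" becomes NaN and can be sent to manual review.
unmapped = source[source["gender"].isna()]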

Once the data has been cleansed and validated, it is imported into its destination table. It is also possible to import the data back into the old table, overwriting its contents. This works relatively well when an entire column of the table has to be altered.
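The load step might look like the following sketch, which writes the validated data over the destination table. The SQLite database file and table name are illustrative assumptions.

import sqlite3
import pandas as pd

conn = sqlite3.connect("warehouse.db")
cleaned = pd.read_csv("cleaned_extract.csv")

# if_exists="replace" overwrites the old table, which suits the case where
# an entire column has been altered during cleansing.
cleaned.to_sql("customers", conn, if_exists="replace", index=False)
conn.close()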

Legacy System Updating

Updating a business's legacy system typically involves copying the data to an intermediate staging location, where it undergoes both automated and manual cleansing. This is done to avoid making irreversible mistakes to the legacy data before it is imported into the new system. It is important to note that the legacy data should then be updated as well, so that two different datasets are not preserved.
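A simple sketch of the staging approach is shown below: legacy records are copied into a separate staging database, cleansed there, and only then loaded onward, so the original legacy source is never altered in place. The database files, table names, and cleansing rules are assumptions for illustration.

import sqlite3
import pandas as pd

legacy = sqlite3.connect("legacy.db")
staging = sqlite3.connect("staging.db")

# Copy the legacy data out; the original table is left untouched.
df = pd.read_sql_query("SELECT * FROM customers", legacy)

# Automated cleansing on the staged copy (trim whitespace, drop exact duplicates);
# manual review would also happen on this copy, never on the legacy source.
df = df.apply(lambda col: col.str.strip() if col.dtype == "object" else col)
df = df.drop_duplicates()

df.to_sql("customers_staging", staging, if_exists="replace", index=False)

legacy.close()
staging.close()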

For more information, visit http://www.winpure.com/Data-Cleansing-Tool.html
