To get the best mapping results, it's important to prepare your data prior to beginning the mapping process.
Data in the real world is rarely clean and tidy.
Data may have (a short detection sketch follows this list):
- An underlying structure that is represented only by indentation
- White space (leading, trailing)
- Truncated text
- Inconsistent data types
- Headers and footers
- Non-text characters (? # / , - + * @ =)
- Misspellings
- Abbreviations
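
Several of these hygiene issues can be detected automatically. The sketch below is a minimal illustration using pandas; the file name, the `term` column name, and the assumed field-length limit are all assumptions for the example, not part of any standard.

```python
# A minimal hygiene sweep over a term list. All names and limits here
# are illustrative assumptions.
import pandas as pd

df = pd.read_csv("source_terms.csv")   # hypothetical extract
terms = df["term"].fillna("")          # nulls are checked separately below

# Leading or trailing white space
has_padding = terms != terms.str.strip()

# Non-text characters that may carry hidden meaning (? # / , - + * @ =)
has_symbols = terms.str.contains(r"[?#/,\-+*@=]")

# Possible truncation: terms that exactly hit a suspected field-length limit
MAX_LEN = 40                           # assumed export limit for this source
maybe_truncated = terms.str.len() >= MAX_LEN

# Surface every flagged row for human review
print(df[has_padding | has_symbols | maybe_truncated])
```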
Even when data is coded, it may not be as clean and tidy as expected. Coded data may be (a duplicate-detection sketch follows this list):
- Used out of context (repurposed fields)
- Used as a proxy (the best, easiest, or closest available thing)
- Built on underlying coding that has grown organically and without control
- Duplicated
- Affected by erroneous synonymy
- Made up of conjugated terms (multiple concepts combined in a single entry)
- Ambiguous, with different meanings interpreted depending on the context or reader
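
Of these, duplicates are the most amenable to automation. A minimal sketch, assuming simple normalisation rules (case-folding and white-space collapse, both illustrative choices): group terms by their normalised form and queue any collisions for human review. Repurposed fields, proxy use, and erroneous synonymy still need expert judgement.

```python
# Group terms under a simple normalised form to surface duplicate
# candidates. The normalisation rules here are illustrative; a real
# project should document its own.
import re
from collections import defaultdict

terms = [
    "Heart attack",
    "heart  attack",
    "HEART ATTACK",
    "Myocardial infarction",
]

def normalise(term: str) -> str:
    """Case-fold and collapse runs of white space to a single space."""
    return re.sub(r"\s+", " ", term.strip().lower())

groups = defaultdict(list)
for term in terms:
    groups[normalise(term)].append(term)

# Any normalised form with more than one source term is a review candidate
for members in groups.values():
    if len(members) > 1:
        print("Possible duplicates:", members)
```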
Other data quality checks include (a sketch of these checks follows the list):
- Are all the terms uniquely identified?
- Are there any duplicates?
- Are there any null values?
- Is there any meaningful metadata that needs to be accounted for?
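
The first three checks are straightforward to script. A minimal sketch, assuming an extract with `code` and `term` columns (the file and column names are assumptions):

```python
# Basic structural checks on a source extract. Names are illustrative.
import pandas as pd

df = pd.read_csv("source_terms.csv")  # hypothetical extract

# Are all the terms uniquely identified?
print("Duplicate codes:", df["code"].duplicated().sum())

# Are there any duplicate rows?
print("Duplicate rows:", df.duplicated().sum())

# Are there any null values?
print(df.isnull().sum())
```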
All of these issues should be considered, and rules for handling them should be developed and documented so that they are treated consistently throughout the process and between personnel. Sometimes these decisions require expertise in the implementation's workflow, not just clinical expertise.
For example:

| If this is in your data | Possible meaning 1 | Possible meaning 2 | Possible meaning 3 |
|---|---|---|---|
| # | fracture | number | |
| / | and | or | |
| ? | possible | probable | suspected |
| ++ | moderate severity | getting better | increased |
| Disease 1, Disease 2 | Both (co-morbid) | Disease 1 causes Disease 2 | Disease 2 underlies Disease 1 |
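
One way to keep such decisions consistent is to record them as data rather than leaving them in each mapper's head. The sketch below is purely illustrative: the agreed meanings and the list of symbols routed to manual review must come from your own documented rules, not from this example.

```python
# Documented symbol-handling rules expressed as data, so every mapper
# applies the same decisions. All meanings below are placeholders.
SYMBOL_RULES = {
    "#": "fracture",    # example agreed meaning for this source system
    "++": "increased",  # example agreed meaning for this source system
}
AMBIGUOUS = {"/", "?", ","}  # symbols the team chose to route to review

def apply_rules(term: str) -> tuple[str, bool]:
    """Apply agreed substitutions; flag terms needing manual review."""
    needs_review = any(symbol in term for symbol in AMBIGUOUS)
    for symbol, meaning in SYMBOL_RULES.items():
        term = term.replace(symbol, meaning)
    return term, needs_review

print(apply_rules("# neck of femur"))  # ('fracture neck of femur', False)
print(apply_rules("?appendicitis"))    # ('?appendicitis', True)
```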