Human review of all new and updated content in an extension is important to ensure the quality and correctness of the extension. It allows editorial rules that cannot be automatically tested to be checked by an author who was not directly involved in authoring the given content. Post-authoring review should be performed by individuals with knowledge of SNOMED CT editorial principles and adequate clinical knowledge. In some situations, additional national linguistic or modelling guidelines also apply. A reviewer should, for example, be able to assess whether the definition of a concept has been authored correctly. The table below describes a range of different types of post-authoring review. For further information, please refer to the Editorial Guide.
Review Type | Purpose |
---|---|
Components | To validate that the components created within the extension comply with editorial guidelines and are clinically correct. |
Reference Sets (all reference set types) | To validate that all reference sets meet their user requirements, such as the scope, size, functionality and user acceptance criteria. For more information, please refer to reference set review and quality assurance. |
Reference Sets (subsets) | To validate that all members of the subset are within the intended scope of that subset, and that no component required to meet the subset's intended purpose is missing from the subset (a simple membership check of this kind is sketched below the table). |
Reference Sets (maps) | To validate that the map between each SNOMED CT component and the associated codes from the other code system is correct. |
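As a rough illustration of the subset review described above, the following minimal sketch compares the published members of a subset reference set against its intended scope and reports both out-of-scope and missing components. It assumes both are available as plain sets of SCTIDs; the function name and the example identifiers are hypothetical placeholders, not part of any SNOMED CT tooling API.

```python
# Minimal sketch of a subset membership review, assuming the intended scope
# and the published subset are both available as sets of SCTIDs.
# All names and identifiers below are illustrative placeholders.

def review_subset(intended_scope: set[str], subset_members: set[str]) -> dict[str, set[str]]:
    """Report members outside the intended scope and required components missing from the subset."""
    return {
        "out_of_scope": subset_members - intended_scope,  # members that should not be in the subset
        "missing": intended_scope - subset_members,       # required components absent from the subset
    }

if __name__ == "__main__":
    intended_scope = {"22298006", "73211009", "38341003"}  # concepts the subset is meant to cover
    subset_members = {"22298006", "73211009", "40930008"}  # members actually published in the subset
    for issue, concepts in review_subset(intended_scope, subset_members).items():
        print(issue, sorted(concepts))
```

In practice the intended scope would typically be derived from an agreed definition (for example, an ECL expression evaluated against a terminology server) rather than maintained as a manual list, but the comparison step remains the same.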
Review Approaches
Collaborative authoring and review approaches are recommended to produce high-quality content. Examples of authoring and review approaches include:
- Single author with single reviewer
- One author develops the terminology content. Another author reviews the changes and either accepts them or reports issues that need to be considered and resolved.
- Multiple authors with multiple reviewers
- Two or more authors work on independent authoring tasks. Two or more reviewers then review the work of the authors. In most cases, the authors themselves act as the reviewers for the other authors' work.
- Dual blind authoring with adjudicator
- Two authors work on the same task independently. For example, the authors may both map the same set of concepts to a target code system. Any discrepancy between the work of the two authors is automatically detected. An independent adjudicator then reviews the discrepancies and decides which author's work to approve. A simple discrepancy check of this kind is sketched below.
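The sketch below shows one way the automatic discrepancy detection step could work for dual blind mapping, assuming each author's work is available as a dictionary from source SCTID to target code. The function name, identifiers and target codes are hypothetical placeholders, not the output of any particular SNOMED CT mapping tool.

```python
# Minimal sketch of discrepancy detection for dual blind authoring, assuming
# each author's map is a dict of source SCTID -> target code.
# All names, identifiers and codes below are illustrative placeholders.

def find_discrepancies(map_a: dict[str, str], map_b: dict[str, str]) -> dict[str, tuple[str | None, str | None]]:
    """Return every source concept where the two authors disagree, including concepts mapped by only one author."""
    discrepancies = {}
    for source in map_a.keys() | map_b.keys():
        target_a, target_b = map_a.get(source), map_b.get(source)
        if target_a != target_b:
            discrepancies[source] = (target_a, target_b)
    return discrepancies

if __name__ == "__main__":
    author_a = {"22298006": "I21.9", "73211009": "E14"}    # author A's independent map
    author_b = {"22298006": "I21.9", "73211009": "E11.9"}  # author B's independent map
    for source, (a, b) in find_discrepancies(author_a, author_b).items():
        print(f"Adjudication needed for {source}: author A -> {a}, author B -> {b}")
```

Only the disagreements are passed to the adjudicator; concepts where both authors produced the same target are approved without further review.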