Human review of all new and updated content in an extension is important to ensure its quality and correctness. It allows editorial rules that cannot be tested automatically to be checked by an author who was not directly involved in creating the content in question. Post-authoring review should be performed by individuals with both knowledge of SNOMED CT editorial principles and adequate clinical knowledge. In some situations, additional national linguistic or modelling guidelines also apply. A reviewer should, for example, be able to assess whether the definition of a concept has been authored correctly. The table below describes a range of different types of post-authoring review, and provides some examples of each. For further information, please refer to the Editorial Guide.
Concept review: To validate that the components created within the extension comply with editorial guidelines and are clinically correct.

Reference set review (all reference set types): To validate that all reference sets meet their user requirements, such as the scope, size, functionality and user acceptance criteria. For more information, please refer to reference set review and quality assurance.

Subset review: To validate that all members of the subset are within the intended scope of that subset, and that no component required to meet the subset's intended purpose is missing from the subset.

Map review: To validate that the map between each SNOMED CT component and the associated codes from the other code system is correct.
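A subset review of the kind described above can be partly automated before the human review step. The following is a minimal sketch, assuming the subset members and the intended scope are both available as sets of component identifiers (the identifiers shown are illustrative only; in practice the intended scope would typically be derived from the terminology, for example by an ECL query):

```python
def check_subset(subset_members, intended_scope):
    """Return (out_of_scope, missing) for a subset review.

    out_of_scope: members that fall outside the intended scope
    missing:      in-scope components absent from the subset
    """
    out_of_scope = subset_members - intended_scope
    missing = intended_scope - subset_members
    return out_of_scope, missing

# Illustrative identifiers, not a real subset
subset = {"22298006", "194828000", "999999999"}
scope = {"22298006", "194828000", "57054005"}

out_of_scope, missing = check_subset(subset, scope)
print(sorted(out_of_scope))  # candidates to remove or re-examine
print(sorted(missing))       # candidates to add to the subset
```

Both result sets would then be passed to a reviewer for a decision, since automated checks cannot judge whether the intended scope itself is clinically appropriate.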
Collaborative authoring and review approaches are recommended to produce high-quality content. Examples of authoring and review approaches include:
Dual blind authoring with adjudicator
Two authors work on the same task independently. For example, the authors may both map the same set of concepts to a target code system. Any discrepancy between the work of the two authors is automatically detected. An independent adjudicator then reviews the discrepancies and decides which author's work to approve.
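The automatic discrepancy detection in the dual blind approach can be sketched as a comparison of the two authors' maps. This is a hypothetical illustration, assuming each author's work is held as a dictionary from source concept identifier to target code (the identifiers and target codes shown are made up for the example):

```python
def find_discrepancies(map_a, map_b):
    """Compare two authors' independent maps (source concept -> target code).

    Returns the source concepts on which the authors disagree, together
    with each author's chosen target, ready for the adjudicator.
    """
    discrepancies = {}
    for source in map_a.keys() | map_b.keys():
        choice_a = map_a.get(source)  # None if this author skipped the concept
        choice_b = map_b.get(source)
        if choice_a != choice_b:
            discrepancies[source] = {"author_a": choice_a, "author_b": choice_b}
    return discrepancies

# Illustrative data only
author_a = {"22298006": "I21.9", "195967001": "J45.909"}
author_b = {"22298006": "I21.9", "195967001": "J45.901"}

for source, choices in find_discrepancies(author_a, author_b).items():
    print(source, choices)  # queued for adjudication
```

Concepts mapped identically by both authors are approved without further review; only the discrepancies reach the adjudicator, which keeps the adjudication workload proportional to the actual disagreement rate.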