

What to do with Evaluation Procedures?


21 Comments

  1. At Nebraska we have a contract with the FDA to evaluate some of their proposed tools for interoperability of laboratory data. In preparing our material for them, I authored the enclosed document (also attached to the last minutes), which discusses end-to-end information modelling from the clinician order to the final result placed in the EHR. It employs some old ideas we have discussed at the LOINC committee in the past, but attempts to frame a proposed response to the discussion we had at our last F2F meeting on the question of Evaluation Procedures and their semantic overlap with Observable entities.

    In short, I suggest that we use a subset of the Observables concept model to fully define Evaluation Procedures and to move the body of concepts into Observable entities as a first-level subhierarchy. I explain the rationale in the enclosed.

    Jim

  2. James R. Campbell,

    I am not sure I understand the need for a "performable" hierarchy. I was also unclear about the difference between the orderable and the resultable in your example. They appear to have the same attributes and values.

    1. Recall that an ontology is a computable conceptual view of the real world. Speaking as a clinician and scientist, I don't think that 'Evaluation Procedures' are necessarily needed for interoperation of EHR results, but it is not surprising that they were the first elements to appear within these semantics in SNOMED CT, because it was created by a bunch of lab directors! They viewed the reporting of these lab data as a procedure that results in some observation result that is then passed to the clinician. The clinician often doesn't care about the details of how the glucose level was determined, and so their conceptualization includes other defining elements that the lab director ignores, such as assuring that the blood draw was two hours after a meal.

      I was making the observation that a conceptual definition of these Evaluation Procedures seems possible using a subset of attribute/values from the Observables concept model.

      So, given the semantic space in SNOMED CT currently occupied by Evaluation procedures, we could say we don't care anymore and deactivate the concepts, but the poor blokes who have been using these concepts within their extension namespace as defining values for the 'Interprets' attribute would suddenly find they have a lot of work to do to keep up with SNOMED CT evolution. I think they would have every right to be outraged.

      Since, as I assert in the proposal I forwarded, the Observables concept model allows us to fully define PERFORMABLES (= Evaluation Procedures), we can support everyone in their own corner of the sandbox: clinicians, lab directors, researchers and public health.

      Jim Campbell

  3. Daniel Karlsson wrote:

    Jim and Jim,

    I’m not that comfortable with creating two concepts with the same definition and just the addition of “orderable” as a primitive parent. Although it’s better than what we have today, I believe there will still be more than a slight chance that users will be confused about which kind to choose for storing and querying. There are still the pure orderables which cannot in any meaningful way be reported separately. I don’t think that pure observables (i.e. things that inherently can be observed but not ordered) exist.

    The proposal to merge Observables and Evaluation procedures I'm all for, but for me it could go either way, and it seems from the previous discussion that acceptance might be less of an issue moving O → E than vice versa. Still, I'm open to anything that would resolve the standstill.

    Thanks,
    Daniel

  4. I understand your point Daniel, but what I am proposing, by segregating the methodological and technical aspects of the process into a separate subhierarchy within Observable entities, would essentially create a 'methods/procedures' grouper class in Observable entities. I think that is how 'Evaluation procedures' should actually be modelled semantically, as reflected in the ontological view imposed by the SNOMED CT concept model. Even though I don't find a compelling use case for that grouper class myself, I suggest that we are not looking at this from the standpoint of the scientists who actually manage the clinical laboratories, for whom this subhierarchy of observables would basically represent the view of 'all test results by measurement protocol/procedure'. For example:


    The PERFORMABLE (Quality) class would be defined:

    ISA Observable entity

    Component << Substance

    Inheres in << Substance

    (Technique << Measurement technique\Lab chemistry technique)


    We would redefine the Evaluation procedure 36048009 as

    36048009 |Glucose measurement procedure (procedure)| → (fully defined)

    ISA Observable entity

    Component = Glucose (substance)

    Inheres in = Blood (substance)


    And the classifier would create a ‘lab scientist view’ which has 36048009 as a node that subsumes all the lab tests that are blood glucose measurement results, like:


    Fingerstick blood sugar two hour post prandial (fully defined)

    Component Glucose (substance)

    Inheres in Blood (substance)

    Direct site = Capillary blood specimen

    Precondition = 2 hours after a meal

    Property = Mass concentration

    Time aspect = Single point in time

    Technique = Measurement technique\Lab chemistry technique\Automated test strip technique
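
    To make the subsumption step concrete, here is a minimal sketch in Python; a toy is-a table and plain attribute-name strings stand in for a real SNOMED CT classifier and attribute identifiers, so it illustrates the idea rather than the actual machinery:

    # Minimal subsumption sketch: a candidate concept is placed under a grouper
    # when every attribute/value in the grouper's definition is matched or
    # refined by the candidate. Attribute names are illustrative strings, not
    # SNOMED CT attribute identifiers; the is-a table is a toy stand-in.

    SUBTYPES = {  # value -> set of ancestors (including itself)
        "Glucose (substance)": {"Glucose (substance)", "Substance"},
        "Blood (substance)": {"Blood (substance)", "Substance"},
    }

    def subsumed_by(value, ancestor):
        return ancestor in SUBTYPES.get(value, {value})

    def definition_subsumes(grouper, candidate):
        return all(attr in candidate and subsumed_by(candidate[attr], val)
                   for attr, val in grouper.items())

    # Redefined 36048009 (glucose measurement) as an Observable entity grouper
    glucose_measurement = {
        "Component": "Glucose (substance)",
        "Inheres in": "Blood (substance)",
    }

    # Fingerstick blood sugar two hour post prandial (fully defined)
    fingerstick_2h_pp = {
        "Component": "Glucose (substance)",
        "Inheres in": "Blood (substance)",
        "Direct site": "Capillary blood specimen",
        "Precondition": "2 hours after a meal",
        "Property": "Mass concentration",
        "Time aspect": "Single point in time",
        "Technique": "Automated test strip technique",
    }

    print(definition_subsumes(glucose_measurement, fingerstick_2h_pp))  # True

    A real classifier does this job across the whole Observable entity hierarchy, so the 'lab scientist view' falls out of the definitions rather than being curated by hand.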

  5. Looking at this example leads me to think that creating a separate PERFORMABLE class might be unnecessary if we crafted the migration plan to indicate the information management implications of the node in the classified Observable entity ontology. Just as Clinical findings and Body structures have a number of semantic subclasses, perhaps the utility of the entire semantic space would be enhanced by explicitly declaring the subtype of information within an information model superstructure. Hence 36048009 |Glucose measurement procedure (procedure)| becomes 36048009 |Glucose measurement procedure (measurement process)|, and

    Fingerstick blood sugar two hour post prandial (observable entity)  becomes Fingerstick blood sugar two hour post prandial (measurement orderable).


    Moving content from Procedures would require that true duplicates of Observables be retired and that ambiguous or vague concepts be migrated with 'may be a' in the history.

    Jim Campbell


  6. We are all attempting to build lab result sets out in the world, of course, and need a clear line. I share everyone's need for a solution. To me it seems that we are talking about two different things:

    a) how to report the method by which a test was done when reporting a result and

    b) how to manage the orderable/performable/reportable split. 

    Neither of these is necessarily a SNOMED issue in isolation (FHIR), but they seem to be being treated that way. Lab performables of course also relate to kits and devices, so that is something different again. I suggest we stick to what Clinical SNOMED can do, as performables impinge on drugs and devices territory. We might usefully treat that as a 'black box' for the purposes of orderables/reportables here?

    We have an existing model for a) above. There is a clear current model using <272394005 | Technique (qualifier value) | as a value set for the Technique attribute. This easily supports the addition of method to the 'thing/property/system' triple. It is not helpful to have intimations of radical change when working in this area. It would not be disruptive to simply populate and restructure the incomplete and unstructured hierarchy <272394005 | Technique (qualifier value) | into a more serviceable form, with a header to group evaluation methods more efficiently. This is long overdue and is holding back development. If the content that populates this hierarchy is transferred from the procedure hierarchy then great (methods have always sat uncomfortably there), as long as the content is of a technique/method qualifier type and not a semantic procedure class usable directly in a record.
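
    As a purely illustrative sketch of consuming that value set (Python; the terminology server base URL is hypothetical, while the ECL string and the SNOMED CT implicit value-set URI pattern express the <272394005 binding):

    # Hedged sketch: expanding the Technique value set through a FHIR
    # terminology server's ValueSet/$expand operation, using the SNOMED CT
    # implicit value-set URI for an ECL expression. The server base URL below
    # is hypothetical.
    import json
    from urllib.parse import quote, urlencode
    from urllib.request import Request, urlopen

    FHIR_BASE = "https://terminology.example.org/fhir"   # hypothetical server
    ecl = "< 272394005 |Technique (qualifier value)|"
    implicit_vs = "http://snomed.info/sct?fhir_vs=ecl/" + quote(ecl, safe="")

    query = urlencode({"url": implicit_vs, "count": 20})
    req = Request(f"{FHIR_BASE}/ValueSet/$expand?{query}",
                  headers={"Accept": "application/fhir+json"})
    with urlopen(req) as resp:
        expansion = json.load(resp).get("expansion", {})

    for concept in expansion.get("contains", []):
        print(concept["code"], concept.get("display", ""))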

    I think the fingerstick example is fine, but this is no place for the act of fingersticking or measuring as a performable. From the above, we seem to be radically loosening semantic principles by turning clear 'act' procedure content such as 'measurement of x' type procedures (performables or orderable tests) into observable entities while they still read semantically as acts. 'Measurement of x (procedure)' is not a concept that can be transformed into an observable. 'Observation method' is a qualifier value. 'Measurement method' is a qualifier value. RT-PCR is a method qualifier value. I can't see how 'Evaluation procedure ISA Observable entity' is tolerable.

    I certainly have no wish to preserve measurement procedures (NB: in the lab domain) and I agree they are more a legacy artifact than a useful set (some came from minimally worded Read codes in the UK, crudely classed there as 'procedures but usable as observables' and therefore elaborated without a use case into SNOMED procedure formalisms). I would prefer to see them inactivated and pointed to <272394005 | Technique (qualifier value) |. However, I believe, having analysed the evaluation procedure hierarchy and compared it with panel/battery type results, that there is still a subset of these that need to be retained, or at least reviewed, because they don't have neat replacements as observables or method qualifier values.

    I can't see an ontologic case for ignoring act/state semantic rules and making performables into observables, or having both somehow. The value set for Technique works (if updated). It ain't broke. Otherwise this calls into question the whole basis of procedure-observable-finding as an overall model.

    FWIW I think the observable (which is in a sense a question without an answer), or at least a non-specific form of it, can be deployed as the orderable with a null value and returned with the value appended as the reportable (or used as a container code for panels/batteries). This is simple, trackable and non-disruptive to implement. One concept. No confusion.
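
    A minimal sketch of that single-concept pattern (Python; the field names and the placeholder code are illustrative, not drawn from any particular standard):

    # One observable concept, two uses: sent out as the order with a null
    # value, returned as the report with the value appended. The code string
    # is a placeholder, not a real identifier.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LabStatement:
        observable_code: str
        observable_display: str
        value: Optional[float] = None   # None while it is still an open question
        unit: Optional[str] = None

        @property
        def is_order_form(self) -> bool:
            return self.value is None   # "a question without an answer"

    order = LabStatement("glucose-observable-placeholder", "Glucose, serum/plasma")
    report = LabStatement("glucose-observable-placeholder", "Glucose, serum/plasma",
                          value=5.4, unit="mmol/L")

    assert order.observable_code == report.observable_code   # one concept
    print(order.is_order_form, report.is_order_form)          # True False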

    1. Thanks Sarah Harry,

      I believe there are a number of different categories involved in the current Evaluation procedure hierarchy, including things that are techniques as well as observables, but also other more generic concepts that do not fit well into either category. Imaging would be one particular area where the evaluation procedures are not about a particular property while being more than the application of a technique. 'Assessment of X' and 'Evaluation of X system' are two other frequent patterns that are neither a pure technique nor a distinct property; some may be panels.

      Regarding ontology, there is no difference between an observable and (most, see above) evaluation procedures; both are representations of procedures for observing some property, or of a property observed by some procedure. It's the same thing looked at from two perspectives. SNOMED CT represents types of things, so the distinction between ordered and "resulted" is not relevant either.

      /Daniel

      1. EPR instances of a measurement procedure class can be undated (or future dated) e.g. as the means of expressing a plan to measure something. The method by which and the location where they are to be measured may legitimately not be stated, or even known. More generally, they can meaningfully be tracked around an Act Lifecycle state transition engine and may in fact never be fulfilled.

        EPR instances of an actual observed result class by definition exist only at one specific point in time, and it would be meaningless to track them around an Act Lifecycle state transition engine. All aspects of their measurement are necessarily linked to instances: who made the measurement, how, using exactly which analyser, in which room of which building of which organisation etc etc.

        If the sets of all EPR instances to which the notions of "requestable" and "resulted" would attach behave so fundamentally differently, and along such clearly dichotomous lines determined by whether it's a request or a result, then to my eyes this strongly suggests that those two underlying classes are not interchangeable either.

        Of course, there are real challenges in devising a naming convention to clearly reflect which side of the orderable/resulted distinction you're looking at. And, in combination with information systems that don't attempt to enforce (or leverage) the simple underlying dichotomy, the result has been coding chaos.

        But my concern has always been that conflating the request and its result into a supposed single ontological class is not the only way to resolve this problem. What we see is confusion, not redundancy of representation. And so the other way to go would be a more rigid separation, but this is a solution that would obviously require most of the refactoring work to be performed by system developers and not by SNOMED itself.

        I remain concerned that we're favouring a less good solution mainly because it is within our gift to deliver it, and not because it is the best solution.

        1. If it is expected that a single SNOMED CT code should comprise an entire EPR instance, I agree we have a problem. However, although technically possible, supporting such poor information model implementations seems like a corner case. All known information model standards help keep those two cases separated, removing the need for duplication in SNOMED CT. If this works well for the LOINC and NPU terminologies, why wouldn't it for us?
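
          For illustration, a hedged sketch of that separation in FHIR-shaped resources (the code below is a placeholder, not a real NPU or SNOMED CT identifier; only the resource skeletons and the shared code matter):

          # The request/result distinction is carried by the resource type, so
          # the same code appears in both without duplication in the terminology.
          coding = {"system": "http://example.org/placeholder-codesystem",
                    "code": "GLUCOSE-P", "display": "Glucose, plasma"}

          service_request = {           # the request
              "resourceType": "ServiceRequest",
              "status": "active",
              "intent": "order",
              "code": {"coding": [coding]},
          }

          observation = {               # the result
              "resourceType": "Observation",
              "status": "final",
              "code": {"coding": [coding]},
              "valueQuantity": {"value": 5.4, "unit": "mmol/L"},
          }

          assert service_request["code"] == observation["code"]   # same code, no duplicate concept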

          There are still issues to discuss and solutions to compare, which in turn need to be balanced against the need for stability in existing implementations.

          There is duplication of content today, e.g. blood pressure taking vs. blood pressure, and at least our SNOMED users are seriously confused about which to choose.

          1. The trouble is that whilst it may be true that current information standards include information-model-side classes that themselves capture and enforce the distinction between "is this a request or a result?", so that the terminology does not also have to make that distinction, many operational and deployed health care systems are not compliant with these modern standards but are based on much older and more naive information models.

            We're caught in a deadly embrace: if we engineer to the future we risk making the present seriously unsafe. If we engineer to the present, the future can never happen.

            1. I've been lucky not to stumble upon such systems (wink). The systems I have seen, mostly only from the "outside", have supported the use of EDIFACT, CDA, and various homebrew standards from the '90s until today (still no FHIR in any known production systems). The '90s lab systems (likely designed in the late '80s) supported the use of the same NPU code for both requesting and reporting.

              1. Yes - I think most are able to map their real internal information models to HL7 and now FHIR messaging paradigms, but I am suspicious that some of the mappings required may be driven off tests on the terminological class of the coded part of EPR statements, such as: how to tell which EPR entries should be sent as a FHIR Observation and which as a FHIR ServiceRequest. It's a tricky area to investigate, but I have learned that it is seldom possible to underestimate the level of sophistication truly out there.
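
                A hedged sketch of the kind of terminology-driven routing I mean (Python; the is_descendant_of callable is a hypothetical stand-in for whatever terminology service the system actually consults):

                # Hypothetical routing rule: the terminological class of the
                # coded part decides which FHIR resource an EPR entry becomes.
                def route_epr_entry(code, is_descendant_of):
                    if is_descendant_of(code, "71388002"):   # |Procedure|
                        return "ServiceRequest"
                    if is_descendant_of(code, "363787002"):  # |Observable entity|
                        return "Observation"
                    return "Unmapped"                        # needs human review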

                1. I realize that I have assumed that the scope is to represent not a request or a report, but rather what is requested or reported on. I guess we need to agree on a scope first. Thanks for helping me see my prejudices!

  7. Sarah Harry,

    The short response to your missive is that I agree with 99% of what you say, especially around the use of Technique and your comment "I think the observable (which is in a sense a question without an answer), or at least a non-specific form of it can be deployed as the orderable with a null value and returned with the value appended as the reportable". This is generally the approach LOINC has taken from the beginning with its ORD/OBS designation in the LOINC database. The challenge, as we try to consolidate these two hierarchies, has always been that some users have an expectation that there be a "different" code for the orderable from the reportable/resultable. If I am understanding you correctly, you are moving away from that expectation.

    The other issue that you refer to is the notion of orderable panels and the use of that order as a container of resultables. I would like you to expand on how that would work and, given the limitation of the observable model to single observations, how an integration of the two hierarchies using panels might be attempted. Your comments are especially relevant as much of the pushback on combining the two hierarchies has come from the UK.

    1. Thank you for the response Jim Case. I think the resistance is more about what is being done to merge the hierarchies than the principle of merging them.

      You reach the nub when identifying how to handle containers and their relationships to other containers and singletons. Some of this is in the FHIR validation space but needs consistent SNOMED content to work on. I believe my reviews/analyses of panel/battery/profile types show ways this might operate using consistent SNOMED representations without destabilising the observable entity model and semantics.

    2. At Nebraska we have 100% of all clinical laboratory (327 million facts), anatomic pathology (35,661 facts) and molecular pathology (524,3b3) results coded with the fully defined Observable entities concept model. As Resultable observations, they employ all of the Quality and Process Observable defining attributes, and the classifier organizes them for us nicely and allows DL queries on our clinical/research datasets, serving all the analytics use cases of our enterprise.

      We have deployed, as a test case, observable panel codes for both lab and clinical purposes. We have done this using the TECHNIQUE attribute and employing value sets from Assessment scales, such as the Folstein Mini-Mental Status exam for clinical and common US panel definitions such as the Basic Metabolic Panel for laboratory.

      From the DL perspective, the panels are simply SNOMED CT groupers but correspond in the order-management cycle to Orderables. They are not Resultable themselves but may organize the set of observation results that are spawned from the Orderable. The concept for the MMSE panel is:

      1095731000004106 |Folstein Mini-Mental Status Observable panel (observable entity)| (fully defined)

      ISA Observable entity panel

      TECHNIQUE 273617000|Mini-Mental State Examination (assessment scale)|

      If the enterprise chooses, they could propagate the TECHNIQUE from the Order instance to the final Resultable instance and add the panel TECHNIQUE as a refinement of the RESULTABLE code for lab. But in practice, linking the Result back to the Order within the EHR information model is tractable and serves the analytics use case of separating Results ordered as panels from standalone testing requests (a serum sodium may be ordered within the Basic Metabolic Panel but also as a single test).
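
      As an illustrative sketch of that linkage in FHIR terms (Observation.basedOn carries the back-reference to the order; the resource id and the result code below are placeholders):

      # The result is linked back to the panel order in the information model,
      # so panel membership is recoverable at analytics time without refining
      # the result code itself. Ids and the result code are placeholders.
      panel_order = {
          "resourceType": "ServiceRequest",
          "id": "mmse-order-001",
          "status": "active",
          "intent": "order",
          "code": {"coding": [{"system": "http://snomed.info/sct",
                               "code": "1095731000004106",
                               "display": "Folstein Mini-Mental Status Observable panel"}]},
      }

      mmse_score = {
          "resourceType": "Observation",
          "status": "final",
          "basedOn": [{"reference": "ServiceRequest/mmse-order-001"}],  # link to the order
          "code": {"coding": [{"system": "http://snomed.info/sct",
                               "code": "placeholder-score-observable"}]},
          "valueQuantity": {"value": 27, "unit": "score"},
      }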

      Jim Campbell

  8. Sarah Harry,

    We will be discussing this topic at the EAG Conference call on October 7.  If you are available to provide your thoughts, that would be helpful in getting this issue resolved.

  9. I am posting for comment a presentation I plan to make Tuesday to the editorial advisory group. Please speak your mind.

    Jim

    Semantic Interoperation of Lab with SNOMED CT.pptx  attached

    1. Thank you James R. Campbell (Jim). I certainly acknowledge your more hands-on understanding of the field, but I'll share some concerns about how well this fits into HL7 messages and about the orderables and performables use cases, which might help you in your presentation.

      I think the reportables part of this is fine but:

      Orderables: I am sceptical that a clinician could reasonably know enough about the result form to order the orderable you describe, or would even want to, given the prescriptive level of detail (and also noting the existence of a specimen container with the orderable, while the result might be reported in a different specimen type). My expectation would be for a more generic or open-ended orderable, something considerably less prescriptive of what is performed or reported.

      Performables: I think I noted before a concern about how lab performables might best be recorded, and that the test kit, test kit type, its batch number, analyser, analyser ID, operative, time-stamp, etc. seem to come into play here; the use case seems to need a differently shaped solution, and one that need not necessarily be reflected much in the reportable or orderable, as lab protocols and SOPs are laid out independently. I shall bow out if lab information managers are content with your performables example, but I'd question whether it meets their needs in QA, validation and audit, methodology reporting, etc. I would also question the centrality of the performable as the driver for the orderable and result if characterised as above, because there seems to be quite a distance between, say, 'Respiratory virus screen (orderable)', the set and sequential line of performable lab tests thus triggered, and the report received by the requesting clinician. Likewise 'Acylcarnitines profile (orderable)', 'Autoantibody profile (orderable)', X serology, etc.

      Service requests need a container code. If we have all orderables and reportables in the observable entity hierarchy, we will have created some observables (panels) that cannot take values, do not model properly alongside singleton observables, and cannot be recognised as containers in HL7. If, alternatively, we allow for procedure orderables, we risk ending up with a confusing duplicate procedure code for every reportable observable code. For me, the solution is to retain panel/profile/screen/battery types as procedures, and thus as designated container codes, while a reportable singleton observable entity code, or perhaps a very generic substance/cell/organism measurement procedure code (and subsidiary panel codes), can be placed in the orderable container. This provides for an orderly and validatable procedure/observable entity split in FHIR profile fields, as in the sketch below. I think there is an irreducible nub of panel/battery/profile/screen etc. orderables that just has to remain as a procedure set for the message model to work predictably. Others will be able to correct me.
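
      A hedged validation sketch of that split (Python; the hierarchy test again stands in for a terminology service, and nothing here is a real FHIR profile):

      # Proposed split: the container (panel/profile/screen/battery) code on a
      # request must be a procedure; each reportable placed in it must be an
      # observable entity. is_descendant_of stands in for a terminology service.
      PROCEDURE_ROOT = "71388002"      # |Procedure (procedure)|
      OBSERVABLE_ROOT = "363787002"    # |Observable entity (observable entity)|

      def validate_order(container_code, reportable_codes, is_descendant_of):
          problems = []
          if not is_descendant_of(container_code, PROCEDURE_ROOT):
              problems.append(f"container {container_code} is not a procedure")
          for code in reportable_codes:
              if not is_descendant_of(code, OBSERVABLE_ROOT):
                  problems.append(f"reportable {code} is not an observable entity")
          return problems               # empty list means the request validates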

      I trust this is helpful.  It may merely show up my ignorance.

      Sarah.


  10. Daniel Karlsson   Hi Daniel et al. A late question arose in the Observables meeting of 2022/3/21 as to whether there is a contradiction between (a) maintenance of procedure codes for evaluations using assessment scales leading to a scale score, and (b) the inactivation of such procedure codes for laboratory evaluations. Much of this is reviewed above so I shan't repeat it (much), but merely say that there are different types of orderables, so the binary is not simple (a passing thought: SNOMED would benefit from a classification of orderables, at least for lab; happy to pitch in on that from work I've done in the past). My main point is that an equivalence is being suggested that IMHO isn't there, so the question needs that assumption unpacking.

    I think a true equivalence would need to bring lab orderables into logically expressive line with scale orderables but I don't think that is useful or do-able.  It would mean something like:

    Assessment of renal function of subject by measurement of creatinine in serum. (Not simply, 'measurement of creatinine in serum')

    This example mirrors (I think) the rough model for scale orderables, although there the reason is implicit in the scale definition. There is no case for this model in lab procedures. Indications for procedures should not be combined with procedures in principle (although unfortunately there are many examples). A clinician monitoring dialysis efficacy in an anephric patient is not looking at renal function; it is also a simplistic analogue of renal function. What is performed as a result of the orderable is not always, and cannot simply be, defined in the orderable; it is a lab-space issue, much as a detailed surgical procedure is a theatre-space issue. Such nesting is not needed. (Note I am not party to the UK review of dual coding lab procedure and observable for everything.)

    I think there is a deeper division between observing as observer and measuring as actor, which is sometimes useful, sometimes not. The same goes for how evaluation, examination, measurement, observation, capture and recording are deeply nested. Basically, I don't think we are comparing like with like, and exploding everything into aligned forms along this nesting, while logical and reproducible, isn't obviously useful. Therefore I support the existing pattern of scale procedure orderables with score observables, and the proposed lab pattern of retaining panels and other orderables (screens, profiles, studies etc.) as procedures, with individual reportable results as observables. I think the distinction is justifiable notwithstanding inevitable edge cases.