Date & Time
20:00 to 22:00 UTC Wednesday 25th March 2020
Location
Zoom meeting: https://snomed.zoom.us/j/471420169
Goals
- To finalize syntax for term searching in ECL
Attendees
- Chair: Linda Bird
- Project Group: Anne Randorff Højen, Daniel Karlsson, Ed Cheetham, Michael Lawley, Peter Jordan
Agenda and Meeting Notes
Description | Owner | Notes |
---|---|---|
Welcome and agenda | | |
Concrete values | Linda Bird | ON HOLD: SCG, ECL, STS and ETL are ready for publication, but on hold until after the MAG meeting in April, which will confirm the requirement for a Boolean datatype. |
Expression Constraint Language | Linda Bird | |
Querying Refset Attributes | Linda Bird | Proposed syntax to support querying and returning alternative refset attributes (to be included in the SNOMED Query Language). |
Returning Attributes | Michael Lawley | Proposal (by Michael) for discussion. For example, I can write `<< 404684003 \|Clinical finding\| : 363698007 \|Finding site\| = << 66019005 \|Limb structure\|` or `<< 404684003 \|Clinical finding\| . 363698007 \|Finding site\|`, but I can't get all the attribute names that are used by `<< 404684003 \|Clinical finding\|`. |
Reverse Member Of | Michael Lawley | Proposal for discussion: which refsets is a given concept (e.g. 421235005 \|Structure of femur\|) a member of? |
Expression Templates | | Example: `[[+id]]: [[1..*] @my_group sameValue(morphology)] { \|Finding site\| = [[+id (<< 123037004 \|Body structure (body structure)\| MINUS << $site[! SELF ]) @site]], \|Associated morphology\| = [[+id @my_morphology]] }`<br>Note that the QI Project is coming from a radically different use case. Instead of filling template slots, it looks at existing content and asks "exactly how does this concept fail to comply with this template?"<br>For discussion: `[[0..1]] { [[0..1]] 246075003 \|Causative agent\| = [[+id (< 410607006 \|Organism\|) @Organism]] }` Is it correct to say that either one of the cardinality blocks is redundant? What are the implications of 1..1 on either side? This is less obvious for the self-grouped case.<br>Road forward for SI.<br>Additional note: the QI project is no longer working in subhierarchies. Every 'set' of concepts is selected via ECL; in fact, most reports should now move to this way of working, since a subhierarchy is the trivial case. For a given template, we additionally specify the "domain" to which it should be applied via ECL. This is much more specific than using the focus concept, which is usually the proximal primitive parent (PPP), e.g. Disease. FYI Michael Chu. |
Description Templates | Kai Kewley | |
Query Language - Summary from previous meetings | | FUTURE WORK. Examples: version and dialect. |
Confirm next meeting date/time | | Next meeting is scheduled for Wednesday 22nd April 2020 at 20:00 UTC. |
Comments
Michael Lawley
https://www.w3.org/TR/charmod-norm/#performNorm
Ed Cheetham
Following up on our homework: UCA/CLDR/Case/accent folding + Unicode collation - What advice should we be giving in the specification?
I have personally found trying to answer this torture!
Ideally we want to try to get predictable (per-locale) search behaviour. This could then be neatly summed up in a sentence in the guidance, something like this:
“The search specification assumes that descriptions are indexed for search using the default UCA, or UCA tailored for a specific language or locale according to CLDR. The selected locale can be specified using the ‘language=[ISO 639-1 code]’ filter. Descriptions indexed this way are compared with unmodified search tokens.”
However, it looks as though 'default UCA' doesn't ignore case (although, bafflingly, how case is handled is predominantly specified using a parameter called 'strength'!). The UCA specification states that "…Language-sensitive searching and matching are closely related to collation…", but this also indicates that they are not the same. The required collation strength for case-insensitive searching is 'secondary', whilst the default for collation is 'tertiary'. This may be explained here and/or here, and is probably buried somewhere deep in here, but to me it is actually most clearly described by the kind people who maintain the mongoDB documentation.
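To make the strength levels concrete, here is a minimal, self-contained sketch using the JDK's `java.text.Collator` (an illustration only: the JDK collator approximates UCA/CLDR-tailored collation rather than implementing it exactly):

```java
import java.text.Collator;
import java.util.Locale;

public class CollatorStrengthDemo {
    public static void main(String[] args) {
        Collator en = Collator.getInstance(Locale.ENGLISH);

        // Default strength is TERTIARY: case differences are significant.
        en.setStrength(Collator.TERTIARY);
        System.out.println(en.equals("angstrom", "ANGSTROM")); // false

        // SECONDARY ignores case (a tertiary difference) but still
        // distinguishes accents (secondary differences).
        en.setStrength(Collator.SECONDARY);
        System.out.println(en.equals("angstrom", "ANGSTROM")); // true
        System.out.println(en.equals("angstrom", "ångström")); // false

        // PRIMARY compares base letters only: case and accents are ignored.
        en.setStrength(Collator.PRIMARY);
        System.out.println(en.equals("angstrom", "ångström")); // true

        // In a Swedish collator, å and ö are distinct letters (primary
        // differences), so even PRIMARY strength keeps them apart.
        Collator sv = Collator.getInstance(Locale.forLanguageTag("sv"));
        sv.setStrength(Collator.PRIMARY);
        System.out.println(sv.equals("angstrom", "ångström")); // false
    }
}
```

The Swedish comparison at the end shows why the locale tailoring matters: å is a distinct primary letter there, not an accented 'a'.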
If we therefore need to add something about case insensitivity to the assumption statement above (and possibly even make case sensitivity configurable in our filters), could we just say: "The search specification assumes that descriptions are indexed for search using case insensitive default UCA…"?
From a practical point of view this is tempting (commercial product configurations seem to use the "_CI" notation when setting collation, e.g. `mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci`). However, if we are going to reference UCA then it's worth noting that the Unicode materials don't seem to use the phrase 'case insensitive'. Instead they talk in terms of secondary or tertiary 'strength' (as does the configuration page of mongoDB).
On balance I suspect that if we make case sensitivity configurable then we should name the filter ‘case=’ with values of ‘case sensitive’ and ‘case insensitive’ (implicit default). The alternative is to name the filter ‘strength’ with values of ‘secondary’ and ‘tertiary’ and so on. Whilst the latter looks more principled I suspect it’s just confusing.
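If the filter were spelled 'case=', an implementation could map the two values onto collator strengths internally. A hypothetical sketch (the filter name and its values are just the proposal above, not agreed syntax):

```java
import java.text.Collator;
import java.util.Locale;

public class CaseFilterMapping {
    // Hypothetical mapping from the proposed 'case=' filter values to the
    // UCA strengths they would hide from users of the filter.
    public static Collator forCaseFilter(String value, Locale locale) {
        Collator collator = Collator.getInstance(locale);
        switch (value) {
            case "case sensitive"   -> collator.setStrength(Collator.TERTIARY);
            case "case insensitive" -> collator.setStrength(Collator.SECONDARY); // implicit default
            default -> throw new IllegalArgumentException("Unknown case filter value: " + value);
        }
        return collator;
    }

    public static void main(String[] args) {
        Collator c = forCaseFilter("case insensitive", Locale.ENGLISH);
        System.out.println(c.equals("Déjà", "déjà")); // true: case folded, accents kept
        System.out.println(c.equals("Deja", "déjà")); // false: accents still matter
    }
}
```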
I’ll stop there, but will just add for info that the W3C reference we looked at last time was coming at this from a different direction. Their concern relates to string matching as it applies to the syntactic content of web pages etc. Consequently their recommendation is for a normalization step that changes nothing - to avoid changes in element names/markup. Other content (what that paper calls natural language content) may well benefit from extensive normalisation - closer to case insensitive UCA transformation.
Ed
Michael Lawley
Thanks Ed, this research was really valuable input
Ed Cheetham
Thanks Michael - appreciated. Ed
Michael Lawley
The documentation for this stuff in Lucene (https://lucene.apache.org/core/8_5_1/analyzers-icu/index.html) has now led me to chapter 3 of the Unicode spec https://www.unicode.org/versions/Unicode13.0.0/ch03.pdf that talks about "Default Caseless Matching" with a variety of rules (eg D144, D145, ...) that might be a helpful reference for some.
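For anyone who wants to experiment with those definitions, a small sketch using ICU4J's full case folding (ICU4J is the library behind the Lucene analyzers-icu module; the helper method name here is ours, not an ICU API):

```java
import com.ibm.icu.lang.UCharacter;

public class CaselessMatch {
    // D144 (Default Caseless Matching): X and Y match if
    // toCasefold(X) == toCasefold(Y). ICU4J's foldCase applies the full
    // Unicode case folding that the JDK's equalsIgnoreCase only
    // approximates character-by-character.
    static boolean defaultCaselessMatch(String x, String y) {
        return UCharacter.foldCase(x, true)
                .equals(UCharacter.foldCase(y, true));
    }

    public static void main(String[] args) {
        // "ß" full-case-folds to "ss", so these match under D144 ...
        System.out.println(defaultCaselessMatch("straße", "STRASSE")); // true
        // ... but not under the JDK's per-character comparison.
        System.out.println("straße".equalsIgnoreCase("STRASSE"));      // false
    }
}
```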
Ed Cheetham
Thanks Michael
Again, the Lucene reference, rather like the MongoDB one, is refreshing in its simplicity when compared to the all-or-nothing complexity of the Unicode materials! After a bit of reading I'm not sure I'm any nearer an easy way of saying 'so long as everyone does this... they should get the same language-specific search behaviour'; maybe the Unicode conformance chapter provides a useful shorthand - I'm just not sure I can translate it into something easy to understand without losing an important aspect.
Meanwhile, I did find out that I can adopt an Emoji, and have just spent a playful half-hour experimenting with the international, Swedish and Danish SI browsers and a few search term variants:
angstrom
Ångström
ÅNGSTRÖM
ångström
Ångstrøm
ångstrøm
ÅNGSTRØM
angstrøm
Ång
tår
deja
déjà
DÉJÀ
sjögren
sjøgren
I'm going to have to assume they behave as intended in each language - not what I expected comparing it with the asymmetric search tables, but I don't think I understand them either! Linda - did you get a chance to ask about how the SI browser handles language-specific indexing?
Kind regards
Ed
Linda Bird
Thanks again Ed!
Re my homework... I'm working on it. Will hopefully have something to report at the meeting later.
Kai Kewley
Hi Ed Cheetham
The International Browser uses the Snowstorm API, which uses Elasticsearch. We fold diacritic characters into their simpler form to allow matching with or without the diacritics. For example, in English some terms contain é, such as "Déjà vu", but we expect to get a match whether or not the search term uses the same diacritics, for example "Dejà" or "deja".
However, as I am sure you are aware, the expectation of character folding is language dependent...
In Danish a term with the character "ø" should not be found when searching using "o". For example the concept "Ångstrøm" should not be found when searching using the term "Ångstrom".
Conversely, in Swedish the character "ø" is not considered an additional letter of the alphabet, so the concept "Brønsted-Lowrys syra" should be found when searching using the term "Bronsted".
This all works as expected using the Snowstorm API. We have implemented language specific character folding for each of the extensions we host. More languages can be added via configuration (see Snowstorm configuration).
Implementation notes: terms are indexed twice in Elasticsearch, in their raw form and with language specific character folding. When querying for matches we fold the search term using each configured strategy with a constraint to only match that folding against that language.
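As an illustration of the 'fold everything except the language's significant letters' idea, here is a minimal sketch in the spirit of what Kai describes (this is not Snowstorm's actual code; the exception sets and the explicit ø-to-o mapping are illustrative assumptions):

```java
import java.text.Normalizer;
import java.util.Locale;
import java.util.Set;

public class LanguageFolding {
    // Illustrative per-language exception sets, in the spirit of Snowstorm's
    // search.language.charactersNotFolded settings (these values are
    // assumptions for the demo, not the shipped configuration).
    static final Set<Character> SWEDISH_NOT_FOLDED = Set.of('å', 'ä', 'ö');
    static final Set<Character> DANISH_NOT_FOLDED  = Set.of('æ', 'ø', 'å');

    // Lowercase, then strip diacritics from every character the language
    // does not treat as a distinct letter. (Simplified: works per char,
    // so it ignores supplementary-plane code points.)
    static String fold(String term, Set<Character> notFolded) {
        StringBuilder out = new StringBuilder();
        for (char ch : term.toLowerCase(Locale.ROOT).toCharArray()) {
            if (notFolded.contains(ch)) {
                out.append(ch); // a distinct letter in this language: keep it
                continue;
            }
            // NFD splits base letter + combining marks; dropping the marks
            // handles å→a, ö→o, é→e, etc.
            String base = Normalizer.normalize(String.valueOf(ch), Normalizer.Form.NFD)
                                    .replaceAll("\\p{M}", "");
            // 'ø' has no canonical decomposition, so fold it explicitly
            // (ICU's folding filter covers such letters; this demo needs only one).
            out.append(base.equals("ø") ? "o" : base);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Danish: å and ø are distinct letters, so nothing is folded away.
        System.out.println(fold("Ångstrøm", DANISH_NOT_FOLDED));  // ångstrøm
        // Swedish: ø is not a Swedish letter, so it folds to o.
        System.out.println(fold("Brønsted", SWEDISH_NOT_FOLDED)); // bronsted
    }
}
```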
I created the configuration for this behaviour by asking members. I was not able to find an official source for this information.
I hope that helps!
Kai
Ed Cheetham
Thanks Kai
Yes, it does help. The challenge we're facing is to try and get the search filter elements of ECL to behave as predictably across implementations as its graph-based elements. Maybe this is an unrealistic goal, but it seems a little early to admit defeat. You've clearly given a lot of thought to how to optimise search (generally steps to increase sensitivity) whilst respecting the features of individual languages (generally in the direction of specificity). We also need to figure out the search requirements for QA, which I believe to have a greater emphasis on specificity.
I see that Elasticsearch uses Lucene, and Michael's Lucene reference above gives a really nice distillation of the text normalisation functions that can be configured. As with my recent Unicode documentation odyssey, however, the really tricky bit is tying up the 'standards' way of describing what can be configured with each application-specific way of managing it. There is no shortage of 'official' material (the Unicode consortium certainly like words) but I am struggling to turn this into a suitably terse form that describes a (language-specific) 'default' indexing and then systematic mechanisms for varying from this default.
The search.language.charactersNotFolded.{LanguageCode}={Characters} settings in Snowstorm, for example, make a lot of sense, but are the sets of characters identified in any way 'standard'?
As I say above, I was hoping that the significance of accented characters in search terms would be something akin to Unicode's explanation of asymmetric search, but instead I see that whilst å is explicitly NotFolded in Swedish, the token ång returns both en and sv descriptions that begin with a simple/unaccented 'ang' - I was expecting it only to return terms that began with ång. Hopefully others on the call tonight will be able to enlighten me!
Thanks again
Ed
Kai Kewley
Ed Cheetham the reason you see some descriptions starting 'ang' when searching using 'ång' is that the API has returned a mixture of Swedish descriptions starting 'ång' and also some English descriptions starting 'ang' ... because folding the 'å' character is acceptable in English.
If you filter the search results by the Swedish language (controls on the left) you will see only matches starting 'ång'.
Standards, yes... I was astounded when, after spending many hours looking for a standard in this area that covered the most common international languages, I found nothing. Crowd-sourcing a good configuration by asking tech-savvy SNOMED members seemed like a good alternative.
Ed Cheetham
Thanks Kai - yes, I think we figured the 'ång' thing out on the call!
Makes sense now, but 'filter the search results by the Swedish language' does something more stringent than I assumed. I assumed it would just leave behind all the Swedish terms, but it really means 'filter the results according to the Swedish language matching rules'. Consequently 1729 'Swedish' term matches are reduced to 234 'really Swedish' matches.
Ed
Michael Lawley
Thanks Kai, that is helpful.
On a slightly separate note, I noticed that the UK module is configured there as well:
codesystem.config.SNOMEDCT-UK=UK|999000031000000106
but that it appears to be out of date.
Kai Kewley
Michael Lawley thanks for being vigilant; if you have more accurate information, I'm all ears.
Michael Lawley
I'm not exactly sure when it happened (sometime last year?), but there was a change in the module structure for UK and 83821000000107 is now the module that aggregates the Clinical and Drug extensions.
Linda Bird
I'll second that Michael. Thank you very much Ed - really appreciate your research on this!
Ed Cheetham
Thanks Linda! Ed