Today I facilitated an IHE Workshop for the third time. This is always an interesting program. The first half of the half-day session is spent describing how IHE is structured, what it does and why, what the benefits are, how the processes work, et cetera.
The second half is done in five stages.
Step 0: We review the IHE profile proposal template.
Step 1: We brainstorm interoperability problems in healthcare, simply coming up with an unconstrained list of problems that IHE might be able to solve.
Step 2: The next step divides the room into teams of 3-5 people, each of which selects a particular problem to solve. Each team has to develop a profile proposal (short form), which includes four things:
A) Problem and Value Statement
B) Use Case
i) Current State
ii) Desired Future State
C) Available Standards
D) Systems Involved
Step 3: Teams present their proposals in 5-10 minutes and answer any questions from the class. I also gently critique the proposals, explaining what might be done to make them a little better.
Step 4: The class votes on the proposals. Because of the timing, the class dwindled a bit from 12 to 6 people (it was the last class on the last day of HL7). So, I didn't let teams vote for their own proposal.
One proposal today got overwhelming support, and it was from the ophthalmologist who worked solo with a bit of help from me. So, I'll be flying that one past some friends in IHE Eye Care to see if I can find a supporter of it to present at the next opportunity.
The problem described was lack of consistency in data collection on cataract surgeries. This is apparently a very common surgery (in general as well as in that specialty). The proposal was to develop either an openEHR archetype or a CDA document template that could be used to gather and report on the data pre- and post-surgery, with use of SNOMED CT or LOINC vocabulary, and restriction to appropriate units for reporting visual acuity. A 20/20 in the US translates into 6/6 here in Australia, but there are also log scales.
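To make the unit issue concrete, here is a minimal sketch (mine, not part of the student's proposal) of the arithmetic linking the notations: a Snellen fraction reduces to a decimal acuity, the US 20-foot and metric 6-metre denominators differ only by a factor of 20/6, and the common log scale (logMAR) is just the negative base-10 logarithm of the decimal value.

```python
# Minimal sketch of visual acuity notation conversions (illustrative only).
import math

def snellen_to_decimal(numerator: float, denominator: float) -> float:
    """Reduce a Snellen fraction (e.g. 20/40 or 6/12) to a decimal acuity."""
    return numerator / denominator

def decimal_to_logmar(decimal_acuity: float) -> float:
    """logMAR = -log10(decimal acuity); 0.0 corresponds to 20/20 (6/6)."""
    return -math.log10(decimal_acuity)

def snellen_us_to_metric(denominator_ft: float) -> float:
    """Convert a US 20-foot Snellen denominator to the 6-metre scale."""
    return denominator_ft * 6 / 20

# 20/40 in the US is 6/12 in Australia, decimal acuity 0.5, logMAR ~0.30
print(snellen_us_to_metric(40))           # 12.0
print(snellen_to_decimal(20, 40))         # 0.5
print(round(decimal_to_logmar(0.5), 2))   # 0.3
```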
I'll get the complete proposal from my student in my e-mail. We also looked at remote (home) monitoring, but that one didn't "win the prize". It had some valuable points and was well done; it just didn't have the same focus as the Cataract Surgery one. So, I'll take the output from that group and forward it to some folks in PCD next week as well, and make sure that the team at least gets feedback on what is available.
The last proposal was for ePrescribing, and had participation from AU, NZ, and CZ. The challenge here is that there really are NO common standards available for electronic prescribing across these regions, so the proposal was not terribly feasible. Even so, I promised to point them to the work being done by epSOS as a possible starting point.
Everybody gained something.
Next week I'll be doing something similar, but with much more limited time. Students will identify an interoperability problem and use existing IHE profiles, described to them earlier in the day, to design a solution to it. I won't have to provide as much background for them because they'll have been at the Connectathon conference and will have already toured the floor.
-- Keith
The Cataract Surgery project sounds very interesting and ideally suited to openEHR archetype or HL7-DCM based clinical content work-up leading to an IHE/CDA profile.
The issue you have already raised about visual acuity (6/6 UK/AUS vs. 20/20 US) is at the heart of this kind of detailed modelling effort. If you aim for a specific use-case, e.g. a US realm Cataract Surgery report document, the traditional domain analysis approach can work reasonably well, though it is still frustrating trying to secure clinical consensus. The problem is, of course, that the models developed are applicable only in that domain and use-case, whereas the need for a Visual acuity model lies across clinical practice from Diabetes care to Optometry. The difficulty is that all of these sub-domains may require somewhat different levels of granularity. The wider the realm, the wider the clinical context, and the harder it is to reach clinical consensus.
This UK Cataract referral document http://bit.ly/ee3raS (page 16) shows the much more granular model demanded by an optometrist. This is not required for diabetes care; nevertheless, we want all parties to communicate 'Visual acuity' at some minimal level of coherence. Traditionally, this will have been defined via a 'minimal dataset', but since there are multiple overlapping use-cases it becomes impossible to maintain semantic coherence, and the more granular, detailed requirements not covered by the minimum standard are worked up by different groups in isolation, losing 'information liquidity'.
The paradigm of openEHR archetype development is to set a clear scope for the clinical content of the archetype but within that scope to model as inclusively as possible. This inevitably leads to a somewhat bloated model and we use openEHR templates to constrain the archetype down to specific use-cases. My understanding of HL7 Detailed Clinical Model development is that it follows a similar 'maximal dataset' approach. Whilst this approach cannot give wholesale interoperability in one sweep, it does force clinical debate into a clear and carefully scoped space. If you as a clinician want to add anything to the idea of 'Visual acuity', this is where you do it. If you have a radically new idea (perhaps yet another unit/scale), this is where you discuss it. This does mean that competing 'standards' and legacy approaches will often sit alongside each other within an archetype/DCM where consensus cannot be reached e.g. multiple pain scales.
Some modellers, even in the openEHR community, find this uncomfortable, believing that it may encourage bad or legacy practice, but I take the view that, at any point in time, clinical semantic models have to cope with a range of clinical practice which may be legacy, standard or innovative, and inevitably, at times, in conflict. It is up to actual communities of care to apply more rigorous standards of common practice, via Templates (HL7 or openEHR).
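A rough sketch of that 'maximal archetype, constrained by templates' relationship may help; this is illustrative pseudo-Python, not openEHR ADL or HL7 DCM syntax, and all of the element and template names are hypothetical. The archetype carries every element and unit the clinical community has agreed on, and each use-case template can only narrow that set, never extend it.

```python
# Illustrative sketch: a maximal "archetype" narrowed by use-case "templates".
from dataclasses import dataclass, field

@dataclass
class VisualAcuityArchetype:
    """Maximal model: every element and unit the clinical community agreed on."""
    elements: set = field(default_factory=lambda: {
        "uncorrected_acuity", "best_corrected_acuity", "pinhole_acuity",
        "chart_type", "testing_distance", "laterality",
    })
    units: set = field(default_factory=lambda: {"snellen_us", "snellen_metric", "logmar"})

@dataclass
class Template:
    """Use-case constraint: a subset of the archetype for one document or report."""
    name: str
    allowed_elements: set
    allowed_units: set

    def is_valid_against(self, archetype: VisualAcuityArchetype) -> bool:
        # A template may only narrow the archetype, never add to it.
        return (self.allowed_elements <= archetype.elements
                and self.allowed_units <= archetype.units)

archetype = VisualAcuityArchetype()

# Diabetes care needs only coarse acuity; a cataract surgery report needs more detail.
diabetes_review = Template("diabetes_annual_review",
                           {"best_corrected_acuity", "laterality"},
                           {"snellen_metric"})
cataract_report = Template("cataract_surgery_report",
                           {"uncorrected_acuity", "best_corrected_acuity",
                            "pinhole_acuity", "chart_type", "laterality"},
                           {"snellen_metric", "logmar"})

assert diabetes_review.is_valid_against(archetype)
assert cataract_report.is_valid_against(archetype)
```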
As co-editor of the openEHR Clinical Knowledge Manager, along with Heather Leslie @omowizard, I would certainly be interested in helping to develop a Visual acuity archetype or DCM. I am more interested in establishing a workable cross-clinical domain approach to modelling based on DCMs/archetypes/templates, with CDA as an output, than fretting over which technical formalism (HL7/openEHR) should be used. I think our shared DCM/archetype experience is that defining the scope of the detailed model or archetype is actually the hardest task - having defined the boundaries of what is in scope, getting clinical engagement and positive discussion is relatively easy.
IMO, SNOMED rather than LOINC would be a better option, if working internationally, and where non-lab terminology is required, but our experience is that there are big gaps in all reference terminologies when it comes to detailed modelling. This should not be seen as a criticism, just a reflection of the breadth and depth of coverage required, especially while post-coordination remains a challenge.
Ian
Note that the eye care community has been active in the DICOM Standards Committee. They have established DICOM Information Object Definitions for refractive and acuity measurements, as well as various ophthalmic diagnostic methods (ophthalmic photography and optical coherence tomography).
I tried to find the DICOM models but drew a blank - any suggestions for locating the Eye Care Object definitions relatively easily?
Hi Keith, it is good to see interest in the cataract surgery project. In particular I agree with Ian McNicoll that we should work at 'establishing a workable cross-clinical domain approach to modelling based on DCMs/archetypes/templates, with CDA as an output', rather than 'fretting over which technical formalism (HL7/openEHR) should be used.' Eye care data is often very straightforward; for example, intraocular pressure or the length of the eyeball are simple numbers that machines dump into the PMS, so there is no agony about what the data looks like. Competing notations, such as those for visual acuity or corneal curvature, have well-known conversion factors. I proposed the project partly because the very simplicity of the data is well suited to the IHE process: getting accord on what the data is should be easy. A cataract surgery profile also needs to use standards for the devices involved, as well as the QA aspect, and allow global contributions to the international benchmarking process suggested by a UK team: http://www.medscape.com/viewarticle/722931. If it can work for cataract, all kinds of health care domains might be facilitated using the same technology. Let's do it! Mike Mair
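On those well-known conversion factors: as one small illustrative sketch (assuming the conventional keratometric index of 1.3375), corneal curvature moves between its radius-of-curvature and dioptric notations like this:

```python
# Illustrative keratometry conversion, assuming the conventional index of 1.3375.
KERATOMETRIC_INDEX = 1.3375

def radius_mm_to_diopters(radius_mm: float) -> float:
    """Corneal power in diopters from radius of curvature in millimetres."""
    return (KERATOMETRIC_INDEX - 1) * 1000 / radius_mm

def diopters_to_radius_mm(power_d: float) -> float:
    """Radius of curvature in millimetres from corneal power in diopters."""
    return (KERATOMETRIC_INDEX - 1) * 1000 / power_d

print(round(radius_mm_to_diopters(7.5), 2))   # 45.0 D
print(round(diopters_to_radius_mm(45.0), 2))  # 7.5 mm
```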