Tuesday, March 12, 2019

How to File a HIPAA Privacy Complaint

I've been seeing a lot of tweets recently (about a half-dozen) complaining about misuse of HIPAA, mostly from people who know better than their doctors what the regulations and legislation actually say.
I tweet back, sometimes cc: @HHSOCR.  The volume's grown enough that I thought it worthwhile to write a post about it.

If your health care provider or insurer refuses to e-mail you your data, refuses to talk with you over the phone about your health data, or makes it difficult for you, there's someone who will listen to your complaint and may even take action.  The HHS Office for Civil Rights is responsible for investigating complaints about violations of HIPAA.  They don't make the form easy to find (frankly, they have limited resources and need to filter out stuff they cannot address), but they do support online complaint filing, and you can get to it online here (I've shortcut some of the filtration steps for you; if you've found this blog post, you probably meet the filter criteria).

Another way to complain is to write a letter.  I know it's old fashioned, but you can do it.  My 8-year-old daughter once wrote a letter to a HIPAA privacy officer.  You don't need to know their name, just the address of the facility, and address it to the HIPAA Privacy Officer.  It'll definitely get someone's attention.  And who knows, you just might change the behavior of the practice (my daughter's letter got the practice to change a form used to report on a visit so that it would be clearer for patients).

I've mentioned before that under the HIPAA Omnibus regulations, in combination with recent certification requirements, providers shouldn't be able to give the excuse that they are not allowed (under HIPAA) to e-mail, or haven't set up the capability to e-mail you your health data.  Those two statements are likely to be false ... but most providers don't know that (if you are reading this blog, you are probably among the exceptions).

I'd love it if HHS OCR provided a simple service that made it possible for patients to report HIPAA nuisance behavior, which would a) send the provider a nastygram addressed to the HIPAA Privacy Officer at the institution with an official HHS logo on the front cover, b) track the number of these sent to providers based on patient reports, c) publicly report the number of nastygrams served to institutions when it reached a certain limit within a year, d) do a more formal investigation when the number gets over a threshold, and e) tell them all that in short declarative statements:

e.g.,


To whom it may concern,

On (date) a patient reported that (name) or one of their staff informed them incorrectly about HIPAA limitations.

The patient was informed that:
[ ] Healthcare data cannot be e-mailed to them.
[ ] Healthcare data cannot be faxed to them.
[ ] Healthcare data cannot be sent to a third party they designate.
... (a bunch of check boxes)

Please see HHS Circular (number) regarding your responsibilities regarding patient privacy rights.

Things you are allowed to do:
... (another laundry list).

This is the (number)th complaint this year this office has received about your organization.  After (x) complaints in a year, your organization will be reported on http://www.hhs.gov/List-Of-Privacy-Nuisance-Violators.html.  After (y) complaints total, your organization will be investigated and audited.

Sincerely,


Somebody with an Ominous Sounding Title (e.g., Chief Investigator)
/s/




I'd also love it if HHS would require that the contact information for the privacy officer be placed on every stupid HIPAA acknowledgement form I've been "required" to sign (acknowledging I've been given the HIPAA notice ... which inevitably I refuse to sign until I get it), and on every HIPAA notice form I'm given.  Because I'd fricken use it.

I could go on for quite some time about the pharmacy that couldn't find their HIPAA notice for ten minutes and refused to give me my prescription because I refused to sign the signature pad until they did, only for them to finally discover that if they'd just given me the prescription, I would have seen it printed on the back of the information form they give out with every medication ... but they didn't have a clue until someone made a phone call.  And of course they claimed I had to sign because "HIPAA" (which says no such thing).

I'd also love it if HHS authorized some sort of "secret healthcare shopper" that registered for random healthcare visits and audited the HIPAA components of a provider's intake processes for improvements (e.g., the HIPAA form in 6-point type at an eye doctor's office is one of my favorite stories; that's a potential violation of both HIPAA and disability regulations).  What the hell, make the payers be the ones responsible for doing it with some percentage of their contracted provider organizations, and have them report the results to HHS on a periodic basis.

I think this would allow us (patients) to fight back with nuisances of our own, which could eventually have teeth if made widely available and known to patients.  I'm sorry I didn't think to put this in with my recent HIPAA RFI comments.  Oh well, perhaps another day; in fact, since there was an RFI, there will be an NPRM, so these comments could be made there, and who knows, perhaps someone will even act on them.  I've had some success with regulatory comments in the past.

   Keith


Monday, March 11, 2019

The Phases of Standards Adoption

I was conversing with my prof. about Standards on FB the other day, and made an offhand remark about him demonstrating that FHIR is at level 4 in my seven levels of standards adoption.  It was an off the cuff remark based on certain intuitions I've developed over the years regarding standards.  So I thought it worthwhile to specify what the levels are, and what they mean.

Before I go there, I want to mention a few other related metrics as they apply to standards: the Gartner Hype Cycle (with its Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity), Grahame Grieve's 3 Legs of Health Information Standards, and my own 11 Levels of Interoperability (which is really only 7).  There's a rough correspondence here, as shown in the table below.

| Phase | Description | Hype Cycle | Grahame's 3 Legs | 11 Levels of Interoperability | Time (y) |
| --- | --- | --- | --- | --- | --- |
| -1 Struggling | At this stage, not only does a standard not exist, but even awareness that there is a problem it might solve is lacking. | | | 0 Absent | |
| 0 Aspiring | We've identified a problem that standards might help solve and are working to solve it. | Trigger | 1 | 1 Aspirational | 1-4 |
| 1 Testing | The specifications exist and are being tested. | Peak | 1 & 2 | 2 Defined | ½-1 |
| 2 Implementing | Working prototypes have been tested and commercial implementations are being developed. | | 2 & 3 | 3 Implementable | ½-1½ |
| 3 Deploying | Implementations are commercially available and can be used by end users. | Trough | 2 & 3 | 4 Available | 1 |
| 4 Using | Commercially available implementations are being used by real people in the real world. | Slope | 3 | 5 Useful | 2-3 |
| 5 Refining | The standard, and its implementations and deployments, are being refined. | Plateau | 3 | 6-10 (not named) | 2-4 |
| | People are happy with the implementations, and should the question arise about what standard to use, the answer is obvious. | | | 11 Delightful | ? |


How are my seven levels of standards adoption any different from the 11 levels of interoperability?  Not by much, really.  What's different here is that I've given phases instead of milestones.

This is important because each phase occurs over time and is entered into by different kinds of stakeholders according to a technology adoption lifecycle: each phase can have its own innovators, early adopters, majority adopters, and laggards.

Time is interesting to consider here, because standards and technology have sort of a quantum nature.  A standard can exist in several of the phases described above at once, with different degrees of progress in each phase; the only real stipulation is that you cannot be further along in a later phase than you are in an earlier one.

If entry to and exit from each phase were gated on completion of the phase before, the timeline for reaching the refining stage would be about 5 years, but generally one can start the next phase 3 to 6 months after the start of the previous one.  You may have more work to do to hit a moving target, but you'll wind up with a much faster time to market.

As Grahame points out, getting to the end of the cycle requires much more time in the market driving stage of his three-legged race than it does in the initial parts of it. 

Anytime I've done serious work on interoperability programs, I'm always working on 2-3 related projects in a complete program, because that's the only way to win the race.  You've got to have at least one leg in each place of Grahame's journey.  Otherwise, you'll reach a point where you're done and simply expecting someone else to grab the flag and continue on without you.



Tuesday, March 5, 2019

Whose Interoperability Problem is this?

Is this the challenge of an EHR vendor?  Or of a medical practice working with other medical practices that insist on sending faxes and paper copies, perhaps because they don't have a way to send these to the receiving practice over a computer network using digital communication standards such as Direct or IHE Cross-Enterprise Document Sharing?

Yes, we need more inter-connected medical practices.  But is that due to the lack of available interoperability options or the lack of desire to implement them, and if the latter, why is that the case?

Yes, this is an interoperability problem, but here we have a question related to workflow:

Workflow related to implementation.
Workflow related to changing the behavior of others in your referral network.
Workflow related to changing your own behavior.

If this practice isn't acceptable, why would you continue to accept it?

Problems like the one Danny illustrated quite well above aren't necessarily due to a lack of technology (or standards, or interoperability) to solve them.  Sometimes they exist simply because the right person hasn't asked the right questions.

Some thoughtful questions to ask:

  1. What other ways could this be done?
  2. Why can't we do it another way?
  3. How much does it cost to do it the way we are doing now?
  4. What might it cost to do it a different way?
  5. What could we do with the savings?


   Keith

Friday, March 1, 2019

AllOfUs

Today I scheduled my intake appointment as a participant in the AllOfUs program.  My PCP is the PI for their efforts with AllOfUs in the group practice that I use in central Massachusetts, and so I signed up to participate this morning.

It took me about 15 minutes to sign up.  The consent process was very well done, and very well written, in quite understandable language.  I'd guess the reading level of the content was around 6th-7th grade, but it was also a highly accurate representation of what the program is doing, which takes quite a bit of work if you've ever had to do that sort of writing.

The surveys took me another 10 minutes to complete and were especially easy since I'd already seen them having read through the protocol previously.

What surprised me was getting a call from my practice to schedule the appointment, but my sense is, they are already very engaged in this effort (I was to have participated as a patient representative in their outreach program, but was unable to attend the initial meeting due to battery problems with my motorcycle).  That was cool, and took about 5 minutes.

I'm looking forward to seeing how the program operates from the patient perspective, especially since some of the standards work I'm engaged in now can help refine it from the research perspective later.

   Keith

Thursday, February 28, 2019

The skinny on the NoBlocking provisions of the CuresNPRM

In my own contribution to two reams, I put together a tweet stream of over 150 tweets yesterday covering the information blocking related provisions of the Cures NPRM released by ONC during HIMSS week.  It's finally available from the Federal Register (but not yet "published"), and you can find the copy I used to summarize it from ONC here.  Next Monday you should be able to get to the web-based Federal Register content, and I've heard also that ONC will publish the Word version (which is going to be my source for comments, given that I can modify electronic commenting tools I've developed for my HL7 work to gather feedback).

I'm NOT going to give the long details of that stream in this skinny post, but you can find the full set at tweet 18 in this 14-page unroll.

The information blocking provisions are the biggest addition to ONC oversight in terms of new regulation since inception of the CEHRT program, and also have potentially the biggest raw impact since then.  The provisions impact:
  • Patients
  • Data Providers
    • Healthcare Providers
    • Health Information Networks
    • Health Information Exchanges
  • Health IT Vendors
    • Certified EHR Technology vendors
The most challenging aspects of this rule relate to the fact that data blocking is essentially defined as any behavior that would restrict, restrain, discourage or otherwise prevent access to electronic health information UNLESS ... and the rule then details 7 exceptions.  The exceptions together take about 17 pages of REGULATION, and somewhere around 180 pages of explanatory text in the preface.  That works out to about 2.5 pages per exception for the regulation alone, and around 25 pages of explanatory text per exception.

45 CFR 171 is all new content, and touches deeply on the rights and responsibilities of stakeholders with regard to exchange of electronic health information (EHI is the new acronym you need to learn), in ways that to my knowledge, are unprecedented in digital commerce.

In the main, past regulatory efforts regarding digital data tied to an individual have been about what CANNOT flow, or, in the case of an individual, what data MUST flow to that individual.  In the case of the Information Blocking rule, the regulatory effort is about what must flow, and what legitimate reasons must exist before that flow may be inhibited.  This is a very new approach.

The current challenge has to do with the fact that while the regulation touches on rights and responsibilities of stakeholders, it isn't written in a form that corresponds to any new or existing rights.  Instead, the form it is written in closely corresponds to the ruling law found in the 21st Century Cures Act.  This meets the test that all regulation must meet in being able to establish its ties back to the supporting legislation, but unfortunately doesn't make it very easy to understand.

I think my comments on this regulation are going to be an attempt to reinterpret it based on rights and responsibilities that appear based on the intent of the legislation.  But this is an area where I think I'm going to need to get some expert assistance.  Because while I can probably figure out the right model, I'm not entirely clear on the constraints that current law might impose in interpretation.

    Keith




Tuesday, February 26, 2019

A Brief summary of my IHE ACDC Profile and A4R Whitepaper Proposals

This week I'm at IHE meetings, and am submitting the first (and second) out-of-cycle proposals to IHE workgroups under the continuous development process adopted by IHE PCC and IT Infrastructure (and under consideration in QRPH).

The first of these is ACDC (the Assessment Curation and Data Collection profile), which advances assessments in a new way that hopefully addresses the challenge that there are literally tens of thousands of assessment instruments we'd like to be able to exchange in an interoperable manner.  The goal here is to disconnect the capture of assessment data from the encoding of assessment question and answer concepts in standardized vocabularies such as LOINC and SNOMED CT (which can take some time), and to handle the encoding task in volume and at scale using a mechanism that still needs some R&D.

The profile addresses two connected use cases: the process of assessment instrument acquisition (e.g., by a provider or vendor from an assessment instrument provider or curator), and the process of data collection through a single common resource, identified by its Questionnaire canonical URL (basically, a web-accessible URL that also acts as a unique identifier).

The first use case allows the assessment instrument curator to make it possible for an assessment instrument acquirer to search for and explore available instruments and their metadata, eventually getting back enough information to make an acquisition decision, and, after taking the necessary steps to obtain access (e.g., licensing, click-through, or whatever), to acquire the executable content (the full Questionnaire resource).
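
To make that concrete, here's a minimal sketch (Python, using the requests library) of what the acquisition step might look like.  The curator endpoint, the canonical URL, and the decision logic are my own placeholders, not content of the profile; the only FHIR-specific piece is the standard Questionnaire search by its url parameter.

```python
import requests

CURATOR_BASE = "https://curator.example.org/fhir"  # hypothetical curator endpoint

def find_questionnaire(canonical_url: str) -> dict:
    """Search the curator for a Questionnaire by its canonical URL.

    Uses the standard FHIR search parameter 'url' on the Questionnaire
    resource; the result is a searchset Bundle.
    """
    resp = requests.get(
        f"{CURATOR_BASE}/Questionnaire",
        params={"url": canonical_url},
        headers={"Accept": "application/fhir+json"},
    )
    resp.raise_for_status()
    bundle = resp.json()
    # Take the first match; a real acquirer would inspect metadata
    # (title, publisher, copyright) before deciding to acquire it.
    entries = bundle.get("entry", [])
    if not entries:
        raise LookupError(f"No Questionnaire found for {canonical_url}")
    return entries[0]["resource"]

if __name__ == "__main__":
    # Hypothetical canonical URL for an assessment instrument.
    q = find_questionnaire("https://curator.example.org/fhir/Questionnaire/apgar")
    print(q.get("title"), q.get("status"))
```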

The next use case allows a system that has access to an executable assessment instrument to ask an assessor application to gather the data essential to making the assessment (this could be through a user interface that simply asks the provider or patient to answer some questions, or it could be more complex, involving answering questions based on available EHR data, or providing some adaptive responses based on other data).  The response to this inquiry would be something like a Bundle containing a) the QuestionnaireResponse, b) new resources that may have been created as a result of processing the Questionnaire, and c) perhaps some ClinicalImpression resources that provide the assessment evaluation.

For example, a questionnaire implementing APGAR Score might result in one QuestionnaireResponse resource, and six ClinicalImpression resources, one each for the scores associated with the 5 components of the APGAR score (respiration, heart rate, muscle tone, reflex response and color) and the overall APGAR score result (the sum of all the component scores).
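
Purely as an illustration of the shape of that response (not normative content of the proposal), the returned Bundle might be assembled something like this; how the component scores are actually carried in ClinicalImpression is an open design question, so the summary strings below are just my assumption.

```python
def build_apgar_bundle(patient_id: str, response: dict, component_scores: dict, total: int) -> dict:
    """Assemble a collection Bundle holding the QuestionnaireResponse and
    one ClinicalImpression per APGAR component plus one for the total.

    component_scores maps a component name (e.g. 'heart rate') to its 0-2 score.
    """
    subject = {"reference": f"Patient/{patient_id}"}

    def impression(summary: str) -> dict:
        return {
            "resourceType": "ClinicalImpression",
            "status": "completed",
            "subject": subject,
            "summary": summary,
        }

    entries = [{"resource": response}]
    entries += [
        {"resource": impression(f"APGAR {name} score: {score}")}
        for name, score in component_scores.items()
    ]
    entries.append({"resource": impression(f"APGAR total score: {total}")})
    return {"resourceType": "Bundle", "type": "collection", "entry": entries}

questionnaire_response = {
    "resourceType": "QuestionnaireResponse",
    "questionnaire": "https://curator.example.org/fhir/Questionnaire/apgar",  # hypothetical canonical URL
    "status": "completed",
    "subject": {"reference": "Patient/baby-1"},
}
scores = {"respiration": 2, "heart rate": 2, "muscle tone": 1, "reflex response": 2, "color": 1}
bundle = build_apgar_bundle("baby-1", questionnaire_response, scores, total=sum(scores.values()))
```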

Assessment instruments are commonly used to collect data essential in clinical research regarding a patient's current cognitive or functional status, data related to social determinants of health at the start of a research program, or during the course of research (to determine patient progress).  They are also used to collect data about patient reported outcomes, most often at the end of the research intervention, but perhaps also periodically (also to assess progress).

As we undertake the R&D effort to address the scaling problem, we'll keep track of what approaches we attempt and how well they succeed, and will report this in the Assessments for Research (A4R) white paper.  The white paper gives us the opportunity to explore different approaches and publicize our findings in a way that helps drive forward progress on the scaling problem associated with encoding assessment instruments in an interoperable manner.

Some thoughts as we've discussed this profile thus far:

  1. It's important to classify assessment data elements by a set of categories that might be important for research.  The hierarchical structure of the Questionnaire allows for groups, which can be used to record classification codes.  For example, if one is performing research on alcohol use, one might be interested in seeking a wide variety of data from multiple assessment instruments.  Enabling use of classification codes in the Questionnaire will allow groups of related questions from multiple instruments to be identified.
  2. How should automated pre-population of responses be addressed?  For example, in cases where there is sufficient data in an EHR system to answer common questions (age, gender, et cetera), how might this be enabled in the encoding of the Questionnaire resource?  (See the sketch after this list.)
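
One possible (purely illustrative) answer to that second question: each item could carry a code identifying the concept it asks about, so an assessor application could match coded items against EHR data and pre-fill answers.  The item codes, the lookup function, and the matching approach below are my assumptions, not part of the proposal.

```python
# A fragment of an R4 Questionnaire in which each item carries a code
# identifying what it asks about.  An assessor app could use those codes
# to look up matching EHR data and pre-populate the QuestionnaireResponse.
# The LOINC codes are illustrative; verify before relying on them.
questionnaire_fragment = {
    "resourceType": "Questionnaire",
    "url": "https://curator.example.org/fhir/Questionnaire/intake",  # hypothetical
    "status": "draft",
    "item": [
        {
            "linkId": "1",
            "text": "Patient age in years",
            "type": "integer",
            "code": [{"system": "http://loinc.org", "code": "30525-0", "display": "Age"}],
        },
        {
            "linkId": "2",
            "text": "Sex assigned at birth",
            "type": "choice",
            "code": [{"system": "http://loinc.org", "code": "76689-9", "display": "Sex assigned at birth"}],
        },
    ],
}

def prepopulate(questionnaire: dict, ehr_lookup) -> dict:
    """Build a partially filled QuestionnaireResponse by asking an
    EHR lookup function (supplied by the assessor app) for a value
    for each coded item; unanswered items are simply left out."""
    answered = []
    for item in questionnaire.get("item", []):
        value = ehr_lookup(item.get("code", []))   # e.g. returns {"valueInteger": 42}
        if value is not None:
            answered.append({"linkId": item["linkId"], "answer": [value]})
    return {
        "resourceType": "QuestionnaireResponse",
        "questionnaire": questionnaire["url"],
        "status": "in-progress",
        "item": answered,
    }
```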
A lot of what goes into these proposals is based on work on assessments and automated data capture for research that started in IHE in 2006 on the RFD profile, in 2007 in the Assessments work done by Patient Care Coordination, and continued through ONC efforts on Structured Data Capture in 2012 (which greatly influenced the work of the FHIR Questionnaire and QuestionnaireResponse resources), and current work being done on Patient Reported Outcomes in FHIR by HL7.

As John Moehrke said, this work won't be "Done Dirt Cheap", which also means it likely won't be a dirty deed that fails to get the job done.  I think we're about to rock on assessments.





Update: Both proposals were accepted today. More later as the work progresses.

Tuesday, February 19, 2019

The short and long of the PatientAccess rule

I never did finish up my regulatory summary post on the patient access rule last week, even though I finished reading the regulation text on Monday of last week. So I'm going to combine that with the detail review.  While the rule still hasn't been published in the Federal Register, you can find the preprint from CMS here.

The Short of It

This is what the reg says, and my responses to it. I start there because I don't want to anchor myself in the regulator's thinking just yet. It's also a LOT less text to read.  

Patient Access

Think of "Mom" below as a Medicare enrollee, and Kingle as a Medicaid enrollee. These are real people for me, which helps me think about the impacts of the rule.

Patient Access for Mom

Mom's MA organization has to provide APIs that allow an app she approves to access standardized claim data, adjudications, appeals, provider payments (remittances) and co-payments (cost-sharing) within one business day of claim processing. This is essentially an API form of an EOB, though CMS doesn't use that phrase anywhere in the rule; see how they describe things here.

Mom can also get standardized encounter data within 1 day, provider directory data, including names, addresses, and phone numbers within 30 days of update, and clinical data and lab results within one day.

And because Mom is also covered by a Part D plan, she'll be able to get information about covered medications, pharmacy directory data, and formularies.

All of this uses the standards adopted by the Secretary at 45 CFR 170.215, which include FHIR DSTU2, ARCH, Argonaut Data Query, SMART, OIDC and FHIR STU3 ... or some more advanced version of the standards unless specifically prohibited; what @HealthIT_Policy (Steve Posnack) calls raising the upper bar.
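
To ground what "an app Mom approves" would actually do, here's a minimal sketch of the kind of API call such an app might make, assuming a payer FHIR endpoint that exposes ExplanationOfBenefit resources and a SMART/OIDC access token already in hand; the rule doesn't name the resource or search parameter, so those choices are my assumptions.

```python
import requests

def fetch_claims(fhir_base: str, patient_id: str, access_token: str) -> list:
    """Fetch the patient's adjudicated claim data (EOB-like records) from a
    payer FHIR endpoint, following paging links until exhausted."""
    url = f"{fhir_base}/ExplanationOfBenefit"
    params = {"patient": patient_id}
    headers = {
        "Authorization": f"Bearer {access_token}",  # token from a SMART/OIDC authorization
        "Accept": "application/fhir+json",
    }
    results = []
    while url:
        resp = requests.get(url, params=params, headers=headers)
        resp.raise_for_status()
        bundle = resp.json()
        results += [e["resource"] for e in bundle.get("entry", [])]
        # Follow the 'next' link for additional pages, if any.
        url = next((l["url"] for l in bundle.get("link", []) if l["relation"] == "next"), None)
        params = None  # the next link already carries its own query parameters
    return results
```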

Mom can tell her Medicare advantage organization to go get data from her previous plan up to five years after changing the plan.

MA providers have to participate in a trusted exchange where they can exchange this data.

Patient Access for Kingle

Now Kingle, a Medicaid beneficiary I know, basically gets the same rights as Mom, because the States have to do this for them too. And just like Mom, Kingle gets access to the same data. With all the same aforementions and aforesaids thereunto pertaining.

And MA and States must provide web sites in which mom and Kingle can get all the information they need about this stuff, including their rights, and how to bitch to the OCR and the FTC if need be.

And 438.242 of the rule says that Health Information Systems must "Participate in a trusted exchange network" which exchanges health information, supports secure messaging or query between payers, providers and PATIENTS.

Patient Access for Any Life CMS Touches

All of the aforemented, aforesaids and thereuntos apply to CHIP beneficiaries as well, and qualified health plan members in federally facilitated exchanges (27 states), and some other stuff. So, basically, if the Feds give money to states or MA organizations to provide healthcare, they have to give mom, Kingle or any other beneficiary access to their data.

Conditions of Participation

Conditions of Participation, translated, means that if you get paid with CMS funding, you have to do this. If you happen to be a hospital participating in Medicare or Medicaid and you have an EHR (just about all of them do) that implements HL7 V2.5.1 ADT messages (all of them do), then the system must send notifications with patient name, doctor, hospital and diagnosis.
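
For the non-HL7-v2 crowd, the notification being described is roughly the kind of thing sketched below: a bare-bones ADT message carrying the patient (PID), the attending doctor (PV1), the sending hospital (MSH), and the diagnosis (DG1). The rule doesn't mandate a particular message profile, so the field choices here are merely the conventional places this data lives.

```python
from datetime import datetime

def build_adt_notification(msg_type: str, patient: dict, attending: dict,
                           facility: str, diagnosis: dict) -> str:
    """Build a minimal HL7 V2.5.1 ADT message (e.g. ADT^A03 for discharge).

    Illustrative only; a production interface would populate many more
    fields and follow the receiving network's conformance profile.
    """
    now = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|NOTIFIER|{facility}|RECEIVER|PARTNER|{now}||ADT^{msg_type}|MSG0001|P|2.5.1",
        f"EVN|{msg_type}|{now}",
        f"PID|1||{patient['mrn']}^^^{facility}||{patient['last']}^{patient['first']}",
        # PV1-7 is the attending doctor.
        f"PV1|1|I|||||{attending['id']}^{attending['last']}^{attending['first']}",
        f"DG1|1||{diagnosis['code']}^{diagnosis['text']}^I10",
    ]
    return "\r".join(segments)

msg = build_adt_notification(
    "A03",
    patient={"mrn": "12345", "last": "DOE", "first": "JANE"},
    attending={"id": "9876", "last": "SMITH", "first": "JOHN"},
    facility="GOODCARE",
    diagnosis={"code": "E11.9", "text": "Type 2 diabetes mellitus without complications"},
)
```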

Critical Access Hospitals and psychiatric hospitals have the same responsibilities as other hospitals. The reason for separating these out is so that CMS can change their mind about them individually in the final rule, because some of them might complain very loudly, and this gives CMS a way to codify their requirements differently.

Qualified Health Plans in Federally funded (facilitated) Exchanges have to do the same thing as MA, States, CHIP plans, et cetera, or get a special exception with a good reason for non-compliance and a timeline for correction. And even then, their special exception becomes public.

Thus ends my analysis of the rule itself, and you know just about everything I do about what the proposed regulation says.

The Long of It

This is where I analyze the front matter, which contains the regulator's justifications for the regulation content. In here, you also find the alternatives they considered (which are still fair game in the final text), the other questions they are specifically asking you to respond to, and what they say they aren't going to do, or may do later. This is good reading, but I don't want to read it first, because I want to have my own thoughts in front of me before I read theirs.

For whom the bell tolls

The Patient Access rule applies to Medicare/Medicaid Fee for Service, CHIP and CHIP entities, Medicare Advantage, Managed Care, prepaid inpatient & ambulatory health plans, and qualified health plans in federally facilitated exchanges. Nearly anywhere CMS writes a check, as best I can figure.

All patients can have their clinical and administrative data travel with them, with complete records available to their providers, and payers should be able to exchange with other payers. Of course, APIs play a significant role "without special effort". Every government definition of interoperability references the now-famous "without special effort" IEEE text, which I was writing about here in 2013.

APIs will use FHIR as the Standard

The most important part of the history in this section for me is the call out of the Da Vinci Project Coverage Requirements Discovery (CRD) profile with CMS, and work on prior auth for CPAP.  Between that and the recent letter of thanks that CMS sent to HL7, we can get a very strong idea of some of the directions of CMS thinking around APIs for Medicare/Medicaid in the future.  There's also some potential here for FHIR to supplant X12N EDI transactions for HIPAA in the future, and some appetite in the industry for the same ... just as a related trend to think about.

HIPAA allows for APIs

In a somewhat long-winded response, @CMSGov reminds covered entities that patients are not covered entities, and that patients can direct covered entities to send their data to third parties (and apps by association) by right under HIPAA (as revised by the Omnibus regs). You might recall my own observations on the impacts of the HIPAA Bus and e-mail; well, they apply equally to APIs according to this text (and my own analysis).

Everyone to Use FHIR

The rule would require “MA organizations, state Medicaid and CHIP FFS programs, Medicaid managed care plans, CHIP managed care entities, and QHP issuers in FFEs (excluding issuers of SADPs)” use HL7 FHIR for APIs. CMS intends “to prohibit use of alternative technical standards that could potentially be used for these same data classes and elements, as well as earlier versions of the adopted standards named in 42 CFR 423.160 [the HIPAA ePrescribing standards], 45 CFR 162 [the HIPAA transaction standards] & proposed at 45 CFR 170.213” 

CMS also reports a “wish to assure stakeholders that the API standards required of ...[that list of payees]... under this proposal would be consistent with the API standards proposed by ONC [in the Cures rule]."  Where no standards are elsewhere mandated and HIPAA transaction standards are the only ones available, the rule would “require entities subject to these proposals to use these HIPAA standards when making data available through the API.”  In payer-to-payer information exchanges, they could still use the HIPAA transaction standards they already have, or could use FHIR et al. for exchange required by the rule, not throwing the baby out with the bathwater.

Pages 32-57 are mostly about use of FHIR and APIs and some not quite so new stuff already in regs from CMS.  If you read the Cures rule, you'll see a lot of similar discussion here.  It's mostly background though.  The key point is that the APIs are going to be FHIR, the same set as selected by ONC (though I imagine some thought will be needed around claims and EOB statements as that work progresses in HL7).

Page 57 starts the discussion of CMS’s view on the standards update process, which follows ONC's lead in the Cures rule.

The open API in the rule “would include: adjudicated claims (including cost); encounters with capitated providers; provider remittances; enrollee cost-sharing; and clinical data, including laboratory results (where available).” Simplifying that for the non-EDI crowd: claims data is what docs send to payers, cost/remittance/cost-sharing is what patients get from insurers on an EOB statement, and clinical data is what providers put in their EHR (using USCDI).  Also available via APIs would be provider directories and medication formulary data.

CMS has to say much of the same things over and over because different regulations apply to different entities they pay under programs legislated at different times, and some require slightly different variations because of those variations.

Miscellaneous but Important Short Topics

Timelines

Much of the rule is to be applicable by January 1, 2020, but for some (CHIP), by July 1, 2020. That’s not a typo. A shot and a beer that the industry response is going to push for a later date is a bet I think nobody will take; maybe we should bet on the actual date appearing in the final rule instead.  Given rulemaking deadlines, Jan 1, 2020 is very short notice.  The rule still hasn't been published; let's say it is on March 1.  Then the comment period goes through March and April, and then CMS can start putting together its responses afterwards.  I'd allow for another 60 days or so for that to get done, and it still has to go through another week or more at OMB before publication as a final rule.  So, call it about 90 days total after the comment period closes.  That means a final rule could show up in the late July/early August time frame, with an implementation date 6 months away?  That seems VERY tight, especially for the payer space.  I'm guessing those dates will move in response to industry push back.

Color Commentary

CMS claims patient access is “designed to empower patients by making sure that they have access to health information about themselves in a usable digital format and can make decisions about how, with whom, and for what uses they will share it.”

It sure as hell will as I read it!

This kind of data will make unprecedented price transparency available to patients through APIs, and to the third parties they wish to share it with under the rule. Imagine, if you will, what one could do with EOB and price data from millions of patients; think of intervention studies where the intervention is a change in health plan, for example.  Think about what patients who pool their data with others might learn:  Under plan 1, doctor D charges X$ for procedure P; what does doctor D charge under plan 2?  What does doctor E charge under plan 1?  How many procedures P do doctors D and E do?  What's the cost of procedure Q? As I think these parts through, this could be earth-shattering in enabling patient cost controls; it almost makes me sorry not to be on Medicare just yet.

The scary part is who else will try to take advantage of it... and I see many opportunities for abuse here... especially in terms of resale of patient data gathered by apps, even anonymized or aggregated.  I think much thought is needed on the unforeseen consequences, and a risk analysis on these components is something I think the industry should certainly do in response, with feedback to CMS on the results.

Trusted Exchange Network

As I read through the section on Trusted Exchange Networks in the rule, I don't see enough words for me to equivilate [yes, that’s a word] it to the trusted exchange framework, though I see parallels. A framework is not a network (just ask someone from Carequality/Sequoia if they are a network).  I think there will certainly be tie-ins between the two, but I don't think they are the same thing.

Complexity in the Rule

CMS comments on the need to align Medicare and Medicaid to support care, but the rule also makes it clear that CMS needs to align many programs (MA, Part D, CHIP, FFE and others) on standards. A better rule structure with common content might improve compliance.  This, as I said earlier, has in part to do with the legislative background associated with CMS responsibilities under so many programs.  I think a "Chinese menu" approach might be applicable here though, where, like ONC, CMS creates a list of requirements that other sections reference as appropriate.

(Some) States need to Up their Game on Dual Eligible Patients

Under Increasing the frequency of federal-state data exchanges for dually eligible individuals, CMS is telling states that do this monthly that daily exchange is necessary, and that it will help cut costs and improve patient outcomes for both CMS AND the states that are behind.

The new "Wall of Shame"

CMS runs through background on Information Blocking from page 126 through 135, and the fact that CMS will publish attestations regarding information exchange publicly on the three questions in section I here.

NPPES to support Electronic Contact Information

Under the rule, CMS would use its NPI provider directory to publish digital contact information for both individuals and facilities, eliminating the problem I described here.  This was a thought that I've dropped in various suggestion boxes over many years, and it was discussed very early on in the Direct project.

ADT Notifications

Under conditions of participation for hospitals, the rule would require some form of notification (i.e., a functional capability) to be given to other providers upon patient admission, transfer or discharge, but would not require a specific standard for it, for those providers with 2015 CEHRT having HL7 V2.5.1 ADT messages (see 170.299(f)(2)).  Special call-outs for psychiatric hospitals and critical access hospitals allow CMS to use the same or different requirements for these kinds of facilities in the final rule.  This is a smart move by CMS to alleviate the challenges that might be raised by those institutions with special requirements.

Requests for Information

The last part of the Patient Access NPRM isn't about rules, but rather questions that CMS wants to get feedback on before it makes more policy in this space.  There are three key topics, and I'd suggest you read and respond to these:

  1. Supports for Long-term and Post-Acute Care
  2. Patient Matching
  3. Innovation Center Models for Advancement

The End (for Now)

And that takes us to the end of the interesting stuff in the front matter.  The rest (from page 172 to the start of the regulation) covers regulatory disclosures that talk about costs of the rule, data collection, and other stuff that is required of the regulatory process, but generally very difficult to analyze without deep economic expertise.  However, if you have that ability, and provide feedback in this space (not many do), it would probably wake someone up.



Monday, February 18, 2019

Functional and Structural in the context of FHIR Resources

I often get into discussions of Functional and Structural roles in terms of security, because that is where roles most often get discussed.  In this space,


  1. Structural role depends on who you are, or what specific skills or licensing you have attained.
  2. Functional role depends on how you behave, or what you are doing.


In the case of a taxi driver delivering a baby, their structural role is taxi driver; that is how they are licensed. But their functional role might be "attending OB", in terms of what role they are taking on.

Structural roles describe static capabilities, functional roles dynamic behaviors.

I came upon the realization today that these concepts can also be applied to "Resources" in FHIR.  Here's the context: How should one represent the transcription of a diagnostic test or procedure reported as a text report via an HL7 ORU message?

Is this DocumentReference, because basically the content is a document?  Or should it be represented as a DiagnosticReport because the content is a report on a diagnostic test?

The DocumentReference resource has to do with what the thing is: its physical structure and components.  The DiagnosticReport focuses more on the functionality it provides, the purposes it serves, and the ways it would be expected to behave (e.g., the state diagram that might be associated with it).  See the 4+1 Architectural View Model for yet another way to look at these distinctions.

My answer in this case, to this either/or question, is essentially my father's default answer to any either/or question: Yes.

The report COULD be accessed via DocumentReference, and in this case, the API capabilities would be focused on the "document-ness" of the content.  It COULD also be accessed via DiagnosticReport, and the API capabilities would be focused on the "diagnostic-ness" of the content.

In the Venn diagram describing these things, some documents are diagnostic reports (and others are different kinds of things, e.g., scans of identity cards), and some diagnostic reports are documents (and others are simply combinations of structured data that can eventually be rendered in document form, but don't exist natively in that form).
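
Here's a quick sketch of that dual exposure, with both resources pointing at the same transcribed narrative; the codes and identifiers are placeholders, and the point is only that one piece of content can legitimately surface through either interface.

```python
import base64

# The transcribed text report that arrived in the ORU message.
report_text = "CHEST X-RAY: No acute cardiopulmonary process."
attachment = {
    "contentType": "text/plain",
    "data": base64.b64encode(report_text.encode()).decode(),
}
subject = {"reference": "Patient/example"}

# The "document-ness" view: what the content is, structurally.
document_reference = {
    "resourceType": "DocumentReference",
    "status": "current",
    "type": {"text": "Diagnostic imaging report"},   # placeholder; would normally be coded
    "subject": subject,
    "content": [{"attachment": attachment}],
}

# The "diagnostic-ness" view: what the content does, functionally.
diagnostic_report = {
    "resourceType": "DiagnosticReport",
    "status": "final",
    "code": {"text": "Chest X-ray"},                  # placeholder; would normally be coded
    "subject": subject,
    "conclusion": report_text,
    "presentedForm": [attachment],
}
```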

And FHIR is simply the interface provided to access all these things, and when a thing has a dual nature, well, why not make it accessible either way?



Wednesday, February 13, 2019

The ONC Information Blocking Rule



The Cures NPRM added a new section to regulations that covers not just certified EHR technology, but the behaviors of organizations in possession of patient data with regard to information exchange, which I covered earlier this week in my #NoBlocking tweet stream.

The rule itself is a difficult read; the preface, I hope, is a lot easier to comprehend. Remember, I read the regulation text first; I haven't gotten to the preface yet.  That gives me an opportunity to form my own impression of the regulations without anchoring myself on ONC's justifications for why they did what they did.  I'll be coming back to these again after going through the front matter.

Generally, as I understand this section, ONC is seeking to ensure that all parties who want to use APIs to enable access to patient data (with the patient's authorization) are treated fairly and equally, and that bars to competition are not raised on the basis of access to patient data.

By all parties, ONC applies this section (45 CFR 171) to health care providers, health IT developers of certified products (including API developers and technology suppliers), health information exchanges and networks.  So, if you are a user, developer, reseller, or other third party that essentially deals with deploying, maintaining, using, updating, or otherwise providing access to software certified to use APIs to facilitate the exchange of patient data, this applies to you.

The whole point of this rule, authorized under the Cures Act, is to prevent ANYONE from interfering with, preventing or discouraging exchange of health data.  ONC makes it clear that ignorance of the requirements isn't an excuse ("knows or SHOULD KNOW" is the specific wording).  Now, it's pretty clear that ONC understands that there are reasons to prevent access to data, either permanently or temporarily, and that some essential industry behaviors will be seen by some as "information blocking".  The very fact that organizations are making money from healthcare is anathema to some, and so ONC is working hard to explain the behaviors that aren't considered to be information blocking by listing out the various practices that are allowed.  This is a difficult task for an organization that doesn't engage in business, and I suspect that this part of the Cures NPRM will be the most commented on.

There is a list of things that organizations are permitted to do; anything not explicitly permitted is, at the very least, behavior that could be questioned with regard to "information blocking", and this is a case where ONC is the referee who gets to make the decision.

  1. Preventing harm
  2. Promoting privacy and security
  3. Recovering reasonable costs incurred
  4. Avoiding infeasibility
  5. Allowing for reasonable and non-discriminatory licensing fees
  6. Maintenance and updates

Arien Malec presents an excellent sound track and interesting commentary on items 3 and 5 above, and a great follow-up on contractual requirements that highlight some of the most difficult parts of 45 CFR 171.


Many of the exceptions are pretty straightforward to understand.  The first of these falls clearly within the purview of providers under "do no harm", and is also aligned with HIPAA.  HIPAA and the no blocking rule are also aligned on promoting privacy and security (my #2 above).  And lastly, allowing information providers to take information offline temporarily for maintenance and updates is something we all expect and should allow.


Exceptions 3-5 have to address API providers, but also those who might need to respond to a request for health information.  I think this section is having a little bit of an identity crisis simply because it has to cover so much territory.  I think ONC needs to divest some parts of this section which are already covered under HIPAA (perhaps even by referencing that regulation).  This could help greatly, and in fact make it easier to maintain this section of the regulation.


Avoiding infeasibility addresses a host of issues and concerns.  Many API providers today, for example, place limits on the amount of information that can be accessed in a single request, which could be viewed as "information blocking".  These are essential constraints in order to keep the technology responsive (returning all of the lab results for the past ten years for a single patient who has been very ill could take a VERY long time, and could prevent others from accessing data, depending on implementation).  But this section also has to address other sorts of requests for information, which is part of its identity crisis.  In this section, ONC makes it very clear that using your access to electronic health information as a means to prevent competition, or to force others into paying a fee to access it, doesn't make a request for access infeasible.

So, health data is still a commodity under #NoBlocking, but no longer a strategic commodity that the "haves" can hold over the "have nots"; furthermore, treating it as such could be cause for a claim of "information blocking".  I'm pretty sure I'm in agreement with that general principle, but I'm also sure that this portion of the rule is going to be gone over (by me and others) with an extremely fine-toothed comb.

   Keith

Tuesday, February 12, 2019

My Comments on the HIPAA RFI

I thought I had missed my opportunity to comment on the HIPAA RFI, but I saw a tweet about someone posting comments today, so I'm breaking from comments on other stuff to put together my thoughts on HIPAA.

0. HIPAA is seen as a barrier rather than promoting sharing of information to support care.  It needs revamping and probably renaming to change the perception.

1. Most covered entities with a certified EHR can provide data immediately, and certainly within 24 hours, through a portal or other means of access, based on my own personal experience across more than a dozen institutions.  Payers also provide for online access with nearly immediate results.  Paper records, or full records that are part of the entire designated record set that are needed for more detailed review (usually to address issues in adjudication or eligibility of benefits), take for **** ever.  Payers are worse on requests than providers in my own experience.  Plans that are causing challenges (from the patient perspective) are more difficult to get data from than those that aren't.  There are variances across providers that are generally based on technical and infrastructure capacity.

2. In general, it is quite feasible to get the most important data within 24 hours.  The entire designated record set is harder to acquire, because that can include diagnostic reports in paper form, x-rays and other imaging requiring storage on external media instead of download.

3. Digital media and electronically available data should be available within 2 business days, paper or film original formats perhaps a bit longer but no more than a week, and any data in the Common clinical data set by a provider with a certified EHR should generally be available within 24 hours.

4. Ask instead what burdens a shortened time frame would relieve for patients, and the associated costs, and you will get a much more enlightened and appropriate answer.

5. I cannot speak to clearinghouses.

6. Yes, providers have challenges getting data for treatment, generally from other providers, often with the excuse "because HIPAA", although some also claim 42 CFR Part 2 prevents exchange, due to overcautious risk tagging of data that MAY be covered by that regulation.

7. Yes, generally covered entities should be required to disclose PHI for verifiable treatment (without making "verifiable" hard, perhaps slightly more challenging than simple attestation, but not so challenging as to make this too difficult).  Verifiable proofs might be as simple as proofs based on existing treatment relationship (e.g. claims) or attempts to establish one (e.g., via prior auth tx) or facsimile copy of patient signature authorizing treatment, or established provider relationships.   I won't address P and O.

7a. This would improve care coordination and case management.  It would create some burdens to implement new requirements, but not insurmountable ones; the new administrative costs would eventually reduce burdens after implementation.

8. not addressed.

9. Doctors should be required to disclose information to other doctors in a treatment relationship for the purposes of treatment, regardless of electronic billing status.  This would challenge some implementations in that they would have to have a process to identify providers not using electronic billing (e.g., due to lack of NPI).  That could readily be addressed by requiring NPI be obtained by all providers regardless of whether or not they engage in electronic billing.

10. A verbal request would be acceptable from a known entity, but an unknown entity should be verified in some way, along with the existence of a treatment relationship.  Signed patient authorization of treatment (or facsimile copy) would be sufficient evidence, but other documentation or proofs of a treatment relationship might also be accepted.  Appropriate policies would be needed.

11-13. Not addressed.

14. Interaction with other laws such as 42 CFR part 2 should be addressed.  It will be a problem.

15. Appropriate policies should be created, with known entities possibly getting easier treatment than unknown entities.  The providers should have a policy, and a means to document and enforce it.

18. Yes, this should be made more feasible and easier, and required.  I've seen requests sit for 30 days for these kinds of services.

19- end: Ran out of time.

The Cures NPRM

You will be getting a lot from me this week.  I'm going to start with the first part of the Cures rule, prereleased by ONC, and I'm just going through the regulatory text rather than all the prefatory material.  That's next on my late-night reading list (1000 pages is a lot to read; this cuts it down by at least 80%).

To start off with, some 2014-related materials were removed from the rule.  That stuff no longer applies, so it's not material to any discussion.

Content Standards added include:

  • C-CDA Templates for Clinical Notes R1 Companion Guide
  • NCPDP Script Standard Implementation Guide, Version 2017071
  • 2019 QRDA Category I Hospital Quality Reporting Implementation Guide
  • 2019 QRDA Category III Eligible Clinicians & Professionals Quality Reporting Implementation Guide
API standards (what we've all been waiting for) include:
  1. FHIR DSTU 2
  2. API Resource Collection in Health (ARCH) (more on this in a bit)
  3. Argonaut Data Query 1.0
  4. SMART on FHIR
  5. Open Id Connect
  6. FHIR STU3
  7. Consent2Share profiles (I'm guessing you are wondering what this is too).
Of these, 2 and 7 had me scratching my head.  #2 is pretty straightforward, just click the link.  It's a list of FHIR profiles that need to be supported according to the NPRM.  #7 was a downright bitch to locate, though I finally did.  I suspect that if John Moehrke had been awake, he could have found it a lot faster.  This is basically an STU3 profile of the FHIR Consent resource.  #6 and 7 basically mean that ONC is allowing STU3, but ONLY for the exchange of consent information, not allowing STU3 for anything not already covered by 1-3 above.

So, how did I do in my bet? If the proposed rule becomes the final rule, I lose it, but I picked the options correctly.  I still think my bet is sound, but I'll give it a 60% chance of being the final result, with the alternative being what we saw proposed this morning.  If I'd thought about it harder, I would have known that ONC couldn't have submitted a proposal with R4 because IT HADN'T been published at the time it was submitted to OMB, which still doesn't leave it out of the running.  The proposed rule text published by ONC does contain a reference to R4 in section 170.299.  My bet (and I'll know more tomorrow) is that the industry will push for R4 and the US Core specifications.

Even if they don't, ONC's changed the rules so that they (and you) can upgrade to newer standards, so that the rule can set both a lower and upper bar.

USCDI is something we are all going to have to become more familiar with.  I took my first look at it a year ago.  It's still not much more than the CCDS and a collection of C-CDA templates.  For the most part, there's nothing challenging here, it's just work for your engineers and validation staff.

There's additional guidance in the rule on how to use C-CDA (see the companion guide link in the previous section), and on how to handle Assessment and Plan, Goals, Health Concerns, and UDI (device identifier) data in the rule.  Expect test procedures to change (I imagine those folks are starting to think about what to change, but won't really get to work until the rule is final).  However, I don't expect a big lift here.

Section D, Conditions and Maintenance of Certification for Health IT Developers, is brand new content for certification.  It applies to the organizations whose products get certified, and how they must behave, rather than what the product must do.  To summarize:
400: This is authorized by the Cures act.
401: As a developer you will attest to ONC that you won't block.  This is a regulatory form that basically makes it possible for you to be fined under the False Claims Act if you say you won't and then you are found to do so (cf. recent news), if I'm reading this correctly.  This is powerful stuff which makes enforcement possible by more than just ONC revoking a certification; the DOJ can get involved.
402: Prove it, and furthermore, don't do what other guys did (see the last link), and make getting that single patient data export document out easy peasy, and keep your records for 10 years, and you have 2 years to handle that patient data export change.
403: What I call the "No Gag Rule" clause says that a health IT developer cannot restrict communication regarding:
  1. usability,
  2. interoperability,
  3. security,
  4. user experience,
  5. business practices,
  6. ways in which the product has been used.
Anything is fair game to be communicated, including proprietary, confidential or IP content, when the communication is about one or more of the above and is:
  1. required by law
  2. regarding patient safety, to government agencies, accreditors or patient safety organizations,
  3. about privacy and security to government agencies,
  4. about information blocking to government agencies,
  5. about certification non-compliance to ONC or an ONC-ACB.
Restrictions are permitted to or about:
  1. contractors and employees,
  2. non-user facing aspects,
  3. IP except that screenshots ARE permitted with certain reasonable provisions, EXCEPT for cases of premarket testing and development until the product ships, after which they are subject to prior rules
And developers must notify customers (within 6 months) and make effective (within a year) that any existing contract provisions that contravene the above are basically null and void hereafter.

404: This section requires a close read by your staff involved in revenue generation from APIs.  Basically it says you have to disclose everything needed to use them, make clear how you are charging, be fair in the way you charge (not specifically designing fees to be anti-competitive), not require non-compete clauses, not require exclusivity or transfer of IP rights, and not require reciprocation.

There are some other clauses in here as well, regarding "allowing production use" quickly, and publishing endpoint addresses for systems "in a computable format". 

405: Just as FDA requires post-market surveillance and corrective action, ONC is now requiring ongoing testing and maintenance for certified product, requiring the developer to make a plan, share it with ONC, execute it, and report on the results.  
And you've got 2 years to upgrade your products to the new standards.

406: The developer must attest to all of the aforemented, aforesaid, thereunto appertaining stuff at the time of certification and every six months thereafter, therefore becoming subject to additional enforcement opportunities by ONC and DOJ.

I'm not going to spend much time on section 500, as it mostly pertains to the operations of ACBs, but there's some impact on health IT vendors to look at in this section, mostly in that, having adopted section D above, ONC now has a responsibility to verify that section D is being complied with.  This means that they may perform surveillance on developer behavior in addition to product behavior.
And, as a result of that surveillance, ONC may also BAN a developer from the certification program based on their behavior, giving them yet more enforcement capabilities.

Section D is going to be the most heavily discussed section in the industry, but I think in general it is fairly written (if complex).  There's definitely some work that could be done to make the language simpler and easier to read.  A lot of that may already be written in the preface (which I have yet to read; I like to get my first opinions by reading the regulations, rather than the regulatory body's justifications for them).

I'll be covering what I call the "No Blocking" part of this rule later this week, as well as the Patient Access rule released by CMS.  NOTE: I'm reading the prerelease versions that were available from ONC (and CMS) prior to publication in the Federal Register.  I'll read the published versions in my second full read-through, just to see if there are any significant differences.

As always, I'm not a lawyer, and this is NOT legal advice, and you get what you paid for.  This also is my own opinion (as are all posts written here), and not the opinion of my employer nor any other organization I might have some affiliation with.  

Keith

My read through tweet streams can be found by searching my stream for #CuresNPRM, #PatientAccess and #NoBlocking.



Getting to Outcomes for PrecisionMedicine

I've been thinking about precision medicine and standards a lot lately for my day job.  One of the things I have to work out is how to prioritize what to work on, and how to develop a framework for that prioritization.

I like to work by evidence (objective criteria) rather than eminence (subjective criteria), and so I need some numbers.  Which means that I need measures.  In any process improvement effort, there are three fundamental kinds of measures that you can apply.
Structure: Measures that demonstrate appropriate systems, structures, and processes are in place to support the outcome.
Process: Measures that demonstrate that the processes supporting the outcome are being executed.
Outcome: Measures of the outcome itself.
If I were to rank these measures in terms of value and ability to move the needle, they'd be bronze, silver, and gold respectively (a scoring system that works both by weight and by monetary value).

But, in my world, they also rank by time over which they are implemented.  You have to first have the infrastructure deployed (which means it must be available with support for the needed technology), and then the workflow designed, and the processes implemented and executed before you will see changes in outcomes.

Let me give you an example.  If you want to measure the impacts of Sexual Orientation and Gender Identity (SOGI) on health outcomes, you first need the standards to record that information readily available (structure).  Then you need processes designed to support the capture of that information (also a structure measure), and those processes need to be implemented and executed (process measures).  You also need standards designed to exchange that data (structure), software available (structure) and configured to perform that exchange (process), and then exchanges of data that actually include SOGI (process).  And then you can get to outcomes.

This is a pipeline that has 7 segments that need to be completed before we can use SOGI data in research (the desired outcome).  Working on any of these segments can proceed in parallel BUT that's challenging to coordinate.  Some of the work has already been done at the federal level to promote utilization of the standards for federal reporting for HRSA Health Center grantees (a.k.a. Federally Qualified Health Centers), but hasn't been included in requirements for other healthcare providers.  For example, birth sex (a signal that SOGI data is surely relevant) is part of the Common Clinical Data Set (CCDS, now known as USCDI) that MUST be able to be exchanged by EHR systems, but the SOGI data itself is NOT part of this definition, and so while it may be available in an EHR, neither the processes to capture it nor the C-CDA documents being exchanged may actually do anything with SOGI data.

There are two missing segments in this seven-segment pipeline.  The necessary standards are there, the mechanism of exchange is there, the process for exchange exists, but implementing the workflow to capture the data isn't incented in any way by current regulations (which focus on USCDI exchange), nor is exchange of that data mandated.
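To make that concrete, here's a minimal sketch (in Python, with segment names and completion status of my own devising based on the description above, not any official model) of the pipeline and where it's broken:

from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    kind: str        # "structure", "process", or "outcome" measure
    complete: bool

# The seven segments for SOGI data described above, with completion
# status roughly as I read the current state of play.
sogi_pipeline = [
    Segment("Standards to record SOGI data", "structure", True),
    Segment("Processes designed to capture it", "structure", True),
    Segment("Capture processes implemented and executed", "process", False),
    Segment("Standards to exchange the data", "structure", True),
    Segment("Software available to exchange it", "structure", True),
    Segment("Software configured to exchange it", "process", True),
    Segment("Exchanges actually including SOGI data", "process", False),
]

missing = [s.name for s in sogi_pipeline if not s.complete]
print(f"{len(missing)} of {len(sogi_pipeline)} segments missing: {missing}")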

So, I can identify where the work needs to be done, and I can assess (measure) even, how much work that is.

Now, compare this work to the effort associated with capturing data from wearables (e.g., blood pressure, heart rate, physical activity, blood glucose, sleep cycles, et cetera).  There are a lot more missing segments.  I know what that work is, and I can also estimate how much work there is here.

Now, suppose for the sake of argument that I needed to choose just one of these to work on.  How would I justify one over the other?  NOTE: This is completely for the sake of argument, and I've set up an arbitrary a/b scenario.  There's a reason for this.  If I can build a framework for making that decision, I can extend that framework in ways that allow me to make decisions about how to SPLIT the work and how much to invest in each.  That's just how math works.

So, how do I get to prioritization?  I need a different sort of evaluation, and to measure in a different way.  What can I measure?  I can measure the quantity of research that might be enabled.  I could take a crack at estimating the impacts of that research on patient care or cost, but that would at best be a guess.  I could look at the impacts of the diseases that research affects.

And then there's the missing link problem.  If there are 2 of 7 links missing before I can get to outcomes, what's the value of working on broken link 1 or broken link 2?

This is risk assessment turned on its head; as my friend Gila once put it, it's opportunity assessment.  But using the framework for risk assessment is still valid in this case; it's just that what you measure is different.
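Just to illustrate the shape such a framework might take (the weighting and the numbers below are toy placeholders of my own, not real estimates), a rough opportunity score could discount the research an area might enable by the number of missing links and the effort left to close them:

from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    research_enabled: float   # relative estimate of research the area unlocks
    segments_missing: int     # broken links left in the pipeline
    effort_remaining: float   # relative estimate of work to close them

def opportunity_score(o: Opportunity) -> float:
    """Toy scoring: value of enabled research, discounted by the number
    of missing links and the effort still required to close them."""
    return o.research_enabled / ((1 + o.segments_missing) * o.effort_remaining)

candidates = [
    Opportunity("SOGI data for research", research_enabled=8.0,
                segments_missing=2, effort_remaining=3.0),
    Opportunity("Wearables data for research", research_enabled=9.0,
                segments_missing=5, effort_remaining=7.0),
]

for o in sorted(candidates, key=opportunity_score, reverse=True):
    print(f"{o.name}: {opportunity_score(o):.2f}")

The point isn't this particular formula; it's that once the segments and effort are measurable, splitting investment across both areas becomes an arithmetic exercise rather than an argument.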

And now, I think I have the start of a sketch for the framework for my answer.

Time to do some more reading.

   Keith







Friday, February 8, 2019

HealthIT Interoperability isn't Simple

Let's start (as I'm oft wont to do) with a definition:
Ability of a system or a product to work with other systems or products without special effort on the part of the customer. Interoperability is made possible by the implementation of standards. -- IEEE Standards Glossary
Now, let's think about what the phrase "special effort" means.  Thesaurus.com suggests "fall over backwards", "go out of the way", or "take special pains."  I might also suggest "do extra work", or "more than ordinary effort."  It goes back to the definition of what is normal or expected of a system.

In 2006, when my work in interoperability really started to take off, the ability to import data between competing EHR systems wasn't considered to be a normal function of the EHR system.  The ability for patients to access data via an application of their choosing also wasn't considered to be a normal function.  Over time, these capabilities became the norm as a result of three things:


  1. Changes in Health IT Policy over the course of ten years which incentivized the development of these capabilities in systems.
  2. Changes in Payment Policy which made use of these capabilities the norm for providers who wanted to get as much money as they could from CMS.
  3. The development of standards (the second part of the definition) which enabled these capabilities.
Making a policy change isn't simple.  It's a process that takes years to complete, and a ton of people and effort.  CMS doesn't just get to decide to make a change, tell everyone, and then everyone complies.  There's a multi-year long process to making a rule, and that operates under the presumption that there's already legislation in place to make the rule.  If not, there's another several month long (at least) cycle just to get the legislation in place.  And rules have to leave time for people to implement them.  So, from legislation to implementation is at least a 3 year long process, and that's when the pressure is on (i.e., the early years of the HITECH Act).

Once implemented, we now begin a new stage of learning ... what works, what doesn't work, and what needs to change.  And then the process starts over again (and if well designed, it already had moved into the next stage).

Developing standards is equally complex.  The best work takes decades.  But e-mail ... took 60 years to get where it is today.  But the internet (HTTP, HTML and the web) ... took 20 years to get where it is today.  But XML ... took 3 years from inception of the project to become a standard, and it started from a standard (SGML) that had been in existence for 20+ years before it, and is still being advanced today some 20 years later.  But JSON ... is based on work started in 2006 and took another 7 years to be defined as a standard ... and is still being refined today.  CDA is almost old enough to drink as a standard, and was in the womb for three years before its first publication.  The inception of FHIR can be dated back almost a decade, the first published proposal (and there was a lot of work before that was published) goes back eight years.

Whether it's technology or policy, by the time you get there, the goal posts will have changed.  That's called progress.

   Keith






Tuesday, February 5, 2019

History of the Concern Act in CCDA

This particular post results from questions on the HL7 Structured Documents workgroup's e-mail list.  It basically boils down to a question of why the Condition and Allergy observations have an Act structure wrapping the Observation related to the Condition or Allergy that is described within.

The answers are below:
  1. What were the intended semantics associated with the effectiveTime on the concern act?
    This is the time that the reported problem became the concern of the provider.
  2. What were the semantics of the author on the concern act as opposed to the semantics of the author on the Problem Observation?
    One provider can be concerned about an observation made by another provider.  The author of the concern is basically the person saying, this is an issue I need to track.
  3. What were the semantics of the Priority Preference on the concern act as opposed to the semantics of the Priority Preference on the Problem Observation?
    Again, this is related to the distinction between the "concern" and the "condition".  A low priority problem for the patient (e.g., a minor twinge in their tooth), might be a high priority concern for the patient's dentist.
  4. How was the design of the Problem Concern intended to be used relative to representing “the patient’s problem list”?
    The provider's problem list for the patient can be viewed as the provider's record of the concerns they have regarding the health of the patient.


General Background

In 2005, shortly after CDA was introduced, HL7 and IHE collaborated on a joint project to develop templates to exchange a care record summary.  HL7 was to work on Level 1 and 2 templates (documents and sections), and IHE was to work on Level 3 templates (entries) to enable the exchange of this data (you might recall that ASTM was building the CCR around this time as well).  I was the editor for both the HL7 and IHE documents (and later the editor for the HITSP C32, and one of many editors for CCD and later C-CDA).

Involved in this project also from the HL7 and IHE sides was Dan Russler, cochair of the Patient Care workgroup at the time in HL7, and alongside me, of the IHE Patient Care Coordination Workgroup.  Dan brought extensive knowledge of V3 structures and vocabulary that HL7 had been developing in Patient Care to the project, and I was the go to person for mapping this to CDA.  The project had been cooked up by leadership of HL7 and IHE to basically try to get something done on care record exchange, because some of the IHE sponsors who had also been engaged with the CCR project were getting tired of it getting bogged down in ASTM.  Also involved from the HL7 and IHE sides was Larry McKnight, a physician from Siemens.

In HL7, we based a lot of our work on the Vancouver Island Health Authority's e-Medical Summary, developed for CDA Release 2.0 in 2004 (before CDA Release 2.0 had even finished publication).  That organization was the first to use Schematron for validation of CDA documents, something that continues to this day, 15 years later.  But the eMS didn't really get into details for Level 3 templates for problems, meds and allergies (our key areas of concern in this project).  Fortunately, the HL7 Patient Care group had been working on vocabulary and modeling to describe Concerns about a patient.

If you look at the changes to CCD over time, you will see in C-CDA 1.1 that the Problem Concern Act uses CONCERN from ActClass in Act.code.  It appears there rather than in Act.class because CONCERN hadn't been accepted as a V3 vocabulary term at the time CDA R2 was completed.  This resulted in part from long-running debates over the semantics of the CONCERN Act, which didn't finally get resolved until 2014 and later, after the third and final push to complete this work.

The Problem Concern Act in C-CDA is the representation in CDA of the semantics of a Health Concern, which is distinct from the underlying problem that causes the concern.  Concern is about provider awareness of a problem, while the problem observation is directly related to the problem itself.  Consider this: I have Cervical Radiculopathy again, but this time in my left arm.  I told my physician about it on January 19th, but had symptoms going back a week before that, and I was examined on the 22nd.  So, the concern about my nerve pain should be dated 1/19, or perhaps 1/22 after he evaluated me, but the problem itself should have a start date somewhere around 1/11.  When this problem is resolved (say, in a few more weeks), it can be marked in the problem observation with regard to the date it was resolved, and in the concern act when that resolution is reported to my provider (which will likely be after that).
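To make the date distinction concrete, here's a minimal sketch in plain Python (not CDA XML); the class and field names are my own illustration, not C-CDA element names:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ProblemObservation:
    """The problem itself: when it began and when it resolved."""
    description: str
    onset: date
    resolved: Optional[date] = None

@dataclass
class ProblemConcernAct:
    """The provider's tracking of the problem: when it became (and
    stopped being) a concern, which may differ from the problem dates."""
    problem: ProblemObservation
    concern_began: date            # when the provider started tracking it
    concern_closed: Optional[date] = None

# The example above: symptoms started around 1/11, reported 1/19, examined 1/22.
radiculopathy = ProblemObservation(
    "Cervical radiculopathy, left arm", onset=date(2019, 1, 11))
concern = ProblemConcernAct(radiculopathy, concern_began=date(2019, 1, 19))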

The health concern itself can change over time, and acts as a wrapper around the relevant data: When I originally had the problem, we could have included not just the physician observation, but also subsequent diagnostic test data, the treatment plan and evaluation.  All of this development of the Health Concern act evolves from the Problem Oriented Medical Record pioneered by Dr. Larry Weed.

   Keith






Thursday, January 31, 2019

A QueryHealth for AllOfUs

Those of you who have been reading this blog for years are already familiar with what I've written in the past about Query Health.  For those of you who haven't, check the link.

Recently, I've been looking into All of Us, a precision medicine research program that takes the ideas of Query Health to the next level.  The original thinking on Query Health was about taking the question to the data.  All of Us has a similar approach, but instead of querying data in possibly thousands of information systems, it uses a raw data research repository to collect data, and a cloud-based infrastructure to support research access to the curated data that is prepared from the raw data sourced from thousands of information systems.  The best detailed description I've found today is in the All of Us Research Operational Protocol.

There's a lot to be learned from Query Health, and the first thing that any group putting together a large repository of curated and anonymized data is going to have to address is security and confidentiality.  Anonymization itself is a difficult process, and given the large data sets being considered, there's no real way to make the data fully anonymous.

Numerous studies and articles have shown that you don't need much to identify a single individual from a large collection of data collected over time.  A single physician may see 3-6 thousand patients in a year.  Put data from two of them together and the overlap is going to be smaller.  Add other data elements, and pretty soon you get down to a very small group of people, perhaps a group of one, that combined with other data can pretty easily get you to the identity of a patient.

For Query Health, we had discussed this in depth, and regarded counts and categories smaller than 5 as something that needs special attention (e.g., masking of results for small counts).  There was a whole lot of other discussion, and unfortunately my memory of that part of the project (over 8 years old now) is rather limited (especially since it wasn't my primary focus).
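A minimal sketch of that kind of small-count masking might look like this (the threshold of 5 comes from the discussion above; everything else is my own illustration):

SMALL_CELL_THRESHOLD = 5

def mask_small_counts(results: dict) -> dict:
    """Replace any aggregate count below the threshold with a masked value,
    so small cells can't be used to help re-identify individuals."""
    return {
        category: (str(count) if count >= SMALL_CELL_THRESHOLD else "<5")
        for category, count in results.items()
    }

# Hypothetical aggregate query result
print(mask_small_counts({"Type 2 Diabetes": 412, "Rare Condition X": 3}))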

Another area of interest is patient consent, and how that might relate to "authorization" to access data via APIs from other external sources.  A lot of this can be automated today using technologies like OAuth2, OpenID Connect, and for EHR data, SMART on FHIR.  But as you look at the variety of health information data repositories that might be connected to All of Us through APIs, you wind up with a lot of proprietary APIs with a variety of OAuth2 implementations.  That's another interesting standards challenge, probably not on the near-term horizon for All of Us, considering their present focus.
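For the EHR-facing piece, here's a minimal sketch of kicking off a SMART on FHIR standalone authorization; the server base URL, client id, and redirect URI are hypothetical placeholders, and real servers vary in how they publish their OAuth2 endpoints:

import requests
from urllib.parse import urlencode

FHIR_BASE = "https://ehr.example.org/fhir"          # hypothetical endpoint
CLIENT_ID = "my-research-app"                        # hypothetical client
REDIRECT_URI = "https://app.example.org/callback"    # hypothetical redirect

# Discover the server's OAuth2 endpoints (SMART App Launch discovery document)
config = requests.get(f"{FHIR_BASE}/.well-known/smart-configuration").json()

# Build the authorization request the user would be redirected to
authorize_url = config["authorization_endpoint"] + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid fhirUser launch/patient patient/Observation.read",
    "state": "random-opaque-state",
    "aud": FHIR_BASE,
})
print(authorize_url)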

It's interesting how everything comes back eventually, only different.  One of my ongoing roles seems to be "standards historian", something I never actually thought about.  I'm guessing if you hang around long enough, that becomes one of the roles you wind up adopting by default.

Tuesday, January 29, 2019

Relearning: One of my Favorite Things

One of my favorite things to do is go back to something I learned how to do once and relearn it.  I actually mean that.  I try to keep all of the textbooks from classes that taught me something that might be useful later on, because quite often, I know I might need it in the future, even when I don't have plans for it today.  Those books live in different places in my office, from each different era of my career (usually because I arrange by topic, and I generally float between topics over time).  So long as I remember where / what I was learning, I can go back and find the book.

Earlier today, I was looking at a graph and was reminded that it clearly showed the effect of an intervention (basically, hiring my employer to take something on), and I was trying to remember how to evaluate that effect.  Then later today, I was trying to identify when and why measuring days between events is a good process measure.  Both of these topics are related to control charts, a method used for statistical process control.  Over the last week, I was also looking at how to identify when a regime change has occurred, in order to trigger further activity.


So, I dug up my second edition copy of the Healthcare Quality Book (now in its third edition), and turned to the chapter on statistical tools for quality improvement (Chapter 7 in the second edition).  And there we find a whole chapter basically talking about the utility of control charts, and how to compute the values associated with them.

This would allow me to take a chart like the one below (a graph counting some item N over time T), and through a series of computations, compute a graph like the one on the right.

The computed graph shows where the rate of growth of N clearly enters a new regime, and shows it clearly.  Now, for this graph it's quite obvious, but that's because we've got a couple of years of sample data, and what I want is to know that the regime is changing sooner, rather than later.

In other words, I don't want to wait a day longer than I have to in order to see the effect.  Control charts let me do that.  Why is this important?  Often, providers want to know when something abnormal has happened.  What the control chart does is help to establish a normal range of variation, which makes it possible to detect variation due to special cause, i.e., variation that falls outside of the normal range.
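For the curious, here's a minimal sketch of the individuals (XmR) control chart computation I'm describing; the sample counts are made up purely for illustration:

from statistics import mean

def xmr_limits(values):
    """Compute individuals (XmR) control chart limits.

    The center line is the mean of the observations; the control limits
    are the mean +/- 2.66 times the average moving range (the standard
    XmR constant, roughly equivalent to 3 sigma)."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    center = mean(values)
    avg_mr = mean(moving_ranges)
    return center, center - 2.66 * avg_mr, center + 2.66 * avg_mr

def special_cause_points(values):
    """Flag observations outside the limits (special cause variation)."""
    center, lower, upper = xmr_limits(values)
    return [(i, v) for i, v in enumerate(values) if v < lower or v > upper]

# Made-up daily counts of some item N over time T
daily_counts = [12, 14, 11, 13, 15, 12, 14, 13, 27, 30]
print(xmr_limits(daily_counts))
print(special_cause_points(daily_counts))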

In my particular case, I was looking for a way to signal that a particular event needed special attention, and through control charts I can readily do that visually, and anything I can visualize, I can generally compute.  Unfortunately, I haven't yet found the chapter that talks about G/H charts, but that simply means I have more reading and searching to do.

I'd never be in a position to apply these charts to what we applied them to in class, because I'm not a doctor.  But I do get to develop software that doctors use, and I also happen to have my own set of "patients" that often need diagnosis and treatment (software systems).  Control charts can work for them just as well as they do for quality management in patient care.

   Keith

  

Thursday, January 24, 2019

An Old Man's Tools

This weekend, my children and I cleared our driveway of about 3 inches of snow covered by about 1/2 inch of ice, in bitter cold weather after Winter Storm Harper.  While "shoveling snow" (mostly breaking ice chunks), I found myself saying something to my children my father used to say to me: "Let the tool do the work."  Somehow, I was never as able as he to let the tool do the work, and now I understand why.

It's an approach of a more experienced (older), less physically energetic (lazier) tool user.  Letting the tool do the work requires an understanding of HOW the tool works: how to hold it, how to use (and abuse) it, what physical advantages it provides, and how to best take advantage of it to get the job done.  In the main, that can all be chalked up to experience with the tool (or ones like it), including how much experience one has trying to be lazy with it.  My youngest pointed out to me that I also had some physical advantages she didn't (height, and uhmmm... weight).  I found her a post hole digging bar, better suited to her capabilities and stature (it had more mass and could just be dropped through the ice) and she was able to be more successful with it than the shovel.

The same thing applies to software engineering.  The tools you select define the level of skills needed to get the job done.  I can do just about anything with XSLT (and for some things, it's my favorite tool for the job), but that's not a skill that everyone has.  I can write a code generator, but again, that's not necessarily a skill that every engineer has.  In defining my approach to software problems these days, I have to look at tools and approaches differently.  I have to find ways to get things done so that I'm not the one who has to do them.  I have to select and identify tools in ways that enable others to do the work, because I don't scale.

My old man's tools aren't necessarily the tools I select for MY use, but rather the tools I select for younger men and women to use.  My job, like my father's before me, is to pick the right tools, and teach others how to use them according to their skills.

   Keith

P.S. Yes, my birthday was Sunday, and (well) in my fifth decade, I can certainly claim to be "old", although my children still insist I have yet to grow up.