
Wednesday, July 29, 2020

Eco-Theater Footers

You know what I'm talking about, those silly e-mail footers that read something like:

P Please consider the environment before printing this email  

I did some back-of-the-napkin analysis on these silly buggers:

  • Emails per day: 40 emails/day (source: https://review42.com/how-many-emails-are-sent-per-day/)
  • Length of the "Please consider" text: 64 bytes (assumes UTF-8 encoded text)
  • Working days per year: 200 days/year
  • Emails per year: 8,000 emails/year
  • Extra bytes sent per year: 512,000 bytes/year
  • Transmission cost: 9.31E-07 Wh/byte (adjusted downwards for a decade of efficiency improvements, units converted to Wh; source: https://onlinelibrary.wiley.com/doi/full/10.1111/jiec.12630)
  • Storage cost: 2.78E-11 Wh/byte (units converted to Wh; source: http://large.stanford.edu/courses/2018/ph240/jiang2/)
  • Total energy per year: 0.477 Wh/year
  • Paper: 0.032 Wh/page (assumes an average e-mail is about one page, and that the energy cost of 20lb paper is enough larger than that of the paper used in journal production to make up the difference; source: https://scholarlykitchen.sspnet.org/2012/01/19/the-hidden-expense-of-energy-costs-print-is-costly-online-isnt-free/)
  • Break-even point: 14.9 e-mails not printed (about 0.2% of the year's e-mails)
  • Grams of carbon per Wh: 0.2252252 g/Wh
  • Carbon weight: 0.107399 g/year

It turns out that the carbon cost of sending these works out to about 0.1 g/year for a person sending 40 e-mails a day over a working year of 200 days (8,000 e-mails).  Oh, and if you use Outlook, and it sends multi-part MIME because you prettify your e-mails, then double it.  For this to balance out the energy cost of printing (without the doubling), the footer has to prevent someone from printing about 15 e-mails.  Considering that I've not printed a single e-mail in more than a decade, and almost all of my colleagues operate the same way, it seems unlikely that it has even that much efficacy.
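If you want to check my arithmetic, here's a minimal sketch in TypeScript that reproduces the table above.  Every constant is taken straight from the table; nothing here is new data.

```typescript
// Back-of-the-napkin math from the table above; all constants come from the table.
const bytesPerFooter = 64;              // UTF-8 "Please consider..." text
const emailsPerDay = 40;
const workingDaysPerYear = 200;
const transmissionWhPerByte = 9.31e-7;  // Wh per byte transmitted
const storageWhPerByte = 2.78e-11;      // Wh per byte stored
const paperWhPerPage = 0.032;           // Wh per printed page
const gramsCarbonPerWh = 0.2252252;

const emailsPerYear = emailsPerDay * workingDaysPerYear;            // 8,000
const extraBytesPerYear = emailsPerYear * bytesPerFooter;           // 512,000
const energyWhPerYear =
  extraBytesPerYear * (transmissionWhPerByte + storageWhPerByte);   // ~0.477 Wh
const breakEvenPagesNotPrinted = energyWhPerYear / paperWhPerPage;  // ~14.9
const carbonGramsPerYear = energyWhPerYear * gramsCarbonPerWh;      // ~0.107 g

console.log({ emailsPerYear, extraBytesPerYear, energyWhPerYear,
              breakEvenPagesNotPrinted, carbonGramsPerYear });
```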

It's stupid, it's not really saving the world, and it's just more stuff to not read.  So, as I say, eco-theater.

    Keith

Monday, July 27, 2020

When National Informatics Infrastructure Fails

We had an interesting discussion on The SANER Project call today.  One of the issues that came up was how to get the trigger codes from RCKMS (the Reportable Condition Knowledge Management System).

As a former employee of an EHR Vendor, I'm well aware of VSAC (the Value Set Authority Center) as a source of value sets (I also played a role in writing the HIT Standards Committee recommendation for the creation of VSAC).

As an Informaticist, I'm also well aware of LOINC Value Sets and SNOMED CT Value Sets for COVID that many have published.

As an Interoperability policy geek, I am also familiar with the ONC Interoperability Standards Advisory COVID-19 content.

As a highly educated expert, I CANNOT, for the life of me, tell you which of these is the most authoritative content to use for COVID-19 reporting, whether for what had been reported to NHSN, or presently for HHS Protect, or for eCase Reporting.  I do know enough to build a pretty damn good value set of my own for a system that I'm responsible for developing, but that's NOT what I want.  Developers don't need more work; we need experts to do their work and make it readily available for others, and not just to publish it, but also to market it for their jurisdictions so that it can be found.

I can also tell you that there's simply FAR too much data available to developers to let this situation continue.

Here are some rules for organizations responsible for these value sets to consider:
  1. Figure out where the developers who have built systems get their information, and publish to those sources so they can get more.
  2. Consider the cognitive load on developers. Don't give them hard-to-remember web site names with 80-character URLs to get to the data they need.  Register a reasonable and memorable domain name.  Make the information easy to find. Remember that not every developer works for a hospital or an EHR vendor.
  3. Remember that data distribution and publication need different governance for access than data creation. If you have a system that supports creation, don't force a developer to get credentials for that system just to access data; just give them a URL.  Sure, give them a way to share an e-mail address for updates, but don't force them to create ONE more login just to learn what the heck they need to do.
  4. Make sure that the distribution system has a way of reporting that data (especially value sets) using standards. Sure, CSV files are good, but come on, we're trying to work on modernization and APIs.  Put the same effort into your distribution mechanisms with respect to APIs and publication that you expect developers to put into the systems that will use the standards you are promoting.
    <rant>There's absolutely NO EXCUSE for a system designed to support FHIR to have an easily accessible CSV distribution mechanism for value sets, BUT not to have a FHIR ValueSet distribution mechanism.</rant>  (A sketch of what that could look like follows this list.)
  5. Curation of a value set is a responsibility that steps up as demand increases. If you are responsible for curating a value set and the demand for updating it steps up due to an emergency, then your update cadence needs to step up with it.  Quarterly may be fine for things that change annually, but when the situation is changing week by week, changes are also needed week by week.  Yeah, I know, funding and all that ... figure it out; that's part of what being responsible means.
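To make the rant under item 4 concrete, here's a minimal sketch of what consuming a FHIR ValueSet distribution mechanism could look like.  The server base URL and value set id are placeholders I made up, not real RCKMS or VSAC endpoints; the only standard pieces here are the FHIR ValueSet $expand operation and the application/fhir+json media type.

```typescript
// Hypothetical example: FHIR_BASE and VALUE_SET_ID are placeholders, not real endpoints.
const FHIR_BASE = "https://example.org/fhir";
const VALUE_SET_ID = "covid-19-reporting-trigger-codes";

async function fetchTriggerCodes(): Promise<void> {
  // One unauthenticated GET is all a developer should need.
  const response = await fetch(`${FHIR_BASE}/ValueSet/${VALUE_SET_ID}/$expand`, {
    headers: { Accept: "application/fhir+json" },
  });
  if (!response.ok) {
    throw new Error(`ValueSet expansion failed: ${response.status}`);
  }
  const valueSet = await response.json();
  // expansion.contains holds the flat list of codes a developer actually needs.
  for (const coding of valueSet.expansion?.contains ?? []) {
    console.log(coding.system, coding.code, coding.display);
  }
}

fetchTriggerCodes().catch(console.error);
```

A single URL like that can be published, bookmarked, and fetched from a build script, which is exactly the kind of cognitive load reduction item 2 asks for.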
Of course, all my international friends are simply going to tell me that the fundamental issue is that the US does not have a coordinated national infrastructure.  I can't argue with that.  Every agency is a plural of services, centers, or institutes; I just wish ...



Friday, July 24, 2020

A model for SANER Measure Automation

By Birger Eriksson CC BY-SA 3.0
One of the principles of FHIR is to provide the essential data about the standard itself, the same data that is necessary to support automatic generation of implementations.  This also shows up in definitional resources, resources that describe the essential structure of other resources, such as Measure, Questionnaire, and CarePlan, in which these resources can be given a computer-friendly name to distinguish them from others of the same type.

While it likely won't be the subject of anything appearing in The SANER Implementation Guide, I was considering (more like, the idea popped into my head when I was writing) how one would define a DOM-like binding for a Measure in a language like JavaScript.

In SANER Measures, the measure has a name, a period, a subject, a reporter, and groups of populations.  Each group has a code, and so does each population.  I won't get into strata, but the same principles would apply to them.

If you've got a measure with the Name X, Groups Y1, Y2, and Y3, and populations within those groups of X1a, X1b, X1c, ... X3a... X3c.

Then a MeasureReport that is defined by that measure has Type X (with supertype MeasureReport).  MeasureReport has some common features, such as subject, reporter, and period.  So, you might reference X.subject, X.reporter, et cetera.  But group is kind of an intrinsic collection associated with a Measure, so you might reference data in groups as X.Y1, X.Y2, X.Y3.  To get to the score, reference X.Y1.score (rather than the more laborious measureScore).

Within a group, X.Y1.X1a would reference the population "named" X1a, where X1a is simply the code value.

For the CDC Measures we had been working from, that would give us something like CDCPatientImpactAndHospitalCapacity as the type; for an instance X of that type, X.Beds would refer to the Beds group, and X.Beds.numTotBeds to the first population of that group.
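Here's a rough sketch, in TypeScript, of what such a binding could look like at runtime.  This isn't anything defined by the SANER IG; it's just the idea above, using a Proxy so that group and population codes become property names on a plain MeasureReport instance.

```typescript
// Sketch only: resolve group and population codes as properties of a MeasureReport.
type Coded = { code?: { coding?: { code?: string }[] } };

const codeOf = (item: Coded): string | undefined => item.code?.coding?.[0]?.code;

function bind(report: any): any {
  return new Proxy(report, {
    get(target: any, prop: string | symbol) {
      if (typeof prop !== "string" || prop in target) {
        return target[prop as any];                   // subject, reporter, period, ...
      }
      const group = (target.group ?? []).find((g: any) => codeOf(g) === prop);
      if (!group) return undefined;
      return new Proxy(group, {
        get(g: any, p: string | symbol) {
          if (p === "score") return g.measureScore;   // X.Beds.score
          if (typeof p !== "string" || p in g) return g[p as any];
          const pop = (g.population ?? []).find((x: any) => codeOf(x) === p);
          return pop?.count;                          // X.Beds.numTotBeds -> its count
        },
      });
    },
  });
}

// Hypothetical usage with a MeasureReport instance bound as X:
//   const X = bind(measureReportJson);
//   console.log(X.subject, X.Beds.numTotBeds, X.Beds.score);
```

The Proxy resolves names dynamically; a code-generation step over the same Measure structure could just as easily emit a static CDCPatientImpactAndHospitalCapacity type.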

It's a pretty simple translation from these names to the components of the Measure, and back.

This is a pretty helpful observation, in that it relates definitional resources in FHIR to user-defined types (or structures, or classes) in programming environments, and variables of those types to instances of the resource defined by the definitional resource.

Principles for the SANER Project

Principles by Nick Youngson CC BY-SA 3.0 Alpha Stock Images
One of the things I try to understand for every project I work on is what the fundamental principles of the project are.  Often I even attempt to get these principles formally adopted by the project (when I can remember this important step).  That's because these principles facilitate and simplify decision making.  When the principles are clearly stated and agreed to, then when a contentious decision comes up, it can be evaluated against those already agreed-upon principles, and the decision making can move forward on that basis, rather than through the sometimes deeply emotional arguments that can otherwise result.

Oftentimes, one can understand the principles of a project by looking at definitions of the project scope or approach (i.e., methods).  We never actually established the principles of The SANER Project, but that never stopped me from thinking about them and at least developing a mental model of what I think they are.  So, here they are, enumerated and explained in modest detail:

  1. The work needs to be done quickly.  As a result, it must
    1. Work with existing systems where possible, minimizing new development.
    2. Utilize off the shelf components and FHIR Servers where feasible.
    3. Use existing FHIR capabilities and operations where feasible.
    4. Minimize efforts needed to integrate external systems.
  2. The work needs to be aligned with current US national and international initiatives, thus it will use FHIR Release 4.
  3. While the end-goal is automation of the supporting solution, there must be intermediate features which still enable value for users who are not able to provide more advanced capabilities.  This is really just another way of saying "minimize new development", but more clearly describes that partial implementation must both be possible, and add value.
  4. Finally, in the extreme situation impacting facilities who might use this solution, a great deal of attention must be paid to verification and validation of the specification.  There is absolutely no room for implementers to break a system that is otherwise functional and vital to operations.
Decision points in The SANER Project have been constantly evaluated against this mental model as we tracked the work.

Tuesday, July 21, 2020

Who do you trust?

If you've been involved in software for any period of time, you've heard the terms validation and verification.

If you've been involved in standards for any period of time, you've also heard the terms certified and accredited.

To non-pedants, these two pairs of terms are often slightly confusing, because one term of each pair is often used incorrectly to mean the other.

The differences are subtle:

  • Validation is the process of ensuring that a specification meets the customer requirements.
  • Verification is the process of ensuring that a product meets the specifications.

  • Accreditation is a third party process of ensuring that an organization has the capacity (skill set and processes) to create validated and verified products.
  • Certification is a third party process of ensuring that a product meets a specification, and may include verification that a particular set of skills and processes were used during development.
In the end, Validation and Verification are everyday engineering processes in an organization. Accreditation is something that occurs periodically, and which ensures that organizations are following those everyday processes.  And Certification happens sometime around product delivery to ensure that a product is verified.

The word accreditation includes the same stem as credible, and is about establishing trust in an organization.

In software development, these words are all meaningful when you think about the acquisition of third-party software products for use in your own software development.  When you acquire that software (either through purchase, or via open source), you need to validate that it meets your needs, and may also need to verify it.  It depends on your organization's process requirements.

In my past, one of the ways to acquire new software included a step where we basically "accredited" an organization: we essentially convinced ourselves (or our leaders) that the organization that developed the materials had good processes and followed them, which meant that we had less effort to go through when we verified that the software met our needs.  You basically do the same thing when you look at a brand to make a decision (and yes, Apache is a brand).

This can be very helpful, because a full-blown verification of something like an XSLT processor is a pretty extensive (and expensive) task.

As I look at the recent discussions in the media about the exchange of Situation Awareness data, what we face is this challenge of trust.  The question of who you trust is important.  CDC has a trusted brand; HHS Protect is brand new, and has yet to establish such a brand, and the consequent trust.

The last thing any software engineering manager in the non-governmental world is going to allow is the replacement of a trusted, branded system with a novel system in the middle of a product release.  It's just NOT the way we've learned to do things successfully.  Yet, there have been times when it has been necessary (I'm not saying it is at this time).  What has to happen then is that the NEW system has to be validated and verified.  It has to be thoroughly tested, and that also means that a lot about how it works has to be made transparent to the people who are going to rely on that system.  That's how trust works.

The challenge of validation is a real one.  I've worked on software projects where we built a fully verified system, but failed to validate one of the requirements (that it was sufficient to be as accurate as a human), and that caused the product to fail.  Humans can explain their rationale (right or wrong), but the product we built (and verified was as accurate as a human) could NOT explain its rationale, and so failed to meet a fundamental requirement of its users, which was to be something that could be trusted.  And since trust failed, the product failed.


I don't know if HHS Protect is going to be successful in the long run. HHS Protect's  primary requirement is to be a trusted system reporting the data about what's going on with COVID. It won't succeed if it cannot be trusted.  That's not a statement of opinion, or about politics.  That's a statement of experience.


Where did the CDC NHSN COVID-19 Measure Definition References Go?

Last week, after the Federal Government shifted the responsibility for Hospital Situational Awareness reporting from NHSN at the CDC to elsewhere in HHS, the source materials for the measures we were using were taken down from the CDC NHSN web site.  This represents one of the challenges of the digital age: the ability to erase information.  Unlike physical documents or books, web pages can simply be disappeared.

Fortunately, even while documents can be lost and books go out of print, we generally maintain archives of those sorts of source materials.  And the Internet has an archive as well.  So, when the SANER Project lost those reference links, we decided we'd grab them from the Internet's archive.

The SANER project may not be the only one still wanting to work with that data (for example, there's Lantana's SMART on FHIR based reporting app that is still under development).

These files will be added to the SANER IG as soon as I publish the next version, but until then, you can download the most recent versions from Google Drive.  And if you want to follow the history, you can look for these same files in the Internet Archive to see all the versions that have been stored.


Wednesday, July 15, 2020

How long does it take?

How long should it take to deploy a new measure?  In the ideal world, where everyone uses a common information model and supports APIs, the answer should be: almost no time at all; the time should be measured in units of hours or smaller.

GIVEN a measure has been defined
AND it is ready for deployment
WHEN that measure is available
THEN it is deployed to a system
AND is available to be reported on within an hour of use.

The next issue has to do with validating that the measure works as expected.  You don't just want to install new software and have it start reporting garbage data.  Someone needs to verify that it works, and approve it for reporting.  So now you have to schedule a person to deal with that, and they have to fit it into their schedule.  This should take on the order of days or less.

GIVEN a measure has been deployed,
WHEN that measure has been validated for use,
THEN reporting on it begins.

But wait, somebody has to define this measure and do so clearly.  How long does that take?  Realistically, if you actually KNOW ALL of the detail of what you are doing, AND the data is available, a competent analyst can probably work out an initial draft in a week or so.

GIVEN that the information needed to be reported is available in the reporting system
WHEN the measure is defined computably
THEN it can be deployed.

But wait, you actually have to test this out.  There's that validation step that involves a human, and that can produce errors in interpretation.  Written (or spoken) language is NOT precise.  It has ambiguity, which can result in different interpretations.  So, you have to check that.  So we need to change that last statement to:

THEN it can be deployed for testing.

Now you have to involve some test subjects (people and systems), and work that through.  And you might add some pre-check time to verify that the requirements as written match the automation as developed.  And you have to add some time to address dealing with issues that come back from this.  With all the interactions involved, your week just became several weeks, perhaps even a month.

So, how long should it take to report on a new measure?  Starting from scratch?  Maybe a month.  If you have to get agreement from a lot of people/organizations on the measure, you have to factor that process in, and so now, you have to add time for a process of review and evaluation.  Now you are talking about a quarter or perhaps two depending on volume of input sources, and feedback from them.

So, the fact that it might take a month to create a new measure with enough detail to support computing is not a surprise, at least to me or anyone else who has done this before.  It beats the hell out of throwing a spreadsheet over a wall and asking someone to populate it from some ideal view that they think should exist in the world.

The real issue, for a lot of this, is not "How long does it take to deploy a new measure?", but rather, how ready are we to deal with this kind of emergency?  The time to prepare for a disaster is before it happens.  You may not know exactly what you will need, but you can make some pretty good guesses.  In the SANER Project, we often wrote requirements well in advance of their being released as official measures.  The project, which started on March 20th, had identified just about everything released to date (now July) by April 15th.  We've stopped adding requirements for new measures because they've served their purpose in proving out that the system we've been building will support the measures we need, but here are a few additional measures we know are necessary:

  1. Ambulatory provider staffing and availability.
  2. Immunization supplies.
  3. Immunization reporting capacity (how many health systems can report immunizations to an Immunization Registry).
  4. Drug supplies for critical drugs.
  5. Other measures might include aspects of SDOH, such as # of individuals with food, housing or income challenges due to COVID-19 by county or smaller regional subdivisions such as census tract (basically neighborhood).

You can basically figure out what to measure by thinking through the disease and pandemic process.  It's not like we haven't seen pandemics before (just not recently at THIS scale), or other emergencies for that matter.

The point is, complaining about how long it takes to put together a reasonable, accurate, and automatable measure SHOULD be done beforehand.  And putting together a system to handle it should also have been done beforehand.

My wife and I have a short meme about SHOULD HAVE:

Should'a, would'a, could'a ... DIDN'T. 

We didn't. So we have to do it now.  And it will take what it takes.  Bite the bullet, do it right, so it will be ready for the next time, or the next flu season.  So, that's what I'm doing.