
Sunday, September 23, 2018

Who is responsible for clinical workflow?

Dirk Stanley asked the above question via Facebook. My initial response focused on the keyword "responsible". In normal conversation, this often means "who has the authority", rather than "who does the work", which are often different.

I applied these concepts from RACI matrices in my initial response. If we look at one area -- medication management -- physicians still retain accountability, but responsibility, consulting, and informing relationships have been added to this workflow in many ways over decades, centuries, and millennia.

At first physicians did it all: prescribe, compound, and dispense. Assistants took over some responsibility for the work, eventually evolving into specialties of their own (nurses and MAs). Compounding and dispensing further evolved into their own specialty, with apothecaries and pharmacists taking on some of the responsibilities. This resulted in both the expansion and contraction of the drug supply: more preparations would be available, but physicians would also be limited to those available in their supplier's formulary. Introduction of these actors into the workflow required the physician to inform others of the order.

The number of preparations grew beyond the ability of human recall, requiring accountable authorities to compile references describing benefits, indications for and against, possible reactions, and so on, which physicians would consult. I recall as a child browsing my own physician's copy of the PDR -- being fascinated by the details. This information is now available electronically in physician workflows via clinical decision support providers.

Compounding and preparation led to further specialization, introducing manufacturing and subsequent regulatory accountability, including requirements for manufacturers to inform reference suppliers about what they make.

Full accountability for what can be given to a patient is, at this stage, no longer under direct physician control.

Health insurance (and PBMs) changed the nature of payment, further complicating matters and convoluting drug markets well beyond the ability of almost anyone to understand. The influence of drug prices on treatment efficacy is easily acknowledged, but most physicians lack sufficient information to be accountable for the impact of drug pricing on efficacy and treatment selection. PBMs are making this information available to physicians and their staff, and EDI vendors are facilitating this flow of information.

Physicians, pharmacists, and payers variously accept different RACI roles to ensure their patients are taking / filling / purchasing their medications. In some ways this has evolved into a shared accountability. I routinely receive inquiries from all of the above; my own responsibility to acquire, purchase, and take my medications has evolved into simple approval for shipping them to my home.

Attempts to improve the availability of drug treatment to special populations (e.g., Medicaid) via discount programs such as 340B add to physician responsibilities: they must inform others of their medication choices for their patients.

Recently, information about the prevalence of opioid-related deaths and adverse events has introduced yet another stakeholder into the workflow. State regulatory agencies are informed of patient drug use, and they want to share prescription information with the physicians accountable for ordering medications.

My own responsibilities as a software architect require me to integrate the needs of all these stakeholders into a seamless workflow. One could explore this process further; I've surely missed some important detail somewhere.

And yet, after all this, the simple question in the title remains ... answered and yet not answered at the same time.

     -- Keith

P.S. This question often comes up in a much different fashion, and one I hear way too often: "Who is to blame for the problems of automated clinical workflow in EHR systems?"

Wednesday, September 19, 2018

Loving to hate Identifiers

Here's an interesting one.  What's the value set for ObservationInterpretation?  What's the vocabulary?

Well, it actually depends on who you ask, and the fine details have rather little effect on the eventual outcome.

Originally defined in HL7 Version 2, the Observation abnormal flags are defined as content from HL7 Table 0078.  That's HL7-speak for an official table, which has the following OID: 2.16.840.1.113883.12.78.  It looks like this.

Value  Description
L      Below low normal
H      Above high normal
LL     Below lower panic limits
HH     Above upper panic limits
<      Below absolute low-off instrument scale
>      Above absolute high-off instrument scale
N      Normal (applies to non-numeric results)
A      Abnormal (applies to non-numeric results)
AA     Very abnormal (applies to non-numeric units, analogous to panic limits for numeric units)
null   No range defined, or normal ranges don't apply
U      Significant change up
D      Significant change down
B      Better--use when direction not relevant
W      Worse--use when direction not relevant

For microbiology susceptibilities only:
S      Susceptible*
R      Resistant*
I      Intermediate*
MS     Moderately susceptible*
VS     Very susceptible*

When we moved to Version 3 with CDA, we got ObservationInterpretation, and it looks something like the following.  Its OID is 2.16.840.1.113883.5.83.  The table is even bigger (click the link above) and has a few more values.  But all the core concepts below (found in the 2010 normative edition of CDA) are unchanged.

ObservationInterpretation
One or more codes specifying a rough qualitative interpretation of the observation, such as "normal", "abnormal", "below normal", "change up", "resistant", "susceptible", etc.

Lvl  Type: Domain name and/or Mnemonic code               Concept ID  Mnemonic  Print Name
     (Definition/Description appears on the line below each entry)

1    A: ObservationInterpretationChange                   V10214
     Change of quantity and/or severity. At most one of B or W and one of U or D allowed.
2      L: (B)                                             10215       B         better
       Better (of severity or nominal observations)
2      L: (D)                                             10218       D         decreased
       Significant change down (quantitative observations, does not imply B or W)
2      L: (U)                                             10217       U         increased
       Significant change up (quantitative observations, does not imply B or W)
2      L: (W)                                             10216       W         worse
       Worse (of severity or nominal observations)
1    A: ObservationInterpretationExceptions               V10225
     Technical exceptions. At most one allowed. Does not imply normality or severity.
2      L: (<)                                             10226       <         low off scale
       Below absolute low-off instrument scale. This is a statement depending on the instrument, and logically does not imply LL or L (e.g., if the instrument is inadequate). If an off-scale value is also low or critically low, one must also report L and LL respectively.
2      L: (>)                                             10227       >         high off scale
       Above absolute high-off instrument scale. This is a statement depending on the instrument, and logically does not imply HH or H (e.g., if the instrument is inadequate). If an off-scale value is also high or critically high, one must also report H and HH respectively.
1    A: ObservationInterpretationNormality                V10206
     Normality, Abnormality, Alert. Concepts in this category are mutually exclusive, i.e., at most one is allowed.
2      S: ObservationInterpretationNormalityAbnormal (A)  V10208      A         Abnormal
       Abnormal (for nominal observations, all service types)
3        S: ObservationInterpretationNormalityAlert (AA)  V10211      AA        Abnormal alert
         Abnormal alert (for nominal observations and all service types)
4          L: (HH)                                        10213       HH        High alert
           Above upper alert threshold (for quantitative observations)
4          L: (LL)                                        10212       LL        Low alert
           Below lower alert threshold (for quantitative observations)
3        S: ObservationInterpretationNormalityHigh (H)    V10210      H         High
         Above high normal (for quantitative observations)
4          L: (HH)                                        10213       HH        High alert
           Above upper alert threshold (for quantitative observations)
3        S: ObservationInterpretationNormalityLow (L)     V10209      L         Low
         Below low normal (for quantitative observations)
4          L: (LL)                                        10212       LL        Low alert
           Below lower alert threshold (for quantitative observations)
2      L: (N)                                             10207       N         Normal
       Normal (for all service types)
1    A: ObservationInterpretationSusceptibility           V10219
     Microbiology: interpretations of minimal inhibitory concentration (MIC) values. At most one allowed.
2      L: (I)                                             10221       I         intermediate
       Intermediate
2      L: (MS)                                            10222       MS        moderately susceptible
       Moderately susceptible
2      L: (R)                                             10220       R         resistent
       Resistent
2      L: (S)                                             10223       S         susceptible
       Susceptible
2      L: (VS)                                            10224       VS        very susceptible
       Very susceptible

It's got a value set OID as well.  It happens to be: 2.16.840.1.113883.1.11.78.  Just in case you need more clarification.

Along comes FHIR, and we no longer need to worry about OIDs any more.  Here's the FHIR table.  Oh, and this one is called: http://hl7.org/fhir/v2/0078. That should be easy enough to remember.  Oh, and it has a value set identifier as well: http://hl7.org/fhir/ValueSet/observation-interpretation.  Or is it http://hl7.org/fhir/ValueSet/v2-0078?  

Just to be safe, FHIR also defined identifiers for Version 3 code systems.  So the HL7 V3 ObservationInterpretation code system is: http://hl7.org/fhir/v3/ObservationInterpretation.  Fortunately for us, the confusion ends here, because it correctly says that this code system is defined as the expansion of the V2 value set.
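
To make the soup concrete, here's a minimal sketch (assuming HAPI FHIR's DSTU3 model classes; purely illustrative) of carrying that same old "H" flag around under its various names:

    import org.hl7.fhir.dstu3.model.CodeableConcept;
    import org.hl7.fhir.dstu3.model.Coding;
    import org.hl7.fhir.dstu3.model.Observation;

    public class InterpretationExample {
        public static void main(String[] args) {
            Observation obs = new Observation();
            // In FHIR (STU3 era), the coding points at the V2 table's URI ...
            obs.setInterpretation(new CodeableConcept().addCoding(new Coding()
                    .setSystem("http://hl7.org/fhir/v2/0078")
                    .setCode("H")));
            // ... while the same concept in a CDA document travels with
            // codeSystem="2.16.840.1.113883.5.83" (V3 ObservationInterpretation),
            // and an HL7 V2 message simply puts "H" in OBX-8 (abnormal flags).
        }
    }

Same code, three different names for where it comes from.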

And, so, through the magic of vocabulary, we have finally resolved what Observation interpretation means.  Which leaves us pretty much right back where we started last century.

I'm poking fun, of course.  This is just one of the minor absurdities that we have in standards for some necessary but evil reasons that do not necessarily include full employment for vocabularists. The reality is, everyone who needs to know what these codes mean already knows, and we've been agreeing on them for decades.  We just don't know what to call them.  This is one of those problems that only needs a solution so that we don't look ridiculous.  Hey, at least we got the codes for gender right ... right?  Err, tomorrow maybe.




Tuesday, September 18, 2018

The Role of an Interop Expert


In my many years as a Standards Geek and later as an Interop Guru, one of the things that I learned is that my customers will rarely name interoperability as a core requirement.  Nor do they want to pay extra for it. The only times they have are when it has buzz (like FHIR does now) or mandates (like C-CDA).  And if you drill into the details, they will rarely scratch the surface of the buzz or the mandate.

If you look at the work breakdown structure for a product, you’ll see that a product release is made up of features that are delivered by software components that have different capabilities.  If one release has 3 features (or 30, your mileage may vary), each feature requires 3 components, and each component needs to use 3 capabilities ... (turtles all the way down), then engineers will have to deliver 3^3 = 27 capability implementations, plus glue for 9 components, plus baling twine for 3 features.  That’s a lot of work.

But if you design capabilities with reuse in mind then it gets a lot easier.  Let’s look at three features:
  • When patient labs are out of range in a lab report, please flag it for my attention.
  • Please route information to Dr. Smith when I’m on vacation.
  • Let me know when Susan’s labs come back.
Each of these is a very distinct requirement, and yet every single one of them can use the FHIR Subscription capability (which in turn can be built over the same parts used to support search) to implement their functionality.  Each one of these follows the pattern: when this, then that.  And so the biggest piece of that is determining the “when this” ... which is exactly what subscriptions are good for.
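
For instance, the third feature above might boil down to a Subscription something like this (a sketch assuming HAPI FHIR's STU3 model classes; the patient id and endpoint are made up):

    import org.hl7.fhir.dstu3.model.Subscription;
    import org.hl7.fhir.dstu3.model.Subscription.SubscriptionChannelType;
    import org.hl7.fhir.dstu3.model.Subscription.SubscriptionStatus;

    public class SusansLabs {
        public static void main(String[] args) {
            Subscription sub = new Subscription();
            sub.setStatus(SubscriptionStatus.REQUESTED);
            sub.setReason("Let me know when Susan's labs come back");
            // The "when this": final laboratory results for one patient.
            sub.setCriteria("Observation?patient=Patient/susan-example&category=laboratory&status=final");
            // The "then that": notify a (hypothetical) REST endpoint.
            sub.getChannel()
               .setType(SubscriptionChannelType.RESTHOOK)
               .setEndpoint("https://example.org/notify/labs")
               .setPayload("application/fhir+json");
        }
    }

The other two features differ only in their criteria and in what the receiving endpoint does with the notification, which is exactly the kind of reuse I'm talking about.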

It’s my job to:
  1. Know this
  2. Call it out in designs
  3. Educate my teams to think this way.
It turns an insurmountable pile of work into an achievable collection of software that can be delivered.

    Keith





Thursday, September 13, 2018

Sorting Codes and Identifiers for Human Use


These pictures demonstrate one of those little annoyances that has bothered me for some time.

 

Have you ever used a tool to display a list of OIDs, codes or other data?  Were you surprised at the results of sorting of codes?  Most of us who work with these things often are, but only for a moment.  However, that brief moment of cognitive dissonance is just enough to distract me from whatever I'm doing at the moment and thus it takes me just a little bit longer to finish what I'm doing.

The first example above came from searching LOINC for blood pressure, then sorting by LOINC code.  Notice that all the 8### series codes appear after the 7####-# and 8####-# series (starting at page 2).  The second comes from a search of Trifolia for Immunization, sorting by identifier.  Look at the order of the identifiers used for Immunization entries in the HIV/AIDS Services Report guide.  Note that urn:oid:2.16.840.1.113883.10.20.31.3.36 comes before urn:oid:2.16.840.1.113883.10.20.31.3.4.

The problem with codes that have multiple parts containing numeric subparts is that they don't sort the way we expect when you apply alphabetic sorting rules to them.

The "right" way to sort codes is to divide them up at their delimiters, and then sort them component-wise, using an appropriate sort (alpha or numeric) for each component, based on the appropriate sort for that component, in the right order.

For LOINC, the right sort is numeric for each component, and the delimiter is '-'.  For OIDs, the right sort is numeric for each component, and the delimiter is '.'.  For both of these, the right order is simply sequential.  For HL7 urn: values, it gets a bit weird.  Frankly, I want to sort them first by their OID, then by their date, and finally by the urn:oid: or urn:hl7ii: parts, but putting urn:oid: first because it is effectively equivalent to a urn:hl7ii: without an extension part.

A comparator for codes that would work for most of the cases I encounter would do the following (a sketch in code follows the note below):

  1. Split the code into an array of strings at any consecutive sequence of delimiters, e.g., code.split("[-/.:]+").
  2. Then, for each pair of components in the two arrays:
  3. If one component is numeric and the other is not, the numeric component sorts first.
  4. If both components are numeric, order them numerically.
  5. Otherwise, sort alphabetically, using a case-sensitive collation sequence.

NOTE: The above does not address my urn:hl7ii: and urn:oid: challenge, but I could live with that.
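
Here's a minimal sketch of such a comparator in Java (Java seems fair game, given the code.split() above); treat it as illustrative rather than production code:

    import java.util.Comparator;

    /**
     * Compares codes "the human way": split on delimiters, then compare
     * component-wise, numerically where both components are numeric, and
     * alphabetically (case sensitive) otherwise.
     */
    public class CodeComparator implements Comparator<String> {
        @Override
        public int compare(String a, String b) {
            String[] as = a.split("[-/.:]+");
            String[] bs = b.split("[-/.:]+");
            int n = Math.min(as.length, bs.length);
            for (int i = 0; i < n; i++) {
                int c = compareComponent(as[i], bs[i]);
                if (c != 0) {
                    return c;
                }
            }
            // If all shared components match, the shorter code sorts first.
            return Integer.compare(as.length, bs.length);
        }

        private int compareComponent(String x, String y) {
            boolean xNum = x.matches("\\d+");
            boolean yNum = y.matches("\\d+");
            if (xNum && yNum) {
                // Both numeric: compare as numbers (BigInteger copes with long OID arcs).
                return new java.math.BigInteger(x).compareTo(new java.math.BigInteger(y));
            }
            if (xNum) return -1;   // numeric components sort ahead of non-numeric ones
            if (yNum) return 1;
            return x.compareTo(y); // case sensitive alphabetic comparison
        }
    }

With that, Arrays.sort(identifiers, new CodeComparator()) puts urn:oid:2.16.840.1.113883.10.20.31.3.4 ahead of urn:oid:2.16.840.1.113883.10.20.31.3.36, and sorts LOINC codes by their numeric parts rather than digit by digit.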

There are more than 50,000 LOINC codes, 68,000 ICD-10 codes, 100,000 SNOMED CT codes, 225,000 RxNorm codes, and 350,000 NDC codes.  If sorting codes the right way saved us all just 1/2 second per code in each of those vocabularies, we'd each have days more time in our lives.

I don't know about you, but I could sure use that time.


Tuesday, September 11, 2018

HealthIT Interfacing in the Cloud

I've been thinking a lot lately about Interfacing in the Cloud, and Microservices strategies, and how that makes things different.  Consider interface engines today.

The classic interface engine is a piece of middleware that supports a number of capabilities:
  1. Routing
  2. Queuing
  3. Transformation (message reformatting, tweaking messages)
  4. Translation (codes, cross walks, et cetera)
  5. Monitoring and Management
  6. Transport (HTTP/HTTPS, TCP, SOAP, FTP/SFTP, REST, SMTP/IMAP/POP3)
It's a piece of enterprise middleware, and classically a monolithic application that lets you build highly orchestrated pipelines to do the work.

In the micro-services world, each of these areas could be its own bounded context, with a few extra pieces thrown into the mix (e.g., the Message itself is probably a bounded context shared by many pieces).  Instead of building highly orchestrated message pipelines, we could instead be building choreographed services, even one-off services to support specific needs, composed into operational pipelines that would be highly scalable and configurable in a cloud environment.
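
As a thought experiment, one of those bounded contexts -- translation, say -- might expose nothing more than this (the names are entirely hypothetical):

    /**
     * A sketch of a translation micro-service contract pulled out of the monolith.
     * Routing, queuing, transformation, and monitoring would each get similarly
     * narrow contracts, and a choreographed pipeline would be wired together from
     * whichever of them a given interface actually needs.
     */
    public interface TranslationService {
        /** Translate a code from a source vocabulary to a target vocabulary via a configured cross-walk. */
        String translate(String code, String sourceSystem, String targetSystem);
    }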

It's an interesting approach that I frankly haven't seen anyone take on well yet.  Mostly what I'm seeing is interfaces moving into the cloud first as a great big chunk ... running in a Windows VM somewhere to start off with.  Then some decomposition into parts, and finally, maybe, a rational micro-services architecture.  Eventually, I think we'll get there, but I'm not so certain that the emerging victor in the interface engine competition is going to be one of the current leaders.  It's possible, but it sounds very much like one of the classic scenarios found in The Innovator's Dilemma.

I'm not entirely happy with the current situation.  I think cloud needs different thinking and a different way of doing things to handle interfaces. I really hope I don't have to build it myself.

   Keith

Thursday, September 6, 2018

What does it take to become a HealthIT Standards Expert?

As I think back to what it takes to become an expert in Health IT standards, I ponder.
  • Is 10,000 hours of study and implementation the answer?  No.  You can do it faster.
  • Is a really good memory the answer? No.  It helps, but I don't expect you to remember the HL7 root OID (2.16.840.1.113883 -- and yes, I wrote that from memory).  I do expect you to write it down somewhere.
  • Is passion the answer? Maybe, but it isn't sufficient by itself.
  • Is intelligence the answer? It helps, but again, isn't sufficient by itself.
  • Do you have to like the subject?  No.  I hate V2, but I'm still an expert.
  • Is persistence the answer?  Again, it helps, but still isn't enough.
What defines an expert?  An expert is someone with a high degree of skill or knowledge, an authority on the topic.  At least according to one reference standard.

There's a missing piece to all of this, and that's the willingness to share.  An expert has both a willingness to share their knowledge to help solve others' problems, AND a strong desire to acquire new knowledge when they cannot help.

If you have those two key things, it still doesn't make you an expert, but if you have those and time, you will eventually become one.

   Keith

Wednesday, September 5, 2018

Interface Load Estimation Rules of Thumb

Over the years I've developed a lot of rules of thumb for handling capacity planning for interfaces.  Loads are bursty, they have peaks and valleys.  The following rules of the road have worked well for me over 20 years:

Sum the Week and Divide by 5 to get your Worst Day of the Week
Sum the week (all seven days) and divide by 5 to get a daily load estimate.  Because you're spreading seven days of volume over five, the result over-estimates the true daily average by enough to cover your needs on the worst day of the week, and regardless of which day you start on, you can ignore workweek variation.

Shift Changes and Breaks Follow the Rule of 3
Shift changes and breaks can triple your loads (depending on user workflows).  Users want to finish up what they are doing before they leave.

Plan for 5x Peak Loads
Extraordinary events can place additional burden on your system.  Plan for a peak load of 5 times your average.  That will account for about 99.95% of your requirements, give or take.  That has to do with the special relationship between the Poisson distribution used to describe "job" arrivals, the exponential distribution of the time between them, and failure rates (e.g., the failure case where a job arrives when the system is already "at peak capacity").

Load varies Fractally by Time Unit
The last rule is simply this: The deeper you dig into it (month, week, day of week, hour of day), the more detail you can find in instantaneous load variation. More detail isn't better.  If getting more detail doesn't change your estimates ... stop digging.  It's just a rat hole at that point.
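
A quick worked example of the first three rules (the weekly volume is made up):

    public class LoadEstimate {
        public static void main(String[] args) {
            double weeklyMessages = 700_000;          // hypothetical weekly interface volume

            // Rule 1: sum the week, divide by 5 -> covers the worst weekday.
            double worstDay = weeklyMessages / 5;     // 140,000 messages/day

            double avgPerHour = worstDay / 24;        // ~5,833 messages/hour on that day

            // Rule 2: shift changes and breaks can triple the instantaneous rate.
            double shiftChangePeak = avgPerHour * 3;  // ~17,500 messages/hour

            // Rule 3: size for 5x the average to cover ~99.95% of bursts.
            double planningPeak = avgPerHour * 5;     // ~29,167 messages/hour

            System.out.printf("Worst day: %.0f msgs/day; shift change: %.0f/hr; plan for %.0f/hr%n",
                    worstDay, shiftChangePeak, planningPeak);
        }
    }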

I've been able to apply these rules to many systems that involve human workflows: from the number of times I'd have to visit a customer based on the number of computers they have (to estimate my costs for a service contract back in the days when I managed a computer service organization), to the number of dictionary compression jobs per day back when I was responsible for a compression services team working on spelling dictionaries, to the number of CDA documents a physician might generate over the course of their day.

   Keith



Tuesday, July 31, 2018

A Book Review for HealthIT Communicators


My wife works at a library, and often brings home books she enjoys reading.  Recently she told me about a book she was reading by Alan Alda about the work he was doing with Improvisation and communication in Science and Technology.  I grabbed a digital copy for the family including the audio-book version read by Alan.

So much of this book resonated with my experiences over the last decade+, starting with the event that gave me the reason behind the name of this blog.

Many years ago, I found myself speaking to an audience with a slide deck in front of me, behind a podium 20 feet from the front of a raised stage, in an auditorium with an orchestra pit.  After 15 seconds of trying to connect with the audience in that awkward position, I ditched the podium mike, grabbed the hand-held, and went to the front of the stage.

I was now back in a familiar setting, that of the theater, in which I had spent some three semesters with a fantastic director from my early college days.  And I responded to that setting with all of the training* I had been provided about speaking (projection, diction), connecting with audiences, and MOST of all, improvisation.  First thing I did was went off script**.

I used that entire stage as I talked (from down stage right) about how Cross Enterprise Document Sharing (XDS) would enable patients living and being cared for in Hartford (where we were meeting) who traveled to Danbury (crossing to stage left) to get access to their documents, traversing back to the center to reference my slides on the large backlit projection screen behind me (somewhat like a TED setup).  I could see the audience track me with their eyes from one side of the stage to the other.

That presentation was a turning point in my career in terms of how I approached speaking at public events. What I learned that day was what Shakespeare already knew: "All the world's a stage, and all the men and women merely players ..."  Since then, I no longer need the trappings of the stage, merely the impression that I have one, to apply that training I so thoroughly enjoyed.  Improv was the key for me, as it was for Alan, as he describes in his book.

This is an awesome book for technology communicators. Read it. Enjoy it. Apply it.  I have the same sense about this book as others related to me about my presentation that day: he nailed it.

   Keith

* For those of you who don't know me well, I am somewhat of an introvert.  Dealing with people in groups is exhausting, and I can spend hours on my own quite satisfied.  But I loved theater because it was a safe environment to go be other than my introverted self.  So I explain myself as being a well-trained introvert, and it was that early theater and improv training I had in high-school and college that I'm speaking of.

** I no longer script presentations, but I do prepare and rehearse them, especially if I've only done them once or twice.

Monday, July 23, 2018

Will Faxes never Die?

A very old fax machine
A question comes to me from a former colleague that I found interesting:

Can you convert a fax document into a CCDA for Direct Messaging?

Yes.  Here's how, at three different levels of quality:

Unstructured Document

This is the simplest method, but it won't meet requirements for 2014 or 2015 Certification.  It's still somewhat useful.

  1. Convert the image into a portable file format such as PDF or PNG.
  2. Create a CCDA Unstructured Document (see the sketch after this list).  Optionally, apply the IHE Scanned Document requirements to the content as well.
  3. Include the document in a Direct message.
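
For step 2, the guts of an Unstructured Document are small: the scanned content is just base64-encoded into the CDA nonXMLBody.  A minimal sketch (the file names are hypothetical, and a real document obviously needs the full CDA header and the template's required metadata):

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Base64;

    public class FaxToUnstructuredBody {
        public static void main(String[] args) throws Exception {
            // The fax page converted to PDF (or PNG) in step 1.
            byte[] scanned = Files.readAllBytes(Paths.get("fax-page.pdf"));
            String b64 = Base64.getEncoder().encodeToString(scanned);

            // The body of a CDA Unstructured Document: the image rides along as B64 text.
            String nonXmlBody =
                    "<nonXMLBody>\n"
                    + "  <text mediaType=\"application/pdf\" representation=\"B64\">" + b64 + "</text>\n"
                    + "</nonXMLBody>\n";

            Files.write(Paths.get("unstructured-body.xml"),
                    nonXmlBody.getBytes(StandardCharsets.UTF_8));
        }
    }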

Getting to Electronic Text

Getting the content to meet 2014 or 2015 requirements for certification means doing quite a bit of additional work.  First step is to get to electronic text.
  1. First, you have to apply text recognition technology to the image to turn it into electronic text.  This converts the image content into letters and symbols that the computer can recognize.
  2. From this, create a bare-bones narrative structure with headings and section content.  Apply some very basic Natural Language Processing (NLP) to recognize headings and section content.  Signals such as line spacing, paragraph formatting, and font styles are helpful here (and would often be included in a more-than-basic text recognition pass).  From here, you could create a Level 2 CDA (note: NOT a C-CDA yet) that would be MORE useful to the receiver.

Getting to Certification

After getting to electronic text, now you need to get to CCDA Entries.  It can be done; I've been there and done that, more than a decade ago.
  1. From the previous step, you now need to code the section headings to LOINC, and match the document content to an appropriate C-CDA template (knowing that you can also mix and match sections from other C-CDA documents into the base C-CDA CCD requirements).  At this point, you are at Level 2 with coded sections.
  2. So finally, you need to run some specialized NLP to recognize things like problems, medications, allergies, et cetera ... and THEN
  3. Convert the specialized content to match the C-CDA template chosen above.
And now, you COULD meet 2014 or 2015 Certification requirements.

Would I do this?

The question not asked, but which I will answer is:

Would you convert a fax document into a CCDA for Direct Messaging?

Probably not, AND certainly not for the structured content option.  Current NLP algorithms being what they are, you could probably get to about 95% accuracy with the structured markup, which means about 1 error in 20 items.  That's MULTIPLE structure recognition errors per page, NOT a level of accuracy appropriate for patient care. The level of effort involved in cleaning up someone else's data is huge, and the value obtained from it is very rarely worth the cost.  You are better off figuring out how to give your high-volume fax senders a way to send you something better.

I might consider implementing the Unstructured Document mechanism as a feature for providers that are getting faxes today, as many would find it of some use.  It's not really much more, though, than giving them an e-mail endpoint attached to a fax number, so again, it's of very little additional value.

Thursday, July 19, 2018

Sweat the Small Stuff

Small things can sometimes make a big difference.  The difference between an adequate piece of equipment and an excellent one can be huge.  Sometimes the things that you need to change aren't your equipment, but rather yourself.  That's more than just money, it's time and effort.  It's hard.  It's annoying.

The way that small things can make a big difference is when they provide a small but consistent improvement in what you do.  Keyboards for example.  Today I had to switch back to a crappy backup keyboard because the 5 and 7* keys on my Unicomp keyboard died.  I can already feel the change in my typing speed.  More than half my work is typing, and the better keyboard is the difference between 70 WPM and 75 WPM.  That's a 6.667% difference in speed.  It's small, I can live with it for a short period of time.

What will using the cheaper keyboard cost me?  Well, I don't spend all my typing time at top speed, so really, it only impacts 50% of my work. But, for that 50%, that's the most productive time I have, because the other time is meetings and overhead.  So now I'm losing not just 6.667% of my time, I'm actually missing it out of my most productive activity.

Amortized over a year, that's a whole month of my productive time that I somehow have to make up for.  There goes vacation.  All for lack of a minor improvement.  I'll probably get the Unicomp repaired (it's a great keyboard when it works), but I've got a better one on order with Cherry MX Blue switches.  They have a similar feel to the buckling springs in the Unicomp's IBM-style switches and are the current "state of the art" for typists as best I can tell.  And if one breaks, I can replace the dang switch, which I cannot do on the Unicomp without about two to three hours of effort.

A colleague and I were talking about how making personal changes can help you in your efforts.  His point was that many cyclists spend hundreds (or even more) to reduce the weight of their bicycles by a few more ounces to get greater hill-climbing speed.  He noted that losing a few pounds of personal weight can have a much greater impact (I'm down nearly 35 lbs since January, but my bike has never had a problem with hill-climbing, so I wouldn't know about that).

Learning to touch type was something I succeeded (if you can call a D success) in doing in high school, but never actually applied (why I got the D) until someone showed me that I was already doing it (but only when I wasn't thinking about it).  After discovering that, over the next six months, I went from being a two finger typist to four, and then to eight, and then to ten.  That simple physical skill has a huge impact on my productivity.

I now make it a point, when I learn a new application, to understand how to operate it completely without removing my fingers from the keyboard.  And I train myself to operate the applications I most commonly use that way, because it makes a small difference that adds up.  It's an almost meaningless small thing that greatly improves my productivity.  Yeah, I ****ing hated it when Microsoft changed the keyboard bindings in Office (and I still remap to some that I have long familiarity with), but I spent the time to learn the new ones.  It ****ed me off for six months, but afterwards it paid off.

Here's where this starts to come into play in Health IT.  We KNOW that there are efficient and inefficient workflows.  We KNOW that changing workflows is really going to yank people's chains.  How do we get people who want to keep doing things the way they always have to make even small changes?  And more importantly, what is going to happen to those non-digital-natives who have to adapt to an increasingly digital world when their up-and-coming colleagues start having more influence?

When we get rushed, we let the small stuff slip.  It's a little bit more time, a little bit more effort.  And the reward is great and immediate: we get more done.  But the small stuff has value.  It's there to keep us from making mistakes.  Check in your code before the end of the day ... but I'll have to take a later train ... and now your hard drive is dead tomorrow, and you have to redo the day's work.  Which would you rather have?

Sweat it.  It's worth the effort.

So, what small thing are you going to change?  And what big difference will it make?

   Keith

* 7 is pretty darn common in hash tags I use, and in e-mails I write.  That's pretty dang frustrating.