Friday, September 26, 2014

Finance and Twentieth Century Medicine

I'm moving to the country in a few days, to a small farm about fifty miles from Boston.  The process of buying a house is rather complex, sort of like getting healthcare.  The next time someone mouths off to me about how the financial services sector has interoperability down pat, I am going to laugh so very hard at them.

1.  We transacted most of our data exchanges through e-mail and fax, with some telephone and web mixed in.
2.  Every data exchange was paper or PDF based.  Structured data?  I can hear the underwriter evilly laughing in the background.  Yes, please, send me your structured data so we can print it out and transfer it into our underwriting forms manually.
3.  Get me a quote and fax it... (On the hazard insurance policy).

Sure, that is interoperable... as interoperable as 20th century medicine.

   Keith

P.S. What the finance sector has learned is how to use interoperability to take THEIR costs out of the system, not MINE.  We should remember that for healthcare too.

Thursday, September 25, 2014

All the Good Names are Taken

A recent thread on the HL7 FHIR List points to one of the real challenges in computer science.  You see, if you don't get to a particular space first, someone else grabs all of the good names.   For example, "namespace" happens to be already used as a reserved word in five different programming languages.

I propose a novel solution to this problem: use common dictionary meanings for terms first. Only when extreme precision is necessary would we disambiguate, and only after demonstrating that such disambiguation is necessary.  In these cases, we would subscript the name with the number of the relevant definition in a commonly used dictionary resource (like Wiktionary).  If no definition suited, only then would we coin a new term which did not otherwise exist in the dictionary.  We would then assign someone to add it to Wiktionary, effectively claiming the space.  In this way, maybe we could actually explain how standards work to the average C-level.
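To make the scheme concrete, here is a minimal sketch in Python (the rendering convention and the sense number are my own illustration, not taken from any actual Wiktionary entry):

```python
# Sketch of the proposed naming scheme: plain dictionary words by default,
# with a Wiktionary-style sense number appended only when disambiguation
# has been demonstrated to be necessary.

def term_name(word, sense=None):
    """Render a term: the bare word normally, the word with a subscripted
    sense number only when precision demands it."""
    if sense is None:
        return word
    # Unicode subscript digits stand in for typographic subscripts.
    subscripts = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
    return word + str(sense).translate(subscripts)

print(term_name("namespace"))     # the common dictionary meaning
print(term_name("namespace", 2))  # sense 2, once ambiguity is demonstrated
```

The point of the sketch: the subscript is the exception, not the rule, so most terms stay readable to the average C-level.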

    Keith

P.S. ;-)

Friday, September 19, 2014

The HL7 September Plenary

I spent a good bit of time over the past five days at the HL7 September Working Group and Plenary meeting, with a lot of different workgroups.

While my home is usually Structured Documents, I only spent two quarters with them, first on forward planning and next hearing about the DAF FHIR project (which I'll talk about a bit more later).  We also talked briefly about feedback on C-CDA from ONC's HIT Standards and Policy FACAs, and recent issues regarding the quality of C-CDA documents being produced.  I agreed to bring this up in the HL7 Policy Advisory Committee meeting later on Wednesday.

I spent a good quarter with InM and ITS talking about the Data Access Framework PSS to create Query and Response profiles, in part satisfying one of the gaps identified in IHE's white paper on the Data Access Framework (this is the link to the public comment version; the Final is to be published soon).  One of the challenges here is that DAF wants to develop profiles that eventually will take advantage of the C-CDA on FHIR project, but they want to do things sooner than that project will be ready, so that people can take advantage of the Query and Response profiles to test them.  I made the point that this needs to be coordinated across the other HL7 CDA/FHIR projects, and the feedback I got was that "That isn't our project".  This is a common misconception among folks who bring projects to HL7: they think they own the project.  The reality is that it becomes an HL7 project, and HL7 needs to do what it must to manage and coordinate ALL of the projects in its portfolio.  So, there will be some coordination there, and hopefully, we'll figure out how to do that properly.

Another good quarter was spent on QUICK, in which we talked quite a bit about my ONE negative comment on QUICK, which was the "Bad" ballot you can read more about in this post.  We traded a lot of thoughts about what QUICK is trying to do.  One of the challenges of this work is that they think some of the names of things in FHIR are actually misnamed when approached from a quality and/or clinical decision support perspective.  I think there are probably three or four things that QUICK needs to do to address these mismatches, including getting some change proposals on the FHIR agenda to address some of these naming issues.  After all, if FHIR is truly EHR focused, we need to recall that at least in one market (the US), both Clinical Decision Support and Quality Measurement are key features that have to be present.

I spent a quarter with the HL7 Policy Advisory Committee, in which we spent about half the time planning the Policy Summit to be held in early December, and the other half discussing how to respond to concerns raised by the HIT FACAs on C-CDA.  We already have many processes within HL7 to address such feedback, and HL7 members use these to get improvements into the standards pipeline.  For example, the Examples task force headed by Brett Marquard has already begun work on some of the examples that had been identified by the FACA.  Fortunately, we've been tracking these issues, but it might be nice if someone actually fed them more directly into HL7.  We'll be working on how to streamline that.

I spent a quarter with the Attachments workgroup, and we resolved some issues with esMD, but more importantly, Paul Knapp, chair of the HL7 Financial Management workgroup, showed up to report on what he has been doing with FHIR in the financial sector.  A while back I wrote a post about how Blue Button Plus and EOB data might be used to help reduce costs, but one of the outstanding issues has been the missing content standard for an EOB.  Building from the work that Paul has already completed with Claims and Remittances, we believe that FM could create (and Attachments would support) an EOB resource that could be used with Direct, Blue Button Plus, or any other transport.

Thursday morning I spent at the Payer Summit, giving payers a very high level view of HL7 standards, along with many other HL7 luminaries.  It wasn't the largest room, but it was certainly chock full of some very interested payers.  I wasn't able to stay for the full summit, but I heard many good things.  Also speaking at the Summit was Brian Ahier (@ahier on Twitter).

Finally, I spent my last quarter at the Working Group meeting with a number of HL7 and IHE members discussing the formation of a joint workgroup between IHE and HL7, preliminarily known as the Healthcare Standards Integration workgroup.  The IHE board has already approved this in principle, and we are following the HL7 Governance process to finalize the new workgroup, with the expectation of final IHE board approval.  Hopefully it will be in place before we complete the 2015/2016 Profile Selection process with several IHE Domains in October/November.

All in all, it was a pretty busy week, and I was quite happy to get home to finish packing for my big move to the country, a week from tomorrow.


Thursday, September 18, 2014

That takes guts

Normally I do this post Wednesday morning, but quite honestly I had day job and personal distractions (I'm moving in about a week) this week, so I'm doing it today.  Wednesday morning at the HL7 Plenary, the Godfather of Health Level 7, Ed Hammond, gives out the Ed Hammond awards, and I traditionally also give out an ad hoc award.  I do that not so much to compete with Ed (I hope I can do what he does when I reach that degree of tenure), but to continue the tradition.

Tuesday morning I saw a combination of ribbons on an HL7 member's badge that I found stunning.  They were "First Time Attendee" and "Co-chair".  When I asked further, I discovered that this person was a new co-chair of perhaps the most technically challenging workgroup, with one of the more difficult collections of people to manage (which is a compliment, not a critique).  The Security Workgroup is relatively small, but contains some of the top names in Health IT Security, and has always been a very challenging place to engage.  I leave that to my colleague John Moehrke, who has much more experience in this area.  I know enough about security to know that I'd rather defer to seasoned experts than try to do it myself.

So this combination of badges deserves special recognition, because while it takes guts as an HL7 first-timer to join the Security workgroup, it takes even more than that to be willing to co-chair the group.  An extra special thanks, and here we go ...


This certifies that 
Alexander Mense of HL7 Austria 


Has hereby been recognized for having the guts to take on a role as co-chair of the HL7 Security Workgroup

The FHIR Code

A guest post from one of the FHIR Chiefs: Lloyd McKenzie

A little over three years ago, when Grahame introduced the concept that would become the FHIR(TM) standard, he didn’t just have a set of technical ideas for how to better share healthcare information. He also had some fairly strong ideas about what we needed to hold as “important” as we pursued that new approach. The technical approach has evolved, in some places quite a lot. However, the underlying priorities have remained pretty consistent.

Principles are actually core to FHIR – or any standards effort. They drive what gets produced. They also guide the community. If the principles aren’t well understood or clearly expressed, it’s easy for a standard to drift and lose focus. It’s also easy for it to deliver the wrong thing. V3 had a really strong focus on “semantic” interoperability. We made great strides in that space. However, we sort of lost track of the fact we still needed technical interoperability underneath that. (And that ease of use was sort of relevant too . . .)

Some of those principles, such as “the 80%”, have been widely shared (though not always well understood). Others have found their way into presentations, in slides such as the FHIR Manifesto. However, we’d never really sat down as a project and written down exactly what the fundamental principles of FHIR were, or why we felt those principles were central to what FHIR was.

So the FHIR Governance Board (with review from the FHIR Management Group) has written down what we see as the “core principles” of FHIR – the FHIR Code, if you will. These are the underlying drivers that we feel should guide every design decision, every methodology rule, every step we take in deciding on scope, ballot timelines, etc. They can be found on the HL7 wiki.

I don’t think any of these principles will be a surprise to those who have been following the FHIR project. They pretty much all stem from the first principle:

FHIR prioritizes implementation 

Note that these aren’t hard and fast rules, but guidelines. You can’t say “I’m an implementer, I don’t like what you’re doing – therefore you’re violating FHIR core principles”. But they do reflect the spirit of what we’re trying to do and we’ll try to adhere to them as much as we can. (As well, we interpret “implementer” in the broad sense – we don’t only care about those who write code but about all those who use FHIR.)

The FHIR Code isn’t done though, because FHIR isn’t a top-down process. It’s about community (Grahame’s been reinforcing that a lot this week). And as I write this, I realize we may have missed a principle that should be added to the list. In any case, we want the principles to be reflective of the desires of the community – so we’re throwing them out to implementers and the broader FHIR community:

Do these principles reflect your vision for FHIR? Is this what should be guiding our decisions? Will this help us to keep our focus on the right things? Are they clear enough?

We’ll take your feedback (here, on the FHIR list, implementer’s Skype chat or any other means you choose). Then we’ll seek feedback as part of the next FHIR DSTU.

Tuesday, September 16, 2014

In what ways does it make sense to extend the Direct Project beyond those already defined by Meaningful Use?

The question above comes from 20 questions for HealthIT posted over on the HL7 Standards blog. You can probably guess my short answer, which is NO.  What you may not know are my reasons.

The Purpose of Direct

Let's start with the purpose of the Direct Project which you can find in the project overview:
The Direct project specifies a simple, secure, scalable, standards-based way for participants to send authenticated, encrypted health information directly to known, trusted recipients over the Internet.
Direct was promoted as being the on-ramp to Health Information Exchange.  As an on-ramp, Direct has technically succeeded.  It is certainly technically possible to use Direct to exchange information between providers, or with patients.  In execution, it has pretty much failed to deliver on those expectations.  These challenges aren't technical; they are organizational, and related to healthcare providers.  I don't want to rely on Direct (or anything else) for more until we have resolved the exchange issues between providers.

We need more than an On-Ramp

We need more than an on-ramp for exchange.  Direct as it stands is a one-way push.  As a push specification, it can only address known, trusted entities.  It cannot deal with exchange with unknown entities, and the developing trust framework does not have any way at present to deal with establishing trust in a near real-time way.

Another challenge with the Direct Project is that there's no real way to do dynamic, almost-real-time queries, and it is NOT ideal for handling other sorts of queries, even though it is feasible.  There is a small group of people who have promoted it for this purpose, but several attempts at creating a consensus body to further develop Direct to support query have not yet succeeded.

Meaningful Innovation

Direct was supposed to resolve a short term problem which, as it turns out, is much bigger than the technical issues it was supposed to solve.  At present, there are other innovative activities which are more promising than Direct (e.g., FHIR), which require attention.  I really want to stop trying to catch the train that's already left the station (e.g., the next stage of Meaningful Use), and spend more time on meaningful innovations in Healthcare IT.

If you had a choice to advance yesterday's compromise solution (and to be sure, Direct was exactly that), or to work on more forward looking forms of Health information exchange, which would you do?

Monday, September 15, 2014

HL7WGM Plenary

The HL7 Plenary session is an annual event in which HL7 members hear from folks outside the standards development space.  This year the topic was around data, with a focus on analytics, privacy and ethics.  First up was Dr. Richard Platt, who talked about the value of Mini-Sentinel and PCORnet, and the value of a standard data model to support the benefits of a learning health system.  His talk was good, but what I found unfortunate was that we still focus on claims data.  One of his key points was that clinical data used in the EHR is designed to meet the needs of clinicians, not computers.  And so we get the problem below, which physicians resolve quite readily, but computers do not.

Next up was Zoi Kolitsi, Ph.D., who talked about the balance between data protection and innovation.  The key points in her presentation were:
* Health and health data is special, even in the EU it receives an exception in legislation.
* Every time that eHealth comes up, the legal aspects are also brought up.
* Governance, identity, and privacy are important principles to build into data sharing.

After the break, Marc Overhage gave a great presentation on the challenge given to JASON (find the Golden Fleece).  He was quite quotable in his presentation.  For example, "if Interoperability is the problem, then architecture was the answer... not that anyone had ever thought of that before."  Or "the simple fix is to change the Universal Gravitational Constant..."

Here are his key challenges for interoperability:
* Maintaining privacy
* Misaligned incentives
* Competing priorities
* Same old problems
 - semantic variation
 - patient, provider and location matching
* Missing events model

Note that only one of these is technical (the missing events model is worth a whole blog post of its own).

While Mike Jennings from Walgreens made a good start talking about how Walgreens uses HL7 standards like CDA, Version 2 and Version 3, the rest of his presentation was little more than either an advertisement, or a rehash of all the reasons (which we already well understood) for using health data.

Last up was Ken Goodman who talked about ethics in interoperability.  His presentation was very thought provoking.  Fitting ethics into the process of standards development and software development is something that he thinks is critical.  I need to digest his slides a bit more.

All in all, it was a useful plenary.  I'd give it an 8.5 out of ten.  Next time, I think we should probably provide the speakers with the same warning about "advertising" that HL7 tutorial speakers get.  That might have made it a 9 or 10.


Friday, September 12, 2014

Stages of Standards Development

One of the most amusing things (because if I didn't laugh I might cry) about the Meaningful Use program is the way that executives who have never paid anything more than lip service to standards are now reporting as experts on their effectiveness (or lack thereof) and the related causes.  Reports out of the HIT Standards committee about the need for improvement of C-CDA are a perfect example of this.  However, the problems being reported almost certainly aren't the real problems that need to be addressed.

I have a theory about Stages of Standards Development that is fairly similar to Kohlberg's stages of moral development, but applied to standards and interoperability.

  1. We used this standard because we would get punished (or not rewarded) if we didn't.
  2.  We used this standard because you told us to.
  3. We used this standard because everyone else is using it too for this problem.
  4. We used this standard because we are actually paying attention to interoperability.

In the first stage, the use of the standard is to avoid punishment.  Application and understanding of the standard is low.  The minimum work is done to pass the tests, and no thought or effort is put into the use of the standard to do anything more than to comply with the requirement.

The second stage is not much different, other than that the user of the standard now recognizes authority, and may actually understand that there is a reason for being granted the authority.  So they may pay a little bit more attention to the standard, but don't really yet understand it.

The third stage is when the user of the standard looks around and sees that lots of other folks are using the same standard, and they could benefit from it as well.  In this stage, there are some they can learn from who are at more advanced stages and others who are not.  At this stage, they pay quite a bit of attention to learning as much as they can about the standard to make it work.

In the final stage, they stop worrying so much about the standard, and really start focusing on its intent and purpose, which is to enable interoperability between systems.  They stop expecting that slavish use of the standard will magically solve the interoperability problem for them, and start realizing that standards are a major component of interoperability, but that other things are needed to support it as well.

Meaningful Use will only take us so far over time.  Stage 1 and the 2011 criteria only enabled level 1 and 2 adoption behaviors.  Stage 2 and the 2014 (Release 1 and 2) criteria starts to enable level 3 adoption behaviors, and creates an environment where level 4 behaviors can emerge.  But they do nothing to support level 4 behaviors.  The REC program supported some higher level behaviors, but it too really only stopped at level 3.  The workforce education program may have contributed a bit more to higher levels.  The Beacon program certainly tried to support level 4.  However, most of the money for these programs is well past gone.

Interoperability is a journey.  Standards are a necessary and important tool along that journey.  But most don't even know how long that journey really takes.  We had RFC-822, and before that RFC-561, and even today, after more than four decades, e-mail is still not perfectly interoperable.  Remember Lao-Tzu: The journey of a thousand miles begins with a single step.  We've taken a few so far; let's keep going.



Thursday, September 11, 2014

ONC 2014 Edition Release 2

ONC figured out that it had a versioning problem with its third release of the Certification and Standards rule.  This one (originally entitled the 2015 Edition) has been renamed to the 2014 Edition, Release 2.

What's new?  Not a lot, actually.  As originally envisioned, there were plenty of changes coming to EHR vendors and users.  As written, it's a much smaller list of changes for everyone, which I summarize below.  A lot of what I complained about in the proposed rule is gone from the final rule.  Other issues have simply been put off until Stage 3 and 2017.  In their own words, ONC has "not adopted the Proposed Voluntary Edition. Rather, we have only adopted a small subset of the proposed certification criteria as optional 2014 Edition EHR certification criteria and made revisions to 2014 Edition EHR certification criteria that provide flexibility, clarity, and enhance health information exchange."

  • Split CPOE into separate criteria for ordering Medications, labs and imaging.
  • Decoupled content and transport capabilities for transitions of care
  • Shifted the “incorporation” capability into an updated “clinical information reconciliation and incorporation” (CIRI) criterion.
  • Adopted Version 1.1 of Direct Edge Protocols
  • Decoupled the transport and content capabilities of the VDT certification criterion.
  • Allowed "any method or standard" to be used to "electronically create syndrome-based public health surveillance information for electronic transmission" for ambulatory use.  The net effect here is to enable meaningful users to claim the capability for the purpose of incentives if they electronically transmit surveillance data to public health.
  • Included an optional set of data elements (patient demographics, provider specialty, provider address, problem list, vital signs, laboratory results, procedures, medications, and insurance) to support surveillance.  This optional list, on an optional criterion, basically serves the purpose of telling people the eventual direction they might go in a couple of years.
  • Discontinued use of the Complete EHR concept and Complete EHR certification
  • Created an ONC Certification Mark
There you have it, an even shorter summary of the short list of what ONC has added to certification in 2014.

I forgot to include Table 3 from the rule, which lists the differences between the various editions. It is quite useful
Table 3. Gap Certification Eligibility for 2014 Edition, Release 2 EHR Certification Criteria

| 2014 Edition Release 2 | 2014 Edition | 2011 Edition |
|---|---|---|
| 314(a)(18) Optional – computerized physician order entry – medications | 314(a)(1) Computerized physician order entry | 304(a), 306(a) Computerized physician order entry |
| 314(a)(19) Optional – computerized physician order entry – laboratory | 314(a)(1) Computerized physician order entry | 304(a), 306(a) Computerized physician order entry |
| 314(a)(20) Optional – computerized physician order entry – diagnostic imaging | 314(a)(1) Computerized physician order entry | 304(a), 306(a) Computerized physician order entry |
| 314(f)(7)* Optional – ambulatory setting only – transmission to public health agencies – syndromic surveillance | 314(f)(3) Transmission to public health agencies—syndromic surveillance (ambulatory setting only) | 302(1) Public health surveillance (ambulatory setting only) |
| 314(h)(1) Optional – Applicability Statement for Secure Health Transport | 314(b)(1)(i)(A) and 314(b)(2)(ii)(A) Transitions of care—receive, display, and incorporate transition of care/referral summaries; create and transmit transition of care/referral summaries | N/A |
| 314(h)(2) Optional – Applicability Statement for Secure Health Transport and XDR/XDM for Direct Messaging | 314(b)(1)(i)(B) and 314(b)(2)(ii)(B) Transitions of care—receive, display, and incorporate transition of care/referral summaries; create and transmit transition of care/referral summaries | N/A |
| 314(h)(3) Optional – SOAP Transport and Security Specification and XDR/XDM for Direct Messaging | 314(b)(1)(i)(C) and 314(b)(2)(ii)(C) Transitions of care—create and transmit transition of care/referral summaries | N/A |

* Gap certification does not apply for the optional data elements listed in 314(f)(7).



Tuesday, September 9, 2014

HL7 September Ballots: The Good, the Bad, and the Ugly

I should probably do this in reverse order, so that I end it on a positive note.  Yesterday was the close of the September ballot cycle.  I reviewed several ballots, but three stood out for me.  Here they are from the worst to the best.

The Ugly: Provenance

In the "we need a template for every possible combination of ways to specify author and document, section or entry" category, this one takes the cake.  What you really need are three to five templates to represent authorship or assembly, and some policies about how they must be used in documents which conform to requirements about provenance.  This specification tries to make templates to enforce what is necessary for every possibility, instead of telling people what needs to be done in different circumstances, and expecting them to be able to follow some very general and simple policies.  Most of the requirements it tries to enforce through templates are already covered by the mechanisms by which CDA interprets authorship.  This one could use a rewrite to dramatically simplify it.

The Bad: QUICK

This one did itself in by providing evidence that it had the worst reason to develop a separate standard.  The image captured below is from section 4.7 of the overview:


The point being that if there really is that much similarity, then why not improve FHIR to meet the needs, or develop extensions?  We don't need yet another information model.  Yeah, I know, "it's ... and you have different requirements than ...", but I'm really over that argument.

To their credit, they did ask: "Can (and should) QUICK and FHIR be harmonized into one model? If so, how could this be achieved and still meet the intent of both models? If not, how should the two models relate?"  

I'd get this one under FHIR Governance as quickly as possible and stop messing around with yet another UML model.

The Good: CQL

This isn't just good, it's actually astonishingly good.  In my early college years, for my Microcomputer class, I once wrote a mini-operating system instead of the simpler program suggested by the instructor to solve the problem.  He presumed I had submitted something else I'd written for another purpose, and didn't give me full credit until I complained to him about it.  I pointed out that it quite elegantly solved the problem he had set, and that when assembled (yes, it was in assembly), it did exactly what he wanted (which was to support ANSI terminal emulation).

CQL looks like a similar kind of component.  It hardly looks like something that was prepared for an HL7 ballot, but frankly, it really solves the problem that it was asked to solve, which was to develop a language that would support clinical quality and clinical decision support.  It reads well, and it has a certain elegance that things developed from a simple model have.  I really liked this one, and it got a very rare affirmative from me on its first ballot.  I suppose if I had been following it more closely I could have found some fault with it, but it actually looks pretty good.
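I won't reproduce CQL syntax here, but the kind of thing it expresses declaratively — for example, a measure's denominator criterion over clinical data — can be sketched procedurally in Python (the record shape, field names, and code value below are invented for illustration only):

```python
from datetime import date

# Illustrative only: the sort of population criterion a quality language
# states declaratively ("patients with a diabetes diagnosis whose onset
# falls within the measurement period"), written here as a plain filter.

def in_denominator(conditions, period_start, period_end):
    """Return the condition records that put a patient in the denominator."""
    return [c for c in conditions
            if c["code"] == "diabetes"
            and period_start <= c["onset"] <= period_end]

conditions = [
    {"code": "diabetes", "onset": date(2014, 3, 1)},
    {"code": "asthma",   "onset": date(2014, 5, 1)},
]
matches = in_denominator(conditions, date(2014, 1, 1), date(2014, 12, 31))
print(len(matches))  # 1
```

The elegance of a dedicated language is that the criterion reads as a statement about patients, while the retrieval, filtering, and date arithmetic shown explicitly above stay under the hood.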



Test mobile interoperability at the IHE Connectathon


Take the lead in mobile interoperability
IHE is focused on incorporating new mobile capabilities into our profiles to advance interoperability, and now is the time to choose these profiles for testing at the IHE North American Connectathon.



The industry is quickly moving toward an anytime/anywhere approach, in which care providers and individual patients need access to protected health information on lightweight, interoperable devices; keeping your organization relevant means meeting those increasing customer demands.
How will testing IHE mobile profiles advance product development?
Patient Demographics Query for Mobile (PDQm) is the first IHE profile to be compliant with FHIR. PDQm allows mobile devices and applications, such as medical devices and web-based EHR/EMR applications, to access patient demographics. PDQm is offered in Connectathon testing.
Mobile Access to Health Documents (MHD) Profile enables transportation of medical documents from a mobile device to EHRs or PHRs. MHD is offered in the New Directions testing service at the Connectathon.
Why is the inclusion of FHIR (with a RESTful interface option) important to IHE profiles?
FHIR is a comprehensive healthcare content and representation standard developed by HL7 that utilizes REST-based transport, enabling mobile healthcare applications, medical device integration, and flexible custom workflows. IHE infrastructure supports the FHIR standard as it matures, enabling interoperability and offering your customers ways to streamline and improve patient care and drive down costs.
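For a sense of how lightweight this is for a mobile client: PDQm amounts to an HTTP GET against a FHIR Patient endpoint, with demographics passed as search parameters. A minimal sketch in Python (the base URL is hypothetical; the parameter names follow FHIR's standard Patient search parameters):

```python
from urllib.parse import urlencode

def pdqm_query_url(base_url, **demographics):
    """Build a PDQm-style FHIR Patient demographics query URL.
    Any FHIR Patient search parameters may be passed as keywords,
    e.g. family, given, birthdate, gender."""
    # Sort parameters so the resulting URL is deterministic.
    return base_url.rstrip("/") + "/Patient?" + urlencode(sorted(demographics.items()))

url = pdqm_query_url("https://example.org/fhir",
                     family="Smith", given="John", birthdate="1970-01-01")
print(url)
# A mobile app issues an HTTP GET on this URL and receives back a
# bundle of matching Patient resources (JSON or XML).
```

Compare that with the SOAP envelope and HL7 v3 payload of the original PDQ transactions, and the appeal to mobile developers is obvious.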
HIMSS
33 W Monroe, Suite 1700
Chicago, IL 60603
www.himss.org

Friday, September 5, 2014

Is HealthIT Software Development an Informatics Job?

I'm working on a term paper for my Business of Health Informatics class.  This question cropped up while I was writing the paper.  My answer is fairly simple: it depends.  The developer writing straightforward middle-tier database access code, or implementing the service layer, or the user interface, probably doesn't need (but might benefit from) informatics training.

However, the designer and architect would certainly benefit from it.

So, is it an informatics job?  No for most.  And yes for some.  In general, I would say that among people writing Health IT software, those answering no outnumber those answering yes by two or even three orders of magnitude.

What do you think?


Thursday, September 4, 2014

Three new IHE PCC Profiles for Connectathon this Year

This crossed my stream this morning while on vacation, and I thought it valuable enough to share with others.  Thanks to George Cole of Allscripts for getting the word out on these profiles, which he edited and contributed greatly to.  I especially agree with George on RECON.  We updated this profile so that systems which supported "incorporate" in Meaningful Use could readily claim implementation of it.  I know that's most of you out there ;-)

   Keith

P.S. I'll be back after vacation finishes (today is the last day).
Hello everyone:

3 relatively new (or improved) profiles from IHE PCC for your consideration:

RECON – new and improved. Very useful for all of us to consider, so I’m hoping to see lots of systems sign up for this profile, Reconciliation of Clinical Content and Care Providers. Your systems probably reconcile medication lists as a part of Meaningful Use, so this profile may be almost a freebie. Additionally, the content specifics for managing item identifiers will be generally useful as the use of document exchange increases.

MCV – Multiple Content Views, allows one CDA document with styleCode values from a catalog of styleCode values to be displayed for different uses; one document with a complete and also a patient view is one scenario that is possible. This may also lead to a more consistent display of content across the community.

ROL – Referral / Order Linking, for handling the roundtrip of referrals and referral responses, with content specifications to facilitate linking all related documents.

These are all new, and therefore run the risk of being dropped without sufficient interest. They are also all of such general usefulness that I really hope all of our products are interested in testing all three.

Please share with others – I know there are many more people that might consider testing these profiles. Maybe some of our famous bloggers might take up this topic to spread the word ;)

Thanks,

--george