Wednesday, February 13, 2019

The ONC Information Blocking Rule



The Cures NPRM added a new section to the regulations that covers not just certified EHR technology, but also the behavior of organizations in possession of patient data with regard to information exchange, which I covered earlier this week in my #NoBlocking tweet stream.

The rule itself is a difficult read; the preface, I hope, is a lot easier to comprehend.  Remember, I read the regulation text first; I haven't gotten to the preface yet.  That gives me an opportunity to form my own impression of the regulations without anchoring myself on ONC's justifications for why they did what they did.  I'll be coming back to these again after going through the front matter.

Generally, as I understand this section, ONC is seeking to ensure that all parties who want to use APIs to enable access to patient data (with the patient's authorization) are treated fairly and equally, and that bars to competition are not raised on the basis of access to patient data.

By all parties, I mean that ONC applies this section (45 CFR Part 171) to health care providers, health IT developers of certified products (including API developers and technology suppliers), and health information exchanges and networks.  So, if you are a user, developer, reseller, or other third party that deals with deploying, maintaining, using, updating, or otherwise providing access to software certified to use APIs to facilitate the exchange of patient data, this applies to you.

The whole point of this rule, authorized under the Cures Act, is to prevent ANYONE from interfering with, preventing, or discouraging the exchange of health data.  ONC makes it clear that ignorance of the requirements isn't an excuse ("knows or SHOULD KNOW" is the specific wording).  Now, it's pretty clear that ONC understands that there are reasons to prevent access to data, either permanently or temporarily, and that some essential industry behaviors will be seen by some as "information blocking".  The very fact that organizations make money from healthcare is anathema to some, so ONC is working hard to explain the behaviors that aren't considered information blocking by listing out the various practices that are allowed.  This is a difficult task for an organization that doesn't engage in business, and I suspect that this part of the Cures NPRM will be the most commented on.

There is a list of things that organizations are permitted to do; anything not explicitly permitted is, at the very least, behavior that could be questioned with regard to "information blocking", and this is a case where ONC is the referee who gets to make the decision.

  1. Preventing harm,
  2. Promoting privacy and security,
  3. Recovering reasonable costs incurred,
  4. Avoiding infeasibility,
  5. Allowing for reasonable and non-discriminatory licensing fees, and
  6. Maintenance and updates.

Arien Malec presents an excellent sound track and interesting commentary on items 3 and 5 above, and a great follow-up on contractual requirements that highlights some of the most difficult parts of 45 CFR Part 171.


Many of the exceptions are pretty straightforward to understand.  The first of these falls clearly within the purview of providers under "do no harm", and is also aligned with HIPAA.  HIPAA and the no-blocking rule are also aligned on promoting privacy and security (my #2 above).  And lastly, allowing information providers to take information offline temporarily for maintenance and updates is something we all expect and should allow.


Exceptions 3-5 have to address API providers, but also those who might need to respond to a request for health information.  I think this section is having a bit of an identity crisis simply because it has to cover so much territory.  I think ONC needs to divest some parts of this section that are already covered under HIPAA (perhaps even by referencing that regulation).  That would help greatly, and in fact make it easier to maintain this section of the regulation.


Avoiding infeasibility addresses a host of issues and concerns.  Many API providers today, for example, place limits on the amount of information that can be accessed in a single request, which could be viewed as "information blocking".  These are essential constraints to keep the technology responsive (returning all of the lab results for the past ten years for a single patient who has been very ill could take a VERY long time, and prevent others from accessing data, depending on implementation).  But this section also has to address other sorts of requests for information, which is part of its identity crisis.  Here, ONC makes it very clear: using your access to electronic health information to prevent competition, or to force others into paying a fee to access it, doesn't make a request for access infeasible.
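The request-limiting behavior described above is why well-behaved clients page through results rather than demanding everything at once: a FHIR server returns a searchset Bundle with a `next` link for the following page. Here's a minimal sketch of a client loop that follows those links; the `fetch` callable stands in for a real HTTP GET against a hypothetical server:

```python
# Follow a FHIR searchset's 'next' links page by page. The fetch callable
# stands in for an HTTP GET returning a parsed Bundle (hypothetical server).
def fetch_all(fetch, first_url):
    entries, url = [], first_url
    while url:
        bundle = fetch(url)
        entries.extend(bundle.get("entry", []))
        # A searchset Bundle advertises its next page in Bundle.link.
        url = next((link["url"] for link in bundle.get("link", [])
                    if link.get("relation") == "next"), None)
    return entries
```

With paging like this, a server can cap each response (e.g., via the `_count` search parameter) without "blocking" anything; the client still gets the full result set, just in responsive chunks.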

So, health data is still a commodity under #NoBlocking, but no longer a strategic commodity that the "haves" can hold over the "have nots"; furthermore, treating it as such could be cause for a claim of "information blocking".  I'm pretty sure I'm in agreement with that general principle, but I'm also sure that this portion of the rule is going to be gone over (by me and others) with an extremely fine-toothed comb.

   Keith

Tuesday, February 12, 2019

My Comments on the HIPAA RFI

I thought I had missed my opportunity to comment on the HIPAA RFI, but I saw a tweet about someone posting comments today, so I'm breaking from comments on other stuff to put together my thoughts on HIPAA.

0. HIPAA is seen as a barrier rather than promoting sharing of information to support care.  It needs revamping and probably renaming to change the perception.

1. Most covered entities with a certified EHR can provide data immediately, and certainly within 24 hours, through a portal or other means of access, based on my own personal experience across more than a dozen institutions.  Payers also provide online access with nearly immediate results.  Paper records, or full records that are part of the entire designated record set and are needed for more detailed review (usually to address issues in adjudication or eligibility of benefits), take for **** ever.  Payers are worse on requests than providers in my own experience.  Plans that are causing challenges (from the patient perspective) are more difficult to get data from than those that aren't.  There are variances across providers that are generally based on technical and infrastructure capacity.

2. In general, it is quite feasible to get the most important data within 24 hours.  The entire designated record set is harder to acquire, because it can include diagnostic reports in paper form, X-rays, and other imaging requiring storage on external media instead of download.

3. Digital media and electronically available data should be available within 2 business days; paper or film original formats perhaps a bit longer, but no more than a week; and any data in the Common Clinical Data Set from a provider with a certified EHR should generally be available within 24 hours.

4. Ask instead what burdens a shortened time frame would relieve for patients, and the associated costs, and you will get a much more enlightened and appropriate answer.

5. I cannot speak to clearinghouses.

6. Yes, providers have challenges getting data for treatment, generally from other providers, often with the excuse "because HIPAA", although some also claim 42 CFR Part 2 prevents exchange due to overcautious risk tagging of data that MAY be covered by that regulation.

7. Yes, generally covered entities should be required to disclose PHI for verifiable treatment (without making "verifiable" hard; perhaps slightly more challenging than simple attestation, but not so challenging as to make this too difficult).  Verifiable proofs might be as simple as proofs based on an existing treatment relationship (e.g., claims), attempts to establish one (e.g., via a prior auth tx), a facsimile copy of a patient signature authorizing treatment, or established provider relationships.  I won't address P and O.

7a. This would improve care coordination and case management.  It would create some burdens to implement new requirements, but not insurmountable ones.  New administrative costs would eventually reduce burdens after implementation.

8. Not addressed.

9. Doctors should be required to disclose information to other doctors in a treatment relationship for the purposes of treatment, regardless of electronic billing status.  This would challenge some implementations in that they would have to have a process to identify providers not using electronic billing (e.g., due to lack of NPI).  That could readily be addressed by requiring NPI be obtained by all providers regardless of whether or not they engage in electronic billing.

10. A verbal request would be acceptable from a known entity, but an unknown entity should be verified in some way, along with the existence of a treatment relationship.  Signed patient authorization of treatment (or facsimile copy) would be sufficient evidence, but other documentation or proofs of a treatment relationship might also be accepted.  Appropriate policies would be needed.

11-13. Not addressed.

14. Interaction with other laws such as 42 CFR part 2 should be addressed.  It will be a problem.

15. Appropriate policies should be created, with known entities possibly getting easier treatment than unknown entities.  Providers should have a policy, and a means to document and enforce it.

18. Yes, this should be made more feasible and easier, and required.  I've seen requests sit for 30 days for these kinds of services.

19- end: Ran out of time.

The Cures NPRM

You will be getting a lot from me this week.  I'm going to start with the first part of the Cures Rule, prereleased by ONC, and I'm just going through the regulatory text rather than all the prefatory material.  That's next on my late-night reading list (1000 pages is a lot to read; this cuts it down by at least 80%).

To start off with, some 2014-related materials were removed from the rule.  That stuff no longer applies, so it's not material to any discussion.

Content Standards added include:

  • C-CDA Templates for Clinical Notes R1 Companion Guide
  • NCPDP Script Standard Implementation Guide, Version 2017071
  • 2019 QRDA Category I Hospital Quality Reporting Implementation Guide
  • 2019 QRDA Category III Eligible Clinicians & Professionals Quality Reporting Implementation Guide
API Standards (what we've all been waiting for) include:
  1. FHIR DSTU 2
  2. API Resource Collection in Health (ARCH) (more on this in a bit)
  3. Argonaut Data Query 1.0
  4. SMART on FHIR
  5. OpenID Connect
  6. FHIR STU3
  7. Consent2Share profiles (I'm guessing you are wondering what this is too).
Of these, 2 and 7 had me scratching my head.  #2 is pretty straightforward, just click the link.  It's a list of FHIR profiles that need to be supported according to the NPRM.  #7 was a downright bitch to locate, though I finally did.  I suspect that if John Moehrke had been awake, he could have found it a lot faster.  This is basically an STU3 profile of the FHIR Consent resource.  #6 and 7 basically mean that ONC is allowing STU3, but ONLY for the exchange of consent information, not allowing STU3 for anything not already covered by 1-3 above.
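To make the SMART on FHIR and OpenID Connect pairing in the list above concrete, here's a sketch of the first leg of the authorization code flow: the app sends the user to the EHR's authorization endpoint with a handful of query parameters. The endpoint URL, client id, redirect URI, and scope string below are hypothetical; the SMART App Launch specification is the authoritative source for the parameter list:

```python
from urllib.parse import urlencode

# Sketch of a SMART App Launch authorization request (authorization code
# flow). All concrete values here are hypothetical placeholders.
def smart_authorize_url(authorize_endpoint, client_id, redirect_uri,
                        fhir_base, state):
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile launch/patient patient/*.read",
        "state": state,       # opaque value; verify it on the redirect back
        "aud": fhir_base,     # the FHIR server the token should be valid for
    }
    return authorize_endpoint + "?" + urlencode(params)

url = smart_authorize_url("https://ehr.example.org/oauth/authorize",
                          "example-app", "https://app.example.org/callback",
                          "https://ehr.example.org/fhir", "xyz123")
```

The `openid` scope is what pulls OpenID Connect identity into the exchange; the rest is plain OAuth2, which is why these two standards travel together in the list.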

So, how did I do in my bet? If the proposed rule becomes the final rule, I lose it, but I picked the options correctly.  I still think my bet is sound, but I'll give it a 60% chance of being the final result, with the alternative being what we saw proposed this morning.  If I'd thought about it harder, I would have known that ONC couldn't have submitted a proposal with R4 because IT HADN'T been published at the time it was submitted to OMB, which still doesn't leave it out of the running.  The proposed rule text published by ONC does contain a reference to R4 in section 170.299.  My bet (and I'll know more tomorrow) is that the industry will push for R4 and the US Core specifications.

Even if they don't, ONC's changed the rules so that they (and you) can upgrade to newer standards, so that the rule can set both a lower and upper bar.

USCDI is something we are all going to have to become more familiar with.  I took my first look at it a year ago.  It's still not much more than the CCDS and a collection of C-CDA templates.  For the most part, there's nothing challenging here, it's just work for your engineers and validation staff.

There's additional guidance in the rule on how to use C-CDA (see the companion guide link in the previous section), and on how to handle Assessment and Plan, Goals, Health Concerns, and UDI (device identifier) data.  Expect test procedures to change (I imagine those folks are starting to think about what to change, but won't really get to work until the rule is final).  However, I don't expect a big lift here.

Section D: Conditions and Maintenance of Certification for Health IT Developers is brand-new content for certification.  It applies to the organizations whose products are certified, and how they must behave, rather than what the product must do.  To summarize:
400: This is authorized by the Cures Act.
401: As a developer, you will attest to ONC that you won't block.  This is a regulatory form that basically makes it possible for you to be fined under the False Claims Act if you say you won't and then you are found to do so (c.f. recent news), if I'm reading this correctly.  This is powerful stuff, which makes enforcement possible by more than just ONC revoking a certification; the DOJ can get involved.
402: Prove it, and furthermore, don't do what other guys did (see the last link), and make getting that single patient data export document out easy peasy, and keep your records for 10 years, and you have 2 years to handle that patient data export change.
403: What I call the "No Gag Rule" clause says that a health IT developer cannot restrict communication regarding:
  1. usability,
  2. interoperability,
  3. security,
  4. user experience,
  5. business practices,
  6. ways in which the product has been used.
Anything is fair game to be communicated, including proprietary, confidential, or IP content, when the communication is about one or more of the above and is:
  1. required by law
  2. regarding patient safety, to government agencies, accreditors or patient safety organizations,
  3. about privacy and security to government agencies,
  4. about information blocking to government agencies,
  5. about certification non-compliance, to ONC or an ONC-ACB.
Restrictions are permitted to or about:
  1. contractors and employees,
  2. non-user facing aspects,
  3. IP except that screenshots ARE permitted with certain reasonable provisions, EXCEPT for cases of premarket testing and development until the product ships, after which they are subject to prior rules
And developers must notify customers (within 6 months) and make effective (within a year) the fact that any existing contract provisions contravening the above are basically null and void hereafter.

404: This section requires a close read by your staff involved in revenue generation from APIs.  Basically, it says you have to disclose everything needed to use them, make clear how you are charging, be fair in the way you charge, not design charges specifically to be anti-competitive, and not require non-compete clauses, exclusivity, transfer of IP rights, or reciprocation.

There are some other clauses in here as well, regarding "allowing production use" quickly, and publishing endpoint addresses for systems "in a computable format". 
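As a sketch of what publishing endpoint addresses "in a computable format" might look like, FHIR's own Endpoint resource is one plausible shape; the rule doesn't prescribe any particular format, and every name and address below is hypothetical:

```python
import json

# Hypothetical example of a developer publishing its certified API
# endpoints in a computable format, using FHIR Endpoint-shaped entries.
endpoints = [{
    "resourceType": "Endpoint",
    "status": "active",
    "connectionType": {"code": "hl7-fhir-rest"},    # Coding; system omitted
    "name": "Example Hospital patient access API",  # hypothetical
    "address": "https://fhir.example.org/api",      # hypothetical
}]
directory = json.dumps(
    {"resourceType": "Bundle", "type": "collection",
     "entry": [{"resource": e} for e in endpoints]}, indent=2)
```

The point is simply that a machine can parse and crawl the directory, rather than a human scraping endpoint URLs out of PDFs.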

405: Just as FDA requires post-market surveillance and corrective action, ONC is now requiring ongoing testing and maintenance for certified product, requiring the developer to make a plan, share it with ONC, execute it, and report on the results.  
And you've got 2 years to upgrade your products to the new standards.

406: The developer must attest to all of the aforementioned, aforesaid, thereunto appertaining stuff at the time of certification and every six months thereafter, thereby becoming subject to additional enforcement opportunities by ONC and the DOJ.

I'm not going to spend much time on section 500, as it mostly pertains to the operations of ACBs, but there's some impact on health IT vendors to look at in this section: in adopting section D above, ONC now has a responsibility to verify that section D is being complied with.  This means that they may perform surveillance of developer behavior in addition to product behavior.
And, as a result of that surveillance, ONC may also BAN a developer from the certification program based on their behavior, giving them yet more enforcement capabilities.

Section D is going to be the most heavily discussed section in the industry, but I think in general it is fairly written (if complex).  There's definitely some work that could be done to make the language simpler and easier to read.  A lot of that may already be written in the preface (which I have yet to read; I like to form my first opinions by reading the regulations, rather than the regulatory body's justifications for them).

I'll be covering what I call the "No Blocking" part of this rule later this week, as well as the Patient Access rule released by CMS.  NOTE: I'm reading the prerelease versions that were available from ONC (and CMS) prior to publication in the Federal Register.  I'll read those versions in my second full read through, just to see if there are any significant differences.

As always, I'm not a lawyer, and this is NOT legal advice, and you get what you paid for.  This also is my own opinion (as are all posts written here), and not the opinion of my employer nor any other organization I might have some affiliation with.  

Keith

My read through tweet streams can be found by searching my stream for #CuresNPRM, #PatientAccess and #NoBlocking.



Getting to Outcomes for Precision Medicine

I've been thinking about precision medicine and standards a lot lately for my day job.  One of the things I have to work out is how to prioritize what to work on, and how to develop a framework for that prioritization.

I like to work by evidence (objective criteria) rather than eminence (subjective criteria), and so I need some numbers.  Which means that I need measures.  In any process improvement effort, there are three fundamental kinds of measures that you can apply.
Structure
Measures that demonstrate appropriate systems, structures and processes are in place to support the outcome.
Process
Measures that demonstrate that processes to support the outcome are being executed.
Outcome
Measures of the outcome.
If I were to rank these measurements in terms of value and ability to move the needle on the scale, you've got bronze, silver and gold (a scoring system that works both by weight and by monetary value).

But, in my world, they also rank by time over which they are implemented.  You have to first have the infrastructure deployed (which means it must be available with support for the needed technology), and then the workflow designed, and the processes implemented and executed before you will see changes in outcomes.

Let me give you an example.  If you want to measure the impacts of Sexual Orientation and Gender Identity on health outcomes, you first need the standards to record that information readily available (structure), then you need to have processes designed to support the capture of that information (also a structure measure), then those processes need to be implemented and executed (process measures), and then you need standards designed to exchange that data (structure measures), the software available (structure) and configured to perform that exchange (process), and then data exchanged that includes SOGI to occur (process).  And then you can get to outcomes.

This is a pipeline with 7 segments that need to be completed before we can use SOGI data in research (the desired outcome).  Working on any of these segments can proceed in parallel, BUT that's challenging to coordinate.  Some of the work has already been done at the federal level to promote use of the standards for federal reporting by HRSA Health Center grantees (a.k.a. Federally Qualified Health Centers), but it hasn't been included in requirements for other healthcare providers.  For example, birth sex (a signal that SOGI data is surely relevant) is part of the Common Clinical Data Set (CCDS, now known as USCDI) that MUST be able to be exchanged by EHR systems, but the SOGI data itself is NOT part of this definition; so while it may be available in an EHR, neither the processes to capture it, nor the C-CDA documents being exchanged, may actually do anything with SOGI data.

There are two missing segments in this seven-segment pipeline.  The necessary standards are there, the mechanism of exchange is there, the process for exchange exists, but implementing the workflow to capture the data isn't incented in any way by current regulations (which focus on USCDI exchange), nor is exchange of that data mandated.
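The pipeline above can be written down as a tiny checklist model, which makes the gaps easy to compute (the segment names paraphrase the text, and the completion flags are my reading of the current state):

```python
# Toy checklist model of the seven-segment SOGI pipeline; segment names
# paraphrase the text above, and the completion flags are illustrative.
pipeline = [
    ("standards to record SOGI available",  True),
    ("capture processes designed",          True),
    ("capture processes implemented",       False),  # not incented today
    ("exchange standards designed",         True),
    ("exchange software available",         True),
    ("exchange software configured",        True),
    ("SOGI data exchange occurring",        False),  # not mandated today
]
missing = [name for name, done in pipeline if not done]
```

Trivial as it is, a model like this is the starting point for measuring how much work remains before outcome measures become possible.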

So, I can identify where the work needs to be done, and I can assess (measure) even, how much work that is.

Now, compare this work to the effort associated with capturing data from wearables (e.g., blood pressure, heart rate, physical activity, blood glucose, sleep cycles, et cetera).  There's a lot more missing segments.  I know what that work is, and I can also estimate how much work there is here.

Now, suppose for the sake of argument that I needed to choose just one of these to work on.  How would I justify one over the other?  NOTE: This is completely for the sake of argument, and I've set up an arbitrary a/b scenario.  There's a reason for this.  If I can build a framework for making that decision, I can extend that framework in ways that allow me to make decisions about how to SPLIT the work and how much to invest in each.  That's just how math works.

So, how to get to prioritization?  I need a different sort of evaluation, and to measure in a different way.  What can I measure?  I can measure the quantity of research that might be enabled.  I could take a crack at the impacts of that research on patient care, or cost, but it would at best be a guess.  I could look at the impacts of the diseases that research affects.

And then there's the missing link problem.  If there are 2 of 7 links missing before I can get to outcomes, what's the value of working on broken link 1 or broken link 2?
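One crude way to value work on a partially broken pipeline is to discount the research value enabled by the total effort remaining across all missing links, since outcomes arrive only when every link is whole. A toy sketch, with every number made up purely for the a/b scenario above:

```python
# Illustrative opportunity scoring: value of the research enabled,
# discounted by the total effort to repair the missing segments.
# Every number here is invented for the sake of the a/b comparison.
def opportunity_score(research_value, missing_segment_efforts):
    total_effort = sum(missing_segment_efforts)
    return research_value / total_effort if total_effort else float("inf")

sogi_score = opportunity_score(80, [5, 3])             # 2 missing segments
wearable_score = opportunity_score(120, [8, 8, 6, 4])  # more gaps to fill
```

A ratio like this naturally extends from an a/b choice to a split: allocate investment in proportion to each option's score rather than all-or-nothing.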

This is risk assessment turned on its head; as my friend Gila once put it, it's opportunity assessment.  But the framework for risk assessment is still valid in this case; it's just that what you measure is different.

And now, I think I have the start of a sketch for the framework for my answer.

Time to do some more reading.

   Keith







Friday, February 8, 2019

HealthIT Interoperability isn't Simple

Let's start (as I'm often wont to do) with a definition:
Ability of a system or a product to work with other systems or products without special effort on the part of the customer. Interoperability is made possible by the implementation of standards. -- IEEE Standards Glossary
Now, let's think about what the phrase "special effort" means.  Thesaurus.com suggests "fall over backwards", "go out of the way", or "take special pains."  I might also suggest "do extra work", or "more than ordinary effort."  It goes back to the definition of what is normal or expected of a system.

In 2006, when my work in interoperability really started to take off, the ability to import data between competing EHR systems wasn't considered a normal function of the EHR system.  The ability for patients to access data via an application of their choosing also wasn't considered a normal function.  Over time, these capabilities became the norm as a result of three things:


  1. Changes in Health IT Policy over the course of ten years which incentivized the development of these capabilities in systems.
  2. Changes in Payment Policy made use of these capabilities the norm for providers who wanted to get as much money as they could from CMS.
  3. The development of standards (the second part of the definition) which enabled these capabilities.
Making a policy change isn't simple.  It's a process that takes years to complete, and a ton of people and effort.  CMS doesn't just get to decide to make a change, tell everyone, and then everyone complies.  There's a multi-year long process to making a rule, and that operates under the presumption that there's already legislation in place to make the rule.  If not, there's another several month long (at least) cycle just to get the legislation in place.  And rules have to leave time for people to implement them.  So, from legislation to implementation is at least a 3 year long process, and that's when the pressure is on (i.e., the early years of the HITECH Act).

Once implemented, we now begin a new stage of learning ... what works, what doesn't work, and what needs to change.  And then the process starts over again (and if well designed, it already had moved into the next stage).

Developing standards is equally complex.  The best work takes decades.  But e-mail ... took 60 years to get where it is today.  But the internet (HTTP, HTML, and the web) ... took 20 years to get where it is today.  But XML ... took 3 years from the inception of the project to become a standard; it started from a standard (SGML) that had been in existence for 20+ years before it, and is still being advanced today, some 20 years later.  But JSON ... is based on work started in 2006 and took another 7 years to be defined as a standard ... and is still being refined today.  CDA is almost old enough to drink as a standard, and was in the womb for three years before its first publication.  The inception of FHIR can be dated back almost a decade; the first published proposal (and there was a lot of work before that was published) goes back eight years.

Whether it's technology or policy, by the time you get there, the goal posts will have changed.  That's called progress.

   Keith






Tuesday, February 5, 2019

History of the Concern Act in CCDA

This particular post results from questions on the HL7 Structured Documents workgroup's e-mail list.  It basically boils down to a question of why the Condition and Allergy observations have an Act structure wrapping the Observation related to the Condition or Allergy that is described within.

The answers are below:
  1. What were the intended semantics associated with the effectiveTime on the concern act?
    This is the time that the reported problem became the concern of the provider.
  2. What were the semantics of the author on the concern act as opposed to the semantics of the author on the Problem Observation?
    One provider can be concerned about an observation made by another provider.  The author of the concern is basically the person saying, this is an issue I need to track.
  3. What were the semantics of the Priority Preference on the concern act as opposed to the semantics of the Priority Preference on the Problem Observation?
    Again, this is related to the distinction between the "concern" and the "condition".  A low-priority problem for the patient (e.g., a minor twinge in their tooth) might be a high-priority concern for the patient's dentist.
  4. How was the design of the Problem Concern intended to be used relative to representing “the patient’s problem list”?
    The provider's problem list for the patient can be viewed as the provider's record of the concerns they have regarding the health of the patient.


General Background

In 2005, shortly after CDA was introduced, HL7 and IHE collaborated on a joint project to develop templates to exchange a care record summary.  HL7 was to work on level 1 and 2 templates (documents and sections), and IHE was to work on level 3 templates (entries) to enable the exchange of this data (you might recall that ASTM was building the CCR around this time as well).  I was the editor for both the HL7 and IHE documents (and later the editor for the HITSP C32, and one of many editors for CCD and later C-CDA).

Also involved in this project from the HL7 and IHE sides was Dan Russler, at the time co-chair of the Patient Care workgroup in HL7 and, alongside me, of the IHE Patient Care Coordination workgroup.  Dan brought to the project extensive knowledge of the V3 structures and vocabulary that HL7 had been developing in Patient Care, and I was the go-to person for mapping this to CDA.  The project had been cooked up by the leadership of HL7 and IHE to basically get something done on care record exchange, because some of the IHE sponsors who had also been engaged with the CCR project were getting tired of it getting bogged down in ASTM.  Also involved from the HL7 and IHE sides was Larry McKnight, a physician from Siemens.

In HL7, we based a lot of our work on the Vancouver Island Health Authority's e-Medical Summary, developed for CDA Release 2.0 in 2004 (before CDA Release 2.0 had even finished publication).  That organization was the first organization to use Schematron for validation in CDA documents, something that continues to this day, 15 years later.  But the eMS didn't really get into details for level 3 templates for problems, meds and allergies (our key areas of concern in this project).  Fortunately, the HL7 Patient Care group had been working on vocabulary and modeling to describe Concerns about a patient.

If you look at the changes to CCD over time, you will see in C-CDA 1.1 that the Problem Concern Act uses CONCERN from ActClass in Act.code.  This is because it couldn't be used in Act.classCode: CONCERN hadn't been accepted as a V3 vocabulary term at the time CDA R2 was completed.  This resulted in part from long-running debates over the semantics of the CONCERN act, which didn't finally get resolved until 2014 and later, after the third and final push to complete this work.

The Problem Concern Act in C-CDA is the representation in CDA of the semantics of a Health Concern, which is distinct from the underlying problem that causes the concern.  The concern is about provider awareness of a problem, while the problem observation is directly related to the problem itself.  Consider this: I have cervical radiculopathy again, but this time in my left arm.  I told my physician about it on January 19th, but had symptoms going back a week before, and I was examined on the 22nd.  So the concern about my nerve pain should be dated 1/19, or perhaps 1/22 after he evaluated me, but the problem itself should have a start date somewhere around 1/11.  When this problem is resolved (say, in a few more weeks), the date it was resolved can be marked in the problem observation, and in the concern act when that resolution is reported to my provider (which will likely be after that).

The health concern itself can change over time, and acts as a wrapper around the relevant data: When I originally had the problem, we could have included not just the physician observation, but also subsequent diagnostic test data, the treatment plan and evaluation.  All of this development of the Health Concern act evolves from the Problem Oriented Medical Record pioneered by Dr. Larry Weed.

   Keith






Thursday, January 31, 2019

A QueryHealth for AllOfUs

Those of you who have been reading this blog for years are already familiar with what I've written in the past about Query Health.  For those of you who haven't, check the link.

Recently, I've been looking into All of Us, a precision medicine research program that takes the ideas of Query Health to the next level.  The original thinking on Query Health was about taking the question to the data.  All of Us has a similar approach, but instead of querying data in possibly thousands of information systems, it uses a raw data research repository to collect data, and a cloud-based infrastructure to support research access to the curated data prepared from the raw data sourced from thousands of information systems.  The best detailed description today is found in the All of Us Research Operational Protocol.

There's a lot to be learned from Query Health, and the first thing that any group putting together a large repository of curated and anonymized data has to address is certainly security and confidentiality.  Anonymization itself is a difficult process, and given the large data sets being considered, there's no real way to make the data fully anonymous.

Numerous studies and articles have shown that you don't need much to identify a single individual in a large collection of data collected over time.  A single physician may see 3-6 thousand patients in a year.  Put data from two of them together, and the overlap is going to be smaller.  Add other data elements, and pretty soon you get down to a very small group of people, perhaps a group of one, that combined with other data can pretty easily get you to the identity of a patient.

For Query Health, we had discussed this in depth, and regarded counts and categories smaller than 5 as something needing special attention (e.g., masking of results for small counts).  There was a whole lot of other discussion, and unfortunately my memory of that part of the project (over 8 years old now) is rather limited (especially since it wasn't my primary focus).
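Small-cell masking of the sort we discussed for Query Health is simple to express: any count under the threshold is replaced with a suppression marker before results are released. A sketch, using the threshold of 5 from the discussion above:

```python
# Small-cell suppression: mask any count below the threshold so that
# query results can't be narrowed down to a handful of identifiable people.
def suppress_small_cells(counts, threshold=5):
    return {category: (count if count >= threshold else "<%d" % threshold)
            for category, count in counts.items()}

masked = suppress_small_cells({"diabetes": 127, "rare-condition-x": 3})
```

Real implementations have to go further (e.g., complementary suppression, so masked cells can't be recovered from totals), which is where the "whole lot of other discussion" came in.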

Another area of interest is patient consent, and how that might relate to "authorization" to access data via APIs from other external sources.  A lot of this can be automated today using technologies like OAuth2, OpenID Connect, and, for EHR data, SMART on FHIR.  But as you look at the variety of health information data repositories that might be connected to All of Us through APIs, you wind up with a lot of proprietary APIs and a variety of OAuth2 implementations.  That's another interesting standards challenge, probably not on the near-term horizon for All of Us, considering their present focus.

It's interesting how everything comes back eventually, only different.  One of my ongoing roles seems to be "standards historian", something I never actually thought about.  I'm guessing if you hang around long enough, that becomes one of the roles you wind up adopting by default.