| Name | Organization |
| --- | --- |
| AJ Peterson (Co-Chair) | Netsmart |
| Amul Patel | Blue Shield CA |
| Andrei Azudin | Health Gorilla |
| Brett Marquard | Wave One Associates |
| Caroline Brozak | Children’s National Health Systems |
| Didi Davis | The Sequoia Project |
| Elaine Blechman | Smart health Records, Inc. (SHARE) |
| Ganesh Persad | Memorial Healthcare Systems |
| Genevieve Morris (Co-Chair) | Integral Health Strategies |
| Gurbinder Singh | Symcore Solutions |
| Kalyani Yerra | Premier Inc |
| Lisa Moon | Advocate Consulting |
| Meryl Bloomrosen | Premier healthcare alliance |
| Micky Tripathi | MA eHealth Collaborative |
| Santosh Jami | SAFE Health |
| Susie Flores | Care Directives |
| Tashfeen Ekram | Luma Health |
| Thomas Hallisey | Healthcare Association of New York State (HANYS) |
February 7, 2019
The meeting was largely organized around four topics:
1.) Uptime: To be measured on a monthly basis at the Gateway level and only unplanned downtime shall be computed. The percentages are as follows:
- 99.9% is the target for the Best Practice
- 99.5% is the target for the SLA
2.) Response times: Implementers should strive for response times that are commensurate with the Permitted Purpose that is being sent in the transaction. For example, Payment is not necessarily as heavily weighted as Treatment.
3.) Implementers should also include a Priority Level commensurate with the request. Example: patient encounters that are considered emergencies should be flagged as Urgent. If it’s possible to mark a Treatment transaction as urgent, implementers should include that flag in the transmission as well.
4.) Planned downtime: Referencing both the Gateway and Carequality Connection level. Ideally should occur during the following window:
- After midnight PST and before 6:00 am EST (i.e., the 3:00 to 6:00 am EST window)
For #1 above, we agreed that uptime largely refers to the Gateway level rather than the Carequality Connection level, with the caveat that there may not always be a broker between connection X and connection Y. The group also agreed that uptime should be measured on a monthly basis, depending upon the architecture. However, not all Implementers have the same mechanism in place to measure uptime: some may check every few seconds to a minute, while others may check at hour-long (or longer) intervals, so it would be hard to deploy a consistent and fair measuring stick. In the end, the group was leaning toward adopting the spirit of the Best Practice/SLA rather than prescribing the actual nuts and bolts of how to institute it.
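To make the two uptime targets above concrete, the following sketch converts each monthly percentage into the amount of unplanned downtime it permits (assuming a 30-day month for illustration; actual month lengths vary):

```python
# Allowed unplanned downtime per month implied by each uptime target.
# A 30-day month (43,200 minutes) is assumed purely for illustration.

def allowed_downtime_minutes(uptime_pct: float, month_minutes: int = 30 * 24 * 60) -> float:
    """Minutes of unplanned downtime permitted at a given monthly uptime percentage."""
    return month_minutes * (1 - uptime_pct / 100)

print(f"Best Practice (99.9%): {allowed_downtime_minutes(99.9):.1f} min/month")
print(f"SLA           (99.5%): {allowed_downtime_minutes(99.5):.1f} min/month")
```

So the Best Practice target allows roughly 43 minutes of unplanned downtime per month, while the SLA target allows about 3.6 hours.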
For #3 above, it was noted that some sort of legend/table should be created to make it more explicit how Implementers should choose a Priority Level for a request.
For #4 above, someone mentioned that the overnight hours in a large city’s ED may yield the highest and most urgent demand for clinical data. This is not necessarily a show stopper for overnight planned downtime, but more of an FYI that there isn’t really a “good time” for downtime.
January 31, 2019
We spoke at length about NIST levels and ultimately decided not to implement a policy requirement around this for the time being. The caveat here is that we may have to take another run at this pending the outcome of the TEF: if specific language about NIST is used in the TEF, we’ll have to wrap a policy around it. Regardless of possible requirements on NIST levels for FHIR, any CEQ transaction should denote what NIST level is being used in the transmission (if known).
There are also 2 Straw Proposals on the table:
1.) Uptime: To be measured on a monthly basis and only unplanned downtime shall be computed. The percentages are as follows:
· 99.9% is the target for the Best Practice
· 99.5% is the target for the SLA
2.) Response times: Implementers should strive for response times that are commensurate with the Permitted Purpose being sent in the transaction. For example, payment is not necessarily as heavily weighted as treatment. Moreover, if it’s possible to mark a treatment transaction as urgent, implementers should include that in the transmission as well (assuming, of course, that the flag will be deployed and reacted to in the proper fashion).
January 24, 2019
FHIR errors should use the OperationOutcome capability to return both human-readable and machine-processable information, with sufficient detail to allow the client to determine whether the error can be corrected on the client side (such as by retrying when the resource is busy) or is fatal. Implementers shall be permitted to obscure some of these details for security reasons. See https://www.hl7.org/fhir/operationoutcome.html
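A minimal sketch of the kind of OperationOutcome payload described above, with both a machine-processable issue code and human-readable diagnostics (the field names follow the linked FHIR specification; the diagnostic text and scenario are made up for illustration):

```python
import json

# Illustrative FHIR OperationOutcome: a transient "busy" error that the
# client may retry. "transient" is an issue-type code from the FHIR spec
# linked above; the diagnostics string is a hypothetical example.
operation_outcome = {
    "resourceType": "OperationOutcome",
    "issue": [
        {
            "severity": "error",
            "code": "transient",  # signals that a retry may succeed
            "diagnostics": "Resource is busy; please retry later.",
        }
    ],
}

print(json.dumps(operation_outcome, indent=2))
```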
There are National Institute of Standards and Technology (NIST) metrics that are currently deployed across a variety of industries (including healthcare) with respect to Identity Proofing. With Level 1 being the lowest and Level 4 being the highest, the NIST authentication levels are based on the degree of confidence needed to establish an identity. Here is a link to current standards: https://csrc.nist.gov/publications/detail/sp/800-63/1/archive/2011-12-12.
And here is a link to the new “draft” standard: https://csrc.nist.gov/publications/detail/sp/800-63/3/final. Pending the outcome of the TEF, we may have to introduce policy requirements around identity proofing and authorization. The group will continue this discussion at next week’s meeting.
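For quick reference, the structural difference between the two NIST publications linked above can be summarized as follows (a sketch only; consult the linked publications for the authoritative definitions):

```python
# The older SP 800-63-1 uses a single combined Level of Assurance scale,
# while SP 800-63-3 splits assurance into three independent dimensions.

OLD_SP_800_63_1 = {"LOA": [1, 2, 3, 4]}  # one combined scale, 1 lowest, 4 highest

NEW_SP_800_63_3 = {
    "IAL": [1, 2, 3],  # Identity Assurance Level (identity proofing)
    "AAL": [1, 2, 3],  # Authenticator Assurance Level (authentication)
    "FAL": [1, 2, 3],  # Federation Assurance Level (federated assertions)
}
```

This split is part of why the “old vs. new standard” question matters: under the new structure, a policy would have to name a level per dimension rather than a single level.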
January 17, 2019
The group spent more time discussing whether to implement SLAs or just Best Practices. In either case, we will begin compiling a list that will be categorized as one or the other down the road.
The first topic of discussion was how a receiving system should best respond when it receives a transaction that it is unable to process. Generally speaking, we agreed that the best response is the most useful and descriptive one, so that a correction can be made.
The following Straw Proposal will be put to the group for the next meeting:
- FHIR errors should use the OperationOutcome capability to return both human-readable and machine-processable information, with sufficient detail to allow the client to determine whether the error can be corrected on the client side (such as by retrying when the resource is busy) or is fatal. For security reasons, it might be wise to obscure some of these details.
We also discussed the need to implement policy requirements around authorization and identity proofing levels. Specifically, we talked about the following 2 questions:
1.) Should we deploy the “old” NIST standards or the “new” NIST standards?
2.) What NIST level should we deploy?
We also talked about backwards compatibility and the best way to deploy it. The group suggested having a separate URL for each version. It was stated that Implementers should definitely list what version(s) they support in their Capability Statement.
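The separate-URL-per-version suggestion can be sketched as follows (the base URLs and FHIR version numbers are hypothetical examples, and the CapabilityStatement is trimmed to the fields relevant here):

```python
# Sketch: one base URL per supported FHIR version, each advertising its
# version via the server's CapabilityStatement. URLs are hypothetical.

endpoints = {
    "3.0.2": "https://example.org/fhir/stu3",
    "4.0.1": "https://example.org/fhir/r4",
}

def capability_statement(fhir_version: str) -> dict:
    """Minimal CapabilityStatement fragment declaring the FHIR version served."""
    return {
        "resourceType": "CapabilityStatement",
        "fhirVersion": fhir_version,
        "implementation": {"url": endpoints[fhir_version]},
    }

for version in endpoints:
    print(capability_statement(version))
```

A client can then fetch each endpoint’s CapabilityStatement to discover which version it speaks before issuing queries.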
January 10, 2019
We continued our discussion about whether to incorporate any specific Service Level Agreements (SLAs). The group continued to struggle to find a balance between what is practical and doable and what is fair to our Implementer community and their end users. At this time, we are concentrating on defining some Best Practices in the hopes that they can one day become actual SLAs. We would also like to create an appropriate adjudication process (Dispute Resolution) for complaints.
We asked for responses from the group regarding Interoperability Best Practices in lieu of creating SLAs at this time. An example of a best practice is not having a maintenance window every business day. We are looking for examples of both Encouraged and Discouraged Behaviors. This feedback will be disseminated to the workgroup for the next meeting and analyzed by the group.
We will also continue talking about Provenance (the metadata, or extra information about data, that can help answer questions such as when and by whom the data was created) at a future meeting. We will be exploring putting policy requirements around this as described in USCDI. Putting specific policy guidelines around Data Provenance will cut down on duplicative data.
December 20, 2018
** What Network Hygiene / SLAs should we consider?
The group was hesitant to establish definitive metrics. Instead, we focused on defining what the “best practices” are for an Implementer. Specifically:
· Policy for notifications when a vendor discovers that its system or a connection is down
· Policy for publishing planned maintenance windows in the Directory
– Agreed to use the Conformance Statement for this
We also discussed the challenges associated with implementing uptime metrics. However, we did think it would be wise to explore the operational and technical challenges of being an Implementer and to try to associate data with those challenges. Because the data would have to be identified, collected over a period of time, and reviewed, a phased approach would be necessary if we were to travel down this path.
In the end, we agreed to start with a “best practices” approach (e.g., do not “turn off” the system every night), and Rob from Epic would have his team forward us other examples of best practices.
** Hit Rates – identified the need to control volume (no query blasting all patients every day) and efficiency of the network.
Perhaps we can deploy hit-rate metrics by Implementer (a reporting requirement) that allow each Implementer to report how many hits per day it is seeing.
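The per-Implementer hit-rate report floated above could be as simple as counting queries per (Implementer, day) pair. A minimal sketch, with made-up Implementer names and log entries:

```python
from collections import Counter
from datetime import date

# Hypothetical query log: one (implementer, day) entry per inbound query.
query_log = [
    ("ImplementerA", date(2019, 1, 10)),
    ("ImplementerA", date(2019, 1, 10)),
    ("ImplementerB", date(2019, 1, 10)),
    ("ImplementerA", date(2019, 1, 11)),
]

# Aggregate hits per Implementer per day for the report.
hits_per_day = Counter(query_log)
for (implementer, day), hits in sorted(hits_per_day.items()):
    print(f"{implementer} {day}: {hits} hits")
```

Sudden spikes in a given Implementer’s daily count would be the signal for the “no query blasting all patients every day” concern noted above.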
We agreed to put SLAs for Implementers and RLS in the parking lot (to be explored after the TEF is released and we have a chance to digest its meaning and implications):
1.) SLAs for response times for real time queries
2.) SLAs for response times for planned queries
3.) SLAs for query initiators
4.) SLAs for query responders
5.) SLAs for RLS
December 6, 2018
We agreed to the following order for our Draft Goals:
1. Principles of Trust
- Patient Permission
- Permitted Purposes in alignment with the draft TEF and ongoing alignment, to the extent possible, with future iterations of the TEF.
- Permitted Users
- Data Integrity
- Policy Assertions
- Non Discrimination – Policy Assertion Acceptance
- Non Discrimination – Access Policies
- Responses for “Access Denials”
2. Access controls
- Workflows around defining specific users
- Workflows around defining originating system
3. Consider federal initiatives such as TEFCA and 21st Century Cures, and, to the extent reasonable, align Carequality policies with these initiatives to avoid later re-work.
4. Develop specific policy requirements around Evidence of Compliance
- Considerations for non-production testing and validation
- Pre-production validation
- Quarterly and ongoing interoperability confirmation
5. Consider defining the minimum Resources supported for each Permitted Purpose deployed by a group of Implementers.
- Example – if you’re a payer, who wants to exchange data for Permitted Purpose “A”, then you need to support, at a minimum, Resources “X, Y, and Z” in order to participate in the Carequality ecosystem.
- Responses to Unsupported Resources
For the next meeting on Dec 13, we’ll be referring back to the current IG, looking at the Policy Assertions listed on pages 15-22, and applying them to a FHIR Ecosystem.
November 15, 2018
This was the kick-off meeting where the group introduced themselves to each other. We also discussed what role Carequality plays in the interoperability world and distributed the Workgroup Charter and FHIR Use Case proposal to the group.
November 29, 2018
The Workgroup reconvened after not meeting during the week of Thanksgiving. The group officially gave its blessing to the Use Case Proposal and Charter. We also prioritized the draft goals as outlined in the charter.
December 13, 2018
Consensus from the group was that the Policy Assertions listed on pages 15-22 of the Query Based Document Exchange IG are fine and the list does not need editing to create a Carequality FHIR Ecosystem at this time. The caveat is that we may need to circle back to this list pending the outcomes of the TEF and SMART on FHIR. Marty Prahl (SSA) informed the group that his organization needs specific authorizations and has many requirements as they relate to consent.
Lisa Moon (Advocate Consulting) expressed concerns over patient access policy education for organizations that are on the framework. As we expand the Use Cases beyond treatment, payment, and operations, we may need more guardrails with respect to consent in order to make queries.
We may need to flesh out section 3.5 SLAs: what are best practices and good network behavior? Hans Buitendijk stated that we may be able to deploy something around volume per time unit (e.g., not every 2 seconds). We may start with a recommendation and then implement a rule. Dave Cassel stated we should circle back with straw proposals on this topic at the next co-chair meeting.