Integration & Interoperability

Interoperability is the means by which two or more systems can exchange and use data and information across multiple platforms. Standardized solutions (patterns) are used to exchange data between systems in a meaningful way. Interoperability occurs on multiple levels within and between healthcare systems, and each level has specific challenges and needs that must be addressed to achieve interoperability. There are three main levels of health information technology interoperability that we strive to achieve:

Foundational interoperability allows data exchanged by one information technology system to be received by another. There is no requirement for the receiving information system to interpret the data. This might involve sending an image, free text, or a PDF document.

Structural interoperability defines the structure or format of the data exchange (syntax), that is, the message format standards. It is used when the data being exchanged is standardized and uniform between the sending and receiving systems, so the clinical or operational meaning of the data remains intact. Structural interoperability ensures that data exchanged between systems can be interpreted at the data field level.

Semantic interoperability is the highest level of interoperability; it delivers shared meaning between different systems. This approach leverages the structure and degree of codification of the data, including vocabulary, which allows the receiving system to interpret the incoming data. In most cases, two different systems do not use the same terminology. For example, one system may define a blood test as Fasting Glucose while another defines the same test as Fasting Blood Sugar. Semantic interoperability provides the vocabulary translation between systems, making meaningful data exchange possible between disparate EHR systems, business information systems, medical devices, and mobile technologies.
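As an illustration of the vocabulary translation described above, the following minimal Python sketch maps the two local test names from the example to a single shared concept; the LOINC-style code and all of the structures shown are illustrative assumptions, not a mandated mapping or any particular product's API.

```python
# Minimal sketch of semantic vocabulary translation between two systems.
# The local test names come from the example above; the LOINC-style code and
# all structures here are illustrative assumptions, not a specific system's API.

SHARED_VOCABULARY = {
    # local term (varies by system) -> shared concept
    "Fasting Glucose":     {"code": "1558-6", "display": "Fasting glucose [Mass/volume] in Serum or Plasma"},
    "Fasting Blood Sugar": {"code": "1558-6", "display": "Fasting glucose [Mass/volume] in Serum or Plasma"},
}

def normalize_result(local_name: str, value: float, unit: str) -> dict:
    """Translate a locally named lab result into the shared concept so the
    receiving system can interpret it without knowing the sender's terminology."""
    concept = SHARED_VOCABULARY.get(local_name)
    if concept is None:
        raise ValueError(f"No shared concept mapped for local term {local_name!r}")
    return {"code": concept["code"], "display": concept["display"], "value": value, "unit": unit}

# Two systems using different local names yield the same shared concept.
print(normalize_result("Fasting Glucose", 92, "mg/dL"))
print(normalize_result("Fasting Blood Sugar", 92, "mg/dL"))
```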

Our integration patterns are a set of standardized tools and approaches that, when applied consistently across the enterprise, help achieve interoperability. Integration patterns are the key to conformity with standards and successful interoperability. They simplify overall interoperability: they are abstract enough to apply to most technologies, yet specific enough to offer practical guidance to architects and designers. Patterns are also used to define the vocabulary (semantics) for developers. Application integration patterns capture commonly used and observed solutions in Enterprise Application Integration (EAI), along with best practices for integrating applications and data, implementing workflows, and automating processes involving human interactions. The integration patterns can be implemented using any of the four data-focused and process-focused application patterns. These different pattern designs offer the solution flexibility needed to meet the specific business needs of the processes being automated.
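To make this concrete, the sketch below shows one widely used EAI pattern, a content-based router, applied so that every message reaches the right endpoint; the message types and destination handlers are hypothetical examples, not a specific enterprise's interfaces.

```python
# Minimal sketch of a content-based router, one common EAI integration pattern.
# Message types and destination handlers are hypothetical.

from typing import Callable

def deliver_to_lab_interface(message: dict) -> None:
    print("Delivered to lab interface:", message["id"])

def deliver_to_pharmacy_interface(message: dict) -> None:
    print("Delivered to pharmacy interface:", message["id"])

ROUTES: dict[str, Callable[[dict], None]] = {
    "lab_result": deliver_to_lab_interface,
    "medication_order": deliver_to_pharmacy_interface,
}

def route(message: dict) -> None:
    """Inspect message content and forward it to the appropriate endpoint."""
    handler = ROUTES.get(message.get("type", ""))
    if handler is None:
        raise ValueError(f"No route defined for message type {message.get('type')!r}")
    handler(message)

route({"id": "MSG-001", "type": "lab_result", "payload": {"test": "Fasting Glucose"}})
```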

In this system integration framework, we recommend supporting three Sharing Models because the principles are the same and it is relatively simple to implement more than one model to accomplish multiple objectives. The first model is Direct Push, where clinical content in the form of documents and metadata is sent directly to a known recipient, or published on media for delivery. In Centralized Discovery and Retrieve, a centralized locator is used to discover the location of documents, enabling retrieval of each document from a custodian who has registered its existence with the centralized locator. Federated Discovery and Retrieve is a collection of peer entities that can query each other to locate documents of interest, followed by retrieval of specific documents. These models share a common definition of a document and of the metadata describing documents, folders, submission sets, and document associations. Each requires some level of governance structure in order to operate. The centralized model requires repositories, while the direct push and federated approaches involve a detailed directory of participating entities; this directory is used to ensure that push or query transactions are sent to the proper place. These models include strong support for authenticity and encryption on transport. Privacy requirements vary, especially between Direct Push, where privacy policy is generally determined prior to initiating the action, and the discovery mechanisms, where privacy policy is most often determined prior to responding to the request. Although the issues that need to be resolved through governance are largely the same, resolutions vary depending on the model chosen.
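The sketch below contrasts the Direct Push and Centralized Discovery and Retrieve models over a shared document-plus-metadata definition (the federated model behaves like the centralized one, except that peers query each other instead of a single locator); the classes, identifiers, and URL are hypothetical placeholders.

```python
# Minimal sketch of two sharing models over a common document/metadata definition.
# Classes, identifiers, and endpoints are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class DocumentEntry:
    document_id: str
    patient_id: str
    doc_type: str
    custodian_url: str  # where the full document can be retrieved

# Direct Push: content and metadata are sent straight to a known recipient.
def direct_push(entry: DocumentEntry, content: bytes, recipient_inbox: list) -> None:
    recipient_inbox.append((entry, content))

# Centralized Discovery and Retrieve: custodians register document metadata with
# a central locator; consumers query the locator, then retrieve from the custodian.
class CentralLocator:
    def __init__(self) -> None:
        self._registry: list[DocumentEntry] = []

    def register(self, entry: DocumentEntry) -> None:
        self._registry.append(entry)

    def find(self, patient_id: str) -> list[DocumentEntry]:
        return [e for e in self._registry if e.patient_id == patient_id]

locator = CentralLocator()
locator.register(DocumentEntry("doc-1", "patient-42", "discharge-summary",
                               "https://custodian.example/docs/doc-1"))
for entry in locator.find("patient-42"):
    print("Retrieve from custodian at:", entry.custodian_url)
```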

To accomplish this, we use an abstraction layer that translates a high-level request into the low-level commands required to perform the operation. The most common abstraction layer is the Application Programming Interface (API) between an application and the operating system, which includes stored procedures, services, executable code, and dynamic link libraries. The software architecture of the system consists of the large-grained structures of the software, describing the components of the system and how those components interact at a high level. Those components are then decomposed into smaller pieces to facilitate their development as sprint backlog items. Logically related components and functions are grouped together to gain coding efficiency and to test related functionality. Service calls can then be orchestrated into workflows that constitute applications, parts of an application, or simply higher-level function calls. This data abstraction technique using APIs lets us leverage physical data, no matter how it is structured, as new, logical schemas that exist only in middleware, creating a common data layer that architects can restructure as needed rather than making costly changes to the physical database or core services (a minimal sketch of this idea follows the list below). To accomplish this, we:

  • Define and use an integrated Computer-Aided Software Engineering (CASE) technology environment, including tools, techniques, and end-user participation in the total rapid application development process, to capture requirements, provide requirements traceability, construct and generate source code, and produce executable code modules.
  • Establish the detailed system architectures to include: system schematics, including system and subsystem performance-based descriptions, and key interfaces between them.
  • Define hardware and software specifications to include: sizing and performance requirements, system and subsystem interface requirements, and systems control requirements.
  • Design database and file structures to include: definitions of file characteristics, file layouts, data dictionary entries, file indices for each subsystem and database schema and subschema.
  • Finalize input and output design to include: data flows, data dictionary entries, dialogue specifications and lists of all inputs and outputs by subsystem.
  • Define special design considerations to include: network design approaches; teleprocessing design specifications; data control, security, and audit procedures; archiving of historical data, purging of current data, and data entry criteria; scheduling; disaster recovery; special quality assurance factors; and configuration control requirements.
  • Define program specifications to include: detailed processing logic for each module, data dictionary entries for parameter data and a list of compile and load units for each design unit and their component modules.
  • Identify, define, and design capacity requirements and any associated limitations. Specifically, coordinate with the customer to identify facility limitations and considerations during the design phase.
  • Apply software development processes of, or equivalent to, the stated Institute of Electrical and Electronics Engineers (IEEE) Standards, or the Software Engineering Institute (SEI) Capability Maturity Model (CMM), Level III or higher.
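The following minimal Python sketch illustrates the data-abstraction idea described before the list: a high-level API call exposes a logical schema that exists only in middleware, while the physical layout stays hidden. The table, column names, and in-memory store are hypothetical.

```python
# Minimal sketch of an API-based data abstraction layer: callers see a logical
# schema while the physical layout stays behind the middleware mapping.
# The table, column names, and in-memory store are hypothetical.

PHYSICAL_PATIENT_TABLE = [
    {"pt_id": 1, "nm_last": "Doe", "nm_first": "Jane", "dob_yyyymmdd": "19800214"},
]

LOGICAL_SCHEMA = {
    # logical field -> physical column
    "patient_id": "pt_id",
    "last_name":  "nm_last",
    "first_name": "nm_first",
    "birth_date": "dob_yyyymmdd",
}

def get_patient(patient_id: int) -> dict:
    """High-level API call: returns a record shaped by the logical schema."""
    for row in PHYSICAL_PATIENT_TABLE:
        if row["pt_id"] == patient_id:
            return {logical: row[physical] for logical, physical in LOGICAL_SCHEMA.items()}
    raise KeyError(f"No patient with id {patient_id}")

# Workflows orchestrate calls like this one; if the physical layout changes, only
# the mapping in middleware changes, not the callers.
print(get_patient(1))
```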

Implementation

Change Management 

To support the change management function of the business process re-engineering effort, we leverage Kotter's Eight-Step Change Management Process to guide the building of the change management plan, addressing the human factors and industrial engineering considerations that give a new system implementation its greatest chance of success. The eight steps are:

  1. Increase urgency – inspire people to move, make objectives real and relevant.
  2. Build the guiding team – get the right people in place with the right emotional commitment, and the right mix of skills and levels.
  3. Get the vision right – get the team to establish a simple vision and strategy, focus on emotional and creative aspects necessary to drive service and efficiency.
  4. Communicate for buy-in – Involve as many people as possible, communicate the essentials simply, and appeal and respond to people's needs. De-clutter communications – make technology work for you rather than against you.
  5. Empower action – Remove obstacles, enable constructive feedback and lots of support from leaders – reward and recognize progress and achievements.
  6. Create short-term wins – Set aims that are easy to achieve – in bite-size chunks. Manageable numbers of initiatives. Finish current stages before starting new ones.
  7. Don’t let up – Foster and encourage determination and persistence – ongoing change – encourage ongoing progress reporting – highlight achieved and future milestones.
  8. Make change stick – Reinforce the value of successful change via recruitment, promotion, and new change leaders. Help weave change into the culture.

Business Process Reengineering (BPR)

Our BPR approach rethinks and redesigns current business practices in both technical and behavioral terms. Change can be threatening to individuals and organizations, yet successful adaptation to change is crucial to the success of any initiative. We use a five-step approach to business process re-engineering.

  1. Empowerment of community: Empowerment means increasing the power or capability of individuals to do their work. We accomplish this by nurturing thriving communities of practice.
  2. Providing information: The right information at the right time helps people perform their work, and for most information systems this is the primary purpose. These systems provide different types of information: some provide essential business information, while others provide potentially useful information in different ways.
  3. Providing tools: Empowering the community is only possible by providing the right information and the right tools. For example, when bottom lines are reached or plans change during a negotiation, planning analysis must recalculate the overall project result; at that moment, the right tools are essential.
  4. Providing training: Information is needed to support people's work, and it is also important for training purposes.
  5. Saving time: Relentlessly finding efficiencies while improving effectiveness using a lean approach.

Training 

Our approach is based on the industry-standard ADDIE model (Analyze, Design, Develop, Implement, and Evaluate) to develop a full range of learning content and materials, including training strategies, plans, needs assessments, and delivery tools. The five steps of the ADDIE model are divided into two phases: the Planning Phase, when training needs are assessed, materials are developed, and the training plan is established; and the Implementation Phase, when training is conducted.

Analysis – During this phase of the methodology, MicroHealth works with customer representatives to confirm training session audiences and the required training content. The audience and content scope are key inputs into developing the training plan and training session curriculum. During this phase, we conduct a thorough needs assessment of the support and training provided to customers and begin to identify the individuals who will be involved in training reviews and train-the-trainer activities, if applicable. The training team captures possible constraints and risks to be considered during development of the training plan, including feedback from previous voluntary customer surveys.

Design – In the design phase, the findings from the training analysis are used to develop a training plan and curriculum. The training plan provides a roadmap for subsequent training phases, outlining detailed approaches for material development, logistics management, delivery, and evaluation.

Our differentiation is a blended learning methodology that promotes interactive, engaging, and sustainable learning. In our previous experience with training on data quality and information management, we have found that an initial instructor-led course (that focuses on workflow), followed by computer-based training modules on tool functionality is the most effective learning approach. Reference materials are also valuable to customer staff as they begin applying the concepts introduced within the classes.

Develop – During this phase of the training methodology, training materials are developed and finalized. MicroHealth develops and establishes a training review process to confirm that training materials are finalized and approved prior to deployment.

Implement – MicroHealth works with the customer leadership and the specific training points of contacts to confirm training delivery approaches, including the logistics for the specific planned sessions.

Post Deployment Review

The key to a successful post deployment review is recognizing that the time spent on the project is just a small part of an ongoing time-line. For people and organizations that are working on similar projects in the future, it makes sense to learn as many lessons as possible, so that mistakes are not repeated in future projects. For organizations benefiting from the project, it makes sense to ensure that all desired benefits have been realized, and to understand what additional benefits can be achieved. A Post Deployment Review is conducted after completing a project. Its purpose is to evaluate whether project objectives were met, to determine how effectively the project was run, to learn lessons for the future, and to ensure that the organization gets the greatest possible benefit from the project.

Sustainment

Whether in help desk operations, system maintenance, or managing a data center, we use ITIL-based practices for systems support and sustainment. An ITIL approach provides guidance to IT Service Management on how to provide quality IT services, as well as the processes, functions, and other capabilities needed to support them. Consistent with our CMMI practices, we track project measures and monitor trends as early indicators of problems so that we can take corrective measures.
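As a minimal illustration of tracking measures and monitoring trends, the sketch below flags a metric whose recent average drifts above its baseline; the metric, window size, and threshold are hypothetical choices rather than prescribed CMMI or ITIL values.

```python
# Minimal sketch of trend monitoring as an early indicator of problems.
# The metric, window size, and threshold are hypothetical illustrations.

from statistics import mean

def trend_alert(samples: list[float], window: int = 3, threshold: float = 0.10) -> bool:
    """Return True when the average of the most recent `window` samples exceeds
    the average of the preceding samples by more than `threshold` (10%)."""
    if len(samples) < window * 2:
        return False  # not enough history to judge a trend
    recent = mean(samples[-window:])
    baseline = mean(samples[:-window])
    return baseline > 0 and (recent - baseline) / baseline > threshold

# Example: weekly backlog of open help desk tickets trending upward.
weekly_open_tickets = [40, 42, 41, 43, 48, 52, 57]
if trend_alert(weekly_open_tickets):
    print("Upward trend detected; investigate and take corrective measures.")
```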

Service Strategy.  Our service strategy approach considers the perspective, position, plans, and patterns that a service provider needs to execute to meet the business requirements.  The team provides analysis and clear identification of the definition of services, documentation, and coordination of how service assets are used, as well as defining efficient service management processes.

Service Design.  Our service design approach ensures IT service designs are aligned with the government's IT practices, processes, policies, and desired strategy to facilitate the introduction of these services into supported environments, ensuring quality service delivery, customer satisfaction, and cost-effective service provision.  We understand the importance of delivering adaptable solutions that accommodate future functionality and technology to enhance the system environment and capabilities.

Service Transition.  Our service transition approach ensures that new, modified, or retired services meet the expectations of our stakeholders.  The objectives of well-executed service transition are to plan and manage service changes efficiently and effectively, set correct expectations on the performance and use of new or changed services, and validate that service changes create the expected business value.  In order to achieve these objectives, the team plans and manages the capacity, availability, and resources required to manage the transitions.

Service Operation.  Our service operation approach coordinates and executes the activities and processes required to deliver and manage services at the levels agreed in Service Level Agreements (SLAs) with the team and customers.  Our service operation is also responsible for the ongoing management of the technology used to deliver and support services. The team delivers support and training for the ongoing management of that technology.
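To make SLA-driven operation concrete, the following sketch checks incident resolution times against an agreed target; the eight-hour target and the sample incidents are hypothetical, not drawn from any actual agreement.

```python
# Minimal sketch of checking incident resolution times against an SLA target.
# The 8-hour target and the sample incidents are hypothetical.

from datetime import datetime, timedelta

SLA_RESOLUTION_TARGET = timedelta(hours=8)

incidents = [
    {"id": "INC-001", "opened": datetime(2024, 1, 8, 9, 0),  "resolved": datetime(2024, 1, 8, 15, 30)},
    {"id": "INC-002", "opened": datetime(2024, 1, 8, 10, 0), "resolved": datetime(2024, 1, 9, 11, 0)},
]

def sla_breaches(records: list[dict]) -> list[str]:
    """Return the ids of incidents whose resolution time exceeded the SLA target."""
    return [r["id"] for r in records
            if r["resolved"] - r["opened"] > SLA_RESOLUTION_TARGET]

breached = sla_breaches(incidents)
print(f"{len(incidents) - len(breached)} of {len(incidents)} incidents met the SLA; breaches: {breached}")
```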

Continual Service Improvement (CSI).  Our CSI approach aligns IT services with changing requirements by identifying and implementing improvements to IT services that support business processes.  Our CSI identifies ways to improve service and workflow effectiveness and cost efficiencies. The objectives are to review, analyze, prioritize, and make recommendations on improvement opportunities; review and analyze service level achievement; and improve the cost effectiveness of delivering IT services without sacrificing customer satisfaction.