In systems engineering and software engineering, requirements management encompasses the tasks that go into determining the needs or conditions to be met by a new or altered product, taking into account the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users. It is an early stage in the more general activity of requirements engineering, which encompasses all activities concerned with eliciting, analyzing, documenting, validating, and managing software or system requirements. Requirements management identifies, documents, and tracks the functional and technical requirements of a product or service. It also helps stakeholders prioritize those needs and track the changes that occur over time to ensure continuity. Lastly, it serves to validate that the delivered capabilities meet stakeholders' needs and expectations and are functionally and technically testable and traceable. Our approach uses applied methods to ensure that requirements are properly documented, actionable, measurable, testable, traceable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design.
In our agile approach, we help the customer elaborate on requirements as user stories in a Product Backlog. Ideally, we prefer to see customer products like the Baseline Requirements Document (BRD), Requirements Specification Document (RSD), and System Design Documents (SDD). However, the very nature of our agile methodology does not require these artifacts to be precise, nor do they need to be fully described. As with most projects, the requirements and specifications are sourced from the expected users or “the business”. We typically perform iterations in two- to four-week sprints. Each iteration involves a team working through a full software development cycle, including requirements analysis, design, development, and testing. The goal is an available software release at the end of each iteration. Multiple iterations are integrated into a baseline that is delivered to the customer product team for user acceptance testing.
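The backlog-and-sprint flow above can be sketched in a few lines of Python. This is a minimal illustration, not MicroHealth tooling: the story fields, point values, and the greedy capacity-based sprint-planning rule are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """A Product Backlog item in 'As a ..., I want ..., so that ...' form."""
    role: str
    goal: str
    benefit: str
    story_points: int = 0                      # relative effort estimate
    acceptance_criteria: list = field(default_factory=list)

    def summary(self) -> str:
        return f"As a {self.role}, I want {self.goal}, so that {self.benefit}."

# Backlog ordered by priority; the highest-priority stories are pulled into a sprint.
backlog = [
    UserStory("clinician", "to search patient records by ID",
              "I can review history quickly", story_points=3,
              acceptance_criteria=["results return in under 2 seconds"]),
    UserStory("administrator", "to export monthly usage reports",
              "I can track adoption", story_points=5),
]

def plan_sprint(backlog: list, capacity: int) -> list:
    """Greedy sprint planning: take stories in priority order until capacity is used."""
    sprint, used = [], 0
    for story in backlog:
        if used + story.story_points <= capacity:
            sprint.append(story)
            used += story.story_points
    return sprint

sprint = plan_sprint(backlog, capacity=6)
```

In a real engagement the estimation and prioritization would come from the customer and the team, not from a fixed rule; the point of the sketch is only that each backlog item is a small, testable increment.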
MicroHealth uses a model-driven approach to systems engineering, aligned with the stakeholders' architecture and the preferred techniques from the list above, that allows the stakeholders and other vendors to see inside the architecture without any proprietary constraints, which in turn allows capabilities to be integrated more quickly. MicroHealth provides the engineering effort required and prepares detailed technical data documentation for these efforts. This documentation reflects the latest design, configuration, integration, and installation concepts.
Business Reference Model (BRM)
The BRM design is a function-driven framework for describing business operations. This business reference model provides an organized, hierarchical construct for describing the day-to-day business operations of government using a functionally driven approach captured from the analysis. The BRM provides a framework that facilitates a functional (as opposed to organizational or technical) view of the stakeholders' lines of business.
Service Component Reference Model (SRM)
The SRM design is a business- and performance-driven functional framework that classifies service components with respect to how they support business and/or performance objectives. The SRM is structured across horizontal and vertical service domains that, independent of the business functions, can provide a leverageable foundation to support the reuse of applications, application capabilities, components, and business services, which in turn supports the discovery of data components. This helps tie the business aspect of government to the technical need to deliver the objectives identified in the analysis. We recommend a common-services approach using an open-standards-based, open architecture, where services handle standard application integration activities (allowing applications to talk to each other) such as exception management, management of reference data, and other interactions with enterprise standard systems.
Technical Reference Model (TRM)
The TRM design is a component-driven, technical framework that categorizes the standards and technologies to support and enable the delivery of service components and capabilities. It also unifies existing customer-related performance indicator initiatives by providing a foundation to advance the reuse and standardization of technology and service components from a community-wide perspective. This model helps lower the barrier to access and participation from a technical perspective and maximize interoperability across data providing systems.
Data Reference Model (DRM)
The DRM describes, at an aggregate level, the data and information that support government program and business line operations. It enables the government to describe the types of interactions and exchanges that occur between their systems and other data-providing systems. The DRM categorizes government information into greater levels of detail. It also establishes a classification for government data and identifies duplicative data resources. A common data model will streamline information exchange processes within the government providing organizations and help others take advantage of the system as a platform for performance measuring and monitoring.
Cybersecurity Reference Model (CsRM)
The CsRM design ensures that as the system is designed, we incorporate the Defense security model. Our solution is based on commodity items that minimize overall life cycle costs. We recognize that this approach requires us to balance security implementation against the constraints of those commodity items. That is why, as we develop capabilities, our security engineers evaluate these constraints and apply methods and lessons learned to ensure that the resulting solution is compliant with the government security architecture. We follow NIST SP 800-53 and stakeholder cybersecurity policies to ensure that security decisions are documented, potential solutions are evaluated and identified, and the security of operational systems is maintained.
Quality Reference Model (QRM)
QRM ensures full traceability between the requirements of the community to the deliverable. A key aspect of this is a quality control program, which ensures defect-free products. We take an iterative approach following a “test-fix-test” technique throughout the development cycle to ensure that all software functions are as designed and free of defects and vulnerabilities, either intentional or unintentional. We incorporate both automated and manual software quality checks that assess not only code quality, but also vulnerabilities early and throughout the Agile development process. By taking this approach, defects are found much earlier in the systems integration process, which can significantly reduce risk and costs as opposed to the traditional test at final delivery approach.
Enterprise Architecture (EA)
Team MicroHealth combines these reference models into an EA that defines and illustrates key relationships and interactions between people, processes, and technology to produce better outcomes. We leverage EA frameworks such as TOGAF, DoDAF, FEA, and IEEE P1471 (Recommended Practice for Architectural Description of Software-Intensive Systems), along with stakeholder EA guidelines, to provide a foundational framework (e.g., UML, ERD, BPMN) for developing and representing architecture descriptions that ensure a common denominator for understanding, comparing, and integrating architectures across organizational boundaries. This approach guides integration engineering and support to ensure that future systems are aligned with users' needs and yield IT products that work well together, are not duplicative, and do not conflict with each other.
MicroHealth uses Agile development methods within a DevOps framework because of their ability to reduce product risk and deliver new capabilities to market faster. Our Agile framework provides the structure, planning, and control needed to deliver capability rapidly within the Government Acquisition Framework. Our methods promote development, teamwork, collaboration, and process adaptability throughout the life cycle of the project. Specifically, our Agile methodology breaks tasks into small increments with incremental planning. It is designed to provide the flexibility needed to adequately manage risk while allowing for differences in project size, complexity, scope, and duration. Smaller release cycles mean less complexity in the code, leading to significantly fewer bugs and a structure that is conducive to accepting changing requirements. A release does not necessarily mean a full-scale deployment; rather, our goal is to provide incremental deliverables that are well defined, coded, and ready to be demonstrated to gather insights from the community as early and as frequently as possible.
Our design concepts provide the software designer with a foundation from which more sophisticated methods can be applied. Our software architecture consists of reusable software components and components to be developed. Software requirements are allocated to one or more components of that architecture. The project follows the defined, documented processes to conduct object-oriented architectural and detailed software design of new software, to capture the design, and, if necessary, to reengineer the software to be reused. Emphasis is placed on good software engineering principles such as information hiding and encapsulation, providing a complete description of processing, and the definition of all software and hardware component interfaces to facilitate software integration and provide a basis for future growth. The Software Design Description (SDD) and Software Interface Design Description (SIDD) are produced, and the User Documentation Description (UDD) is updated. Satisfactory completion of a Software Design Review (SDR) as part of the sprint cycle serves as the entrance criterion to begin development within the sprint. A set of fundamental design concepts has evolved. They are:
- Abstraction – Abstraction is the process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically in order to retain only the information that is relevant for a particular purpose.
- Refinement – It is the process of elaboration. A hierarchy is developed by decomposing a macroscopic statement of function in a stepwise fashion until programming language statements are reached. In each step, one or several instructions of a given program are decomposed into more detailed instructions. Abstraction and refinement are complementary concepts.
- Modularity – Software architecture is divided into components called modules.
- Software Architecture – It refers to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. Good software architecture will yield a good return on investment with respect to the desired outcome of the project, e.g. in terms of performance, quality, schedule and cost.
- Control Hierarchy – A program structure that represents the organization of a program component and implies a hierarchy of control.
- Structural Partitioning – The program structure can be divided both horizontally and vertically. Horizontal partitions define separate branches of modular hierarchy for each major program function. Vertical partitioning suggests that control and work should be distributed top down in the program structure.
- Data Structure – It is a representation of the logical relationship among individual elements of data.
- Software Procedure – It focuses on the processing of each module individually.
- Information Hiding – Modules should be specified and designed so that information contained within a module is inaccessible to other modules that have no need for such information.
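The information hiding and encapsulation principles from the list above can be illustrated with a small module sketch. The class and field names are hypothetical examples, not part of any delivered system: callers use only the public interface, while the internal storage and bookkeeping stay private and can change freely.

```python
class PatientLookup:
    """Public interface hides the storage and caching strategy from other modules."""

    def __init__(self):
        self._records = {}      # private: internal data structure may change freely
        self._cache_hits = 0    # private: bookkeeping hidden from callers

    def add(self, patient_id: str, record: dict) -> None:
        """Public operation: store a record under an identifier."""
        self._records[patient_id] = record

    def find(self, patient_id: str):
        """Public operation: return the record, or None if it is unknown."""
        if patient_id in self._records:
            self._cache_hits += 1
        return self._records.get(patient_id)

# Other modules depend only on add() and find(); swapping the dict for a
# database or cache later would not change any caller.
lookup = PatientLookup()
lookup.add("P001", {"name": "example"})
record = lookup.find("P001")
```

Keeping the interface this narrow is what makes the "basis for future growth" mentioned above possible: the hidden internals can be reengineered without rippling changes through the rest of the architecture.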
Our software development uses best engineering practices and design patterns that not only address current issues and needed patches, but also carefully position the customer to achieve its modernization objectives. Specifically, we use a common-services/microservices approach with an open-standards-based, open architecture, where services handle standard application integration activities such as exception management, management of reference data, and other interactions with enterprise standard systems. Services are predefined endpoints that provide predefined functionality with known inputs and outputs. From the Enterprise Architecture meta-model point of view, services can be provided through different protocols (API, COM interfaces, .NET interfaces, RPC, FTP, Web Services, etc.). Specifically, our approach ensures:
- Accuracy – provide a configurable, standards-based, distributed solution that supports intelligent, accurate rules-based routing, message and protocol transformation, and message enhancement while ensuring timely and complete delivery of any message.
- Adaptability and Performance – adapt to rapidly changing business needs while maintaining an adequate performance for each priority within the enterprise.
- Speed and Ease of Implementation – designed for rapid development and integration, to minimize the time to implement and to integrate with customer systems, and to minimize the changes required to existing systems.
- Security, Reliability and Availability – provide configurable enterprise-level quality of service (QoS) to ensure that service communication is secure and reliable to meet the needs of the business.
- Distribution – services and computing are orchestrated with capabilities spread within a single organization, between two organizations, and among multiple organizations.
- Flexibility – flexibility to allow the customer to change and meet emerging needs with minimal effort and disruption to the function of the enterprise.
- Monitoring and Management Visibility and Control – built with tools and processes to support effective monitoring and management of the infrastructure, the processes, and the services enabled through it.
- Loose Coupling – products are loosely coupled, asynchronous solutions that support complex communication between service requesters and service providers across a diverse environment.
- Messaging Service and Abstraction Layer – messaging services and an abstraction layer that allow integration architects to adapt to changing business needs without writing code.
- Application Integration – simplify integration with customer and external systems, and provide for flexible reuse of business components within a system environment.
- Platform Independence – capable of implementation on a variety of computing infrastructures.
- Standards Compliance – open architecture and open standards compliance that follow customer enterprise architecture guidelines and industry standards.
- Scalability – ability to operate in austere, low-computing environments and to scale in a distributed or centralized computing environment.
- Section 508 Compliance (Rehabilitation Act of 1973, 29 U.S.C. 794d) – specifically, the procurement, development, maintenance, or integration of electronic and information technology under this contract must comply with the applicable accessibility standards issued by the Architectural and Transportation Barriers Compliance Board at 36 CFR Part 1194.
- IPv6 Compliance – leverage the software transition mechanisms of RFC 4038, RFC 3493, and RFC 3542 for systems that are not yet IPv6 compliant.
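The service concept above — a predefined endpoint with known inputs and outputs, independent of the transport protocol — can be sketched as a small registry and dispatcher. The registry, service name, and reference-data codes here are illustrative assumptions; a real deployment would sit behind REST, RPC, or messaging infrastructure.

```python
import json

SERVICE_REGISTRY = {}

def service(name: str):
    """Register a callable as a named service endpoint with a known contract."""
    def register(fn):
        SERVICE_REGISTRY[name] = fn
        return fn
    return register

@service("reference-data/lookup")
def lookup_reference_data(request: dict) -> dict:
    # Known input: {"code": str}; known output: {"code": str, "label": str}
    codes = {"M": "Male", "F": "Female", "U": "Unknown"}
    return {"code": request["code"], "label": codes.get(request["code"], "Invalid")}

def dispatch(name: str, payload: str) -> str:
    """Protocol-agnostic dispatch: the same service could be exposed over
    REST, RPC, or a message bus without changing the service itself."""
    request = json.loads(payload)
    response = SERVICE_REGISTRY[name](request)
    return json.dumps(response)

result = dispatch("reference-data/lookup", '{"code": "F"}')
```

Because callers depend only on the named contract and not on the implementation, services like this reference-data lookup can be reused across applications, which is the loose-coupling and reuse property the bullets above describe.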
Our DevOps framework builds upon Agile and Lean principles, thus reinforcing, extending, and amplifying the benefits of this approach. Our DevOps approach, like Agile, is designed to overcome the shortcomings of traditional waterfall approaches while still supporting the software development lifecycle (SDLC) activities outlined below:
Developing and Testing Against Production-like Systems
MicroHealth’s DevOps approach, known as shift left, addresses operational concerns as early as possible in the SDLC. Specifically, this approach calls for development and quality assurance (QA) teams to develop and test against systems that behave like the production system.
Deploying with Repeatable, Reliable Processes
We use automation tools to create iterative, repeatable, and reliable processes. This allows for continuous, automated deployment and testing, resulting in greater process efficiencies and reduced manpower.
Our collaborative approach enables diverse sets of developers, architects, functional subject matter experts (SMEs), etc., to work together and achieve continuous integration.
The software developers’ work is continuously integrated and validated. Routine, periodic integration of results enables early discovery and resolution of integration risks and issues.
Monitor and Validate Operational Quality
We monitor application quality early in the SDLC, through automated testing of the application’s functional and non-functional features, thus providing early notice about operational or quality issues that may occur in production.
MicroHealth employs a “test-fix-test” approach with continuous integration throughout the SDLC. We achieve a quicker feedback cycle by 1) automating configuration and refreshing of test data, 2) deploying the software to the test environment, and 3) executing automated tests.
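The three-step feedback cycle above can be sketched as a short pipeline: (1) refresh test data and configuration, (2) deploy the build to the test environment, (3) run automated tests. Every function here is an illustrative stand-in for real CI tooling, not an actual MicroHealth pipeline.

```python
def refresh_test_data() -> dict:
    """Step 1: return a fresh, known-good configuration and test dataset."""
    return {"config": "default", "records": [1, 2, 3]}

def deploy_to_test(build_id: str, env: dict) -> dict:
    """Step 2: simulate deploying the build into the refreshed environment."""
    return {**env, "build": build_id}

def run_automated_tests(env: dict) -> bool:
    """Step 3: verify the deployed build against the environment."""
    return env["build"].startswith("build-") and len(env["records"]) > 0

def feedback_cycle(build_id: str) -> bool:
    """One complete test-fix-test feedback cycle for a given build."""
    env = refresh_test_data()
    env = deploy_to_test(build_id, env)
    return run_automated_tests(env)

ok = feedback_cycle("build-42")
```

Because each step is automated, the whole cycle can run on every integration, which is what shortens the feedback loop compared with a manual test pass at final delivery.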
Delivery follows from continuous integration and involves automating the deployment of the software to the testing, system testing, staging, and production environments.
Amplifying Feedback Loops
We enable the developer to respond and make changes more rapidly through knowledge transfer and knowledge exchange.