Testing

Our testing and evaluation approach provides stakeholders with information about the quality of the product or service under test. Software testing also offers an objective, independent view of the software, allowing the business to appreciate and understand the risks of implementation. Areas covered include test planning, software quality assurance testing, requirements analysis, test development, test execution, test reporting, test report analysis, defect retesting, regression testing, and test closure. We incorporate best practices such as:

  • Software Component Testing BS 7925-2 and Software Testing Vocabulary BS 7925-1

  • Classification of Software Anomalies IEEE 1044 and IEEE 1044.1

  • Standard for Software Unit Testing IEEE 1008

  • Software Test Documentation IEEE 829

  • Software Verification & Validation Plans IEEE 1012 and Software Requirements Specifications IEEE 830, and

  • Software Inspections IEEE 1028

We work with the customer to develop a Master Test Plan (MTP) that follows program guidelines and leverages the IEEE guidance above, then refine test methods as appropriate to the objective, such as:

  • White Box Testing – Structural testing of the software, exercising its internal structures, interfaces, and other technical features rather than its externally visible functionality; we apply this at the unit, integration, and system testing levels

  • Black Box Testing – Functional testing that uses test cases built around application specifications and requirements to verify the software meets the needs it was designed to serve (a black-box sketch follows this list). We apply this at all levels of testing, from unit through acceptance testing.

  • Performance Testing – Applied within both white-box and black-box testing, we determine how well the system performs using metrics such as memory usage, processor consumption, and query response times to verify the system meets its stated capacity goals. Examples of performance testing include load testing, volume testing, scalability testing, and stress testing.

  • Other Test Types – Static testing, including reviews, walkthroughs, and inspections; and dynamic testing, including regression testing, smoke testing, compatibility testing, installation testing, Section 508 accessibility testing, and usability testing.
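To illustrate the black-box style referenced above, the sketch below expresses specification-derived cases (input and expected output) as an automated test. The parse_dosage function and its specification are hypothetical examples rather than program artifacts, and pytest is assumed as the test runner.

```python
import re

import pytest


def parse_dosage(text: str):
    """Stand-in for the component under test. Hypothetical spec:
    '<number> <unit>' -> (float, unit); anything else raises ValueError."""
    m = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*([a-zA-Z]+)\s*", text)
    if not m:
        raise ValueError(f"unparseable dosage: {text!r}")
    return float(m.group(1)), m.group(2)


# Black-box cases: derived from the specification (input -> expected
# output), with no reference to the implementation's internals.
@pytest.mark.parametrize("text, expected", [
    ("500 mg", (500.0, "mg")),  # nominal case from the spec
    ("0.5 g", (0.5, "g")),      # decimal quantity
    ("10ml", (10.0, "ml")),     # missing space still accepted
])
def test_parse_dosage_valid(text, expected):
    assert parse_dosage(text) == expected


def test_parse_dosage_rejects_malformed_input():
    with pytest.raises(ValueError):  # spec: malformed input must raise
        parse_dosage("take two")
```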

We conduct Qualification Testing according to Qualification Test Plans and Procedures and document our results in a Test Evaluation Summary (Test Report). The objective of our incremental testing method is to help ensure that the system under review can continue through development, demonstration, test, and implementation while meeting the stated functionality and performance within cost, schedule, risk, and other programmatic constraints. At a high level, our incremental approach evaluates:

  • Interface Integrity – Internal and external module interfaces are tested as each module or cluster is added to the software

  • Functional Validity – Test to uncover functional defects in the software

  • Information Content – Test for errors in local or global data structures

  • Performance – Verify that specified performance bounds are met

MicroHealth develops test cases for each requirement before detailed engineering or software coding commences. Our test cases describe how the code should behave, the output expected for a given input, and how it may fail, allowing developers to design for testability early in their development process. Our test cases also describe the expected result from the system, the circumstances under which the result must be provided, and scoring rules for Use Cases. Our objective scoring addresses technical specifications such as accuracy, timeliness, completeness, and correct data sources.
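As a minimal sketch of this test-first pattern, the example below captures a hypothetical eligibility requirement as parameterized cases before any implementation exists; the requirement, names, and values are illustrative assumptions.

```python
import pytest


def check_eligibility(age: int, enrolled: bool) -> bool:
    raise NotImplementedError  # implementation comes after the tests


REQUIREMENT_CASES = [
    # (age, enrolled, expected) -- expected results taken from the
    # hypothetical requirement, written before any code exists
    (65, True, True),    # enrolled seniors are eligible
    (64, True, False),   # boundary: one year under the age threshold
    (65, False, False),  # not enrolled -> not eligible
]


@pytest.mark.parametrize("age, enrolled, expected", REQUIREMENT_CASES)
def test_eligibility_requirement(age, enrolled, expected):
    # Every case fails until a developer writes code that satisfies
    # the requirement, which is the point of writing tests first.
    assert check_eligibility(age, enrolled) == expected
```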

After source code has been developed, reviewed, and verified for correspondence to the component-level design, unit test case design begins. Using the component-level design description as a guide, we help uncover errors within the boundary of each module. We pay particular attention to how module interfaces are tested, ensuring proper information flow, since interfaces are a typical source of data-quality errors. We also evaluate local data to ensure that integrity is maintained, boundary conditions are tested, and all error-handling paths are exercised. We perform Software Code Quality Checks (SCQC) at unit, integration, and system testing by scanning source code, executables, and related artifacts.
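A minimal unit-level sketch of the boundary-condition and error-path coverage described above, assuming a hypothetical clamp_percentage module; the names and bounds are illustrative.

```python
import pytest


def clamp_percentage(value: float) -> float:
    """Module under test: clamp a reading into the 0-100 range."""
    if value != value:  # NaN compares unequal to itself
        raise ValueError("reading is not a number")
    return max(0.0, min(100.0, value))


@pytest.mark.parametrize("value, expected", [
    (0.0, 0.0),      # lower boundary, exactly on the edge
    (100.0, 100.0),  # upper boundary, exactly on the edge
    (-0.1, 0.0),     # just below the lower bound
    (100.1, 100.0),  # just above the upper bound
])
def test_boundary_conditions(value, expected):
    assert clamp_percentage(value) == expected


def test_error_handling_path():
    with pytest.raises(ValueError):  # the error path is exercised too
        clamp_percentage(float("nan"))
```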

Our testing approach also evaluates behavior when two or more modules are integrated. This helps uncover problems not identified during unit testing that inadvertently affect the functions or sub-functions of related modules when they are combined to produce a major capability. Rather than simply testing all functions together as in system testing, this technique helps narrow down the root causes of issues early in the process, before diagnosis is complicated by the entire code set. For this we apply either a top-down or a bottom-up approach, depending on the specific situation.
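The sketch below illustrates the top-down variant with hypothetical modules: the upper-level ReportService is exercised while its not-yet-integrated records store is replaced with a stub, so interface defects surface before the full code set is assembled.

```python
from unittest.mock import Mock


class ReportService:
    """Upper-level module: depends on a records store being integrated."""

    def __init__(self, store):
        self.store = store

    def summary(self, patient_id: str) -> str:
        records = self.store.fetch(patient_id)  # interface under test
        return f"{patient_id}: {len(records)} records"


def test_report_service_against_stubbed_store():
    # Stub stands in for the lower-level module not yet integrated.
    store = Mock()
    store.fetch.return_value = [{"id": 1}, {"id": 2}]

    svc = ReportService(store)
    assert svc.summary("P-100") == "P-100: 2 records"
    # Interface-integrity check: correct call, correct arguments.
    store.fetch.assert_called_once_with("P-100")
```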

We perform regression testing to verify the results of bug fixes within a particular piece of code and to ensure that no additional errors or defects have been introduced into the software. We take a DevTestOps approach to continuous testing, capitalizing on automation, particularly for regression tests, which frees testers to concentrate on new functionality and negative testing. This also frees time for unscripted testing, in which testers try to break the system by not following the designed workflow, since human beings are not as predictable as test cases in their daily use. Specifically, we help ensure that changes in a release do not introduce unintended behavior or additional errors. Properly designing and documenting test cases so that tests are repeatable, and using test generators, is the key to our successful regression testing approach.
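As a sketch of how corrected defects become repeatable regression cases, the example below pins each previously fixed bug to a parameterized test that re-runs automatically on every build; the function, defect IDs, and data are hypothetical.

```python
import pytest


def normalize_ssn(raw: str) -> str:
    """Component whose past defects are pinned by the cases below."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) != 9:
        raise ValueError("SSN must contain exactly 9 digits")
    return f"{digits[0:3]}-{digits[3:5]}-{digits[5:9]}"


REGRESSION_CASES = [
    # (defect id, input, expected) -- one entry per previously fixed bug
    ("BUG-101", "123456789", "123-45-6789"),      # unformatted input
    ("BUG-214", "123-45-6789", "123-45-6789"),    # already formatted
    ("BUG-307", " 123 45 6789 ", "123-45-6789"),  # stray whitespace
]


@pytest.mark.parametrize("defect, raw, expected", REGRESSION_CASES)
def test_regressions_stay_fixed(defect, raw, expected):
    assert normalize_ssn(raw) == expected, f"{defect} has regressed"
```

Because the cases are data rather than logic, each new defect fix extends the list without new test code, which keeps the suite repeatable and cheap to run continuously.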

We perform functional testing as part of the black-box process, where we evaluate whether the system does what it is intended to do. Specifically, we identify the functions the software is expected to perform by applying test cases to use cases and comparing actual versus expected outputs. A number of tools can help automate parts of this testing, but not all of it. We identify candidates for test automation, particularly in regression testing, based on several factors (a simple scoring sketch follows this list):

  • Complexity, where testing may require multiple builds/patches/fixes in a diverse environment with a simulated concurrent user load

  • Repetitive tasks that can be satisfied by automation tools

  • Level of effort to build the scripts vs. performing the actual tasks

  • Cost-benefit ratio

  • Confidence interval of test automation results

  • Stability of features to be automated
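One way to make these factors operational is a simple weighted score; the sketch below is illustrative only, and the names, weights, scales, and cut-off are assumptions rather than program values.

```python
from dataclasses import dataclass


@dataclass
class TestCandidate:
    name: str
    repetitiveness: int  # 1-5: how often the test is re-run
    stability: int       # 1-5: how stable the feature is
    script_effort: int   # 1-5: cost to build/maintain the script
    manual_effort: int   # 1-5: cost of running the test by hand


def automation_score(c: TestCandidate) -> int:
    # Favor repetitive tests on stable features where scripting is
    # cheaper than repeated manual execution (illustrative weights).
    benefit = 2 * c.repetitiveness + c.stability + c.manual_effort
    return benefit - 2 * c.script_effort


candidates = [
    TestCandidate("login regression", 5, 5, 2, 4),
    TestCandidate("one-off data migration check", 1, 2, 4, 3),
]
for c in sorted(candidates, key=automation_score, reverse=True):
    verdict = "automate" if automation_score(c) >= 8 else "keep manual"
    print(f"{c.name}: score {automation_score(c)} -> {verdict}")
```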

We also evaluate the end-to-end functionality of the system. Because integration testing is performed incrementally, the risks of complex system testing are reduced. Our system testing approach includes (a brief sketch follows this list):

  • Recovery testing – Checks the system’s ability to recover from failures

  • Security testing – Verifies that system protection mechanisms prevent improper penetration or data alteration

  • Stress testing – Programs are checked to see how well they deal with abnormal resource demands (i.e., quantity, frequency, or volume)

  • Performance testing – Designed to test the run-time performance of software, especially real-time software
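The sketch below combines the stress and performance checks from this list: a hypothetical lookup operation is driven at abnormal volume and the elapsed time is asserted against an assumed budget.

```python
import time


def lookup(record_id: int) -> dict:
    """Stand-in for the system operation under test."""
    return {"id": record_id, "status": "ok"}


def run_stress_and_response_time(volume: int = 10_000,
                                 budget_seconds: float = 2.0) -> None:
    start = time.perf_counter()
    for i in range(volume):  # abnormal volume (the stress condition)
        assert lookup(i)["status"] == "ok"
    elapsed = time.perf_counter() - start
    # Performance bound: the whole batch must finish within budget.
    assert elapsed < budget_seconds, (
        f"took {elapsed:.2f}s for {volume} lookups"
    )


if __name__ == "__main__":
    run_stress_and_response_time()
    print("stress/performance check passed")
```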

Prior to release, it is important to ensure that the software works correctly for the intended user in his or her normal work environment. The purpose of acceptance testing is to build confidence that the system is working rather than to find errors, since defects were corrected during the incremental testing approach. We perform two levels of acceptance testing:

  • Alpha Testing – A version of the complete software is tested by the customer under the supervision of the developer, at the developer’s site

  • Beta Testing – A version of the complete software is tested by the customer at his or her own site, without the developer present