The client is a US-based startup at the forefront of transforming the elementary education sector through innovative remote learning solutions. Their mission focuses on researching, developing, and delivering cutting-edge educational technologies that enhance the learning experience for children and educators alike. By creating a robust, user-friendly platform, they aim to provide tools that facilitate better tracking of students’ educational progress, making it easier for parents, teachers, and administrators to stay engaged with each child’s development.

Client requirements

The client, a company operating in the EdTech sector, was actively developing a web and mobile application aimed at tracking the educational progress of children. As the platform expanded in functionality, the client recognized the need to strengthen their QA processes to ensure consistent product quality across frequent releases. While a manual QA team was already in place, the increasing volume of regression checks and short release cycles made it difficult to maintain quality using manual testing alone. This led to the decision to bring in our team to design, implement, and maintain a scalable test automation solution.

The primary requirements included conducting thorough requirements analysis for each new feature, preparing structured and detailed test plans, integrating into their Agile sprint cycle, and aligning the automation efforts with ongoing development. The client also requested assistance in building a long-term testing roadmap that would support their internal CI/CD pipeline, based on GitHub Actions, and adapt to their multi-environment setup (DEV and STAGING).

Initially, our focus was on mobile test automation to improve the stability of app releases. Over time, the scope expanded to include broader layers of the system, allowing for faster and more reliable regression and release certification. One of the key challenges the client faced prior to automation was a high rate of undetected issues slipping into production. Due to time constraints, the manual QA team was unable to perform comprehensive checks before each release, and feedback often came directly from end users. Automating critical paths significantly reduced this risk and helped improve overall development velocity.

Challenge

Implementing a robust and scalable test automation solution for this project came with several unique challenges, both technical and organizational. While the initial scope seemed manageable, the long-term vision required a forward-thinking approach to ensure that the framework could handle significantly increased complexity down the line.

One of the first and most critical challenges was selecting and designing a scalable automation framework structure. The client’s platform included four distinct mobile applications, each available on both iOS and Android — resulting in eight unique target platforms. To maintain efficiency and avoid test bottlenecks, it was essential to build a framework that could execute tests in parallel across all platforms, while remaining maintainable and flexible for future growth. This required careful consideration of the test execution strategy, environment configuration, and codebase modularity.

Another significant challenge revolved around test reporting. The project demanded integration with multiple reporting tools, each used by different stakeholder groups:

  • TestRail was used by the manual QA team for test case management,
  • Allure Reports were essential for upper management visibility,
  • Slack and Email integrations provided real-time feedback loops to the delivery team.

However, these tools counted and reported test results differently, which led to inconsistencies and confusion in the early stages. To overcome this, we developed a custom reporting layer within the WDIO (WebdriverIO) configuration, aligning all outputs to ensure consistent and accurate reporting across all platforms and stakeholders.
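
To illustrate the idea (the class and counting rules shown here are a simplified sketch, not the client's production reporter), a custom WebdriverIO reporter can tally each test exactly once by its final state and emit a single canonical summary that every downstream channel consumes:

```typescript
import WDIOReporter, { TestStats, RunnerStats } from '@wdio/reporter';

// Sketch of a unifying reporter: every test is counted once, by its final
// state, so retries and tool-specific counting rules cannot skew totals.
export default class UnifiedReporter extends WDIOReporter {
  private counts = { passed: 0, failed: 0, skipped: 0 };

  onTestEnd(test: TestStats): void {
    if (test.state === 'passed') this.counts.passed += 1;
    else if (test.state === 'failed') this.counts.failed += 1;
    else this.counts.skipped += 1;
  }

  onRunnerEnd(runner: RunnerStats): void {
    // One canonical summary; the TestRail, Slack, and Email hooks all read
    // from this object instead of counting results themselves.
    this.write(JSON.stringify({
      platform: runner.sanitizedCapabilities,
      ...this.counts,
    }));
  }
}
```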

We also encountered structural challenges in the existing test case base. The manual test cases, while thorough, were not optimized for automation. Many lacked clarity or reusability, or contained redundant checks that complicated automated coverage. To address this, we initiated a test case refinement process in close collaboration with the manual QA team. This involved reworking existing cases, creating new ones optimized for automation, and eliminating unnecessary steps, ultimately improving both automated and manual testing quality.

From an infrastructure perspective, there were environmental and connectivity constraints that added additional layers of complexity. The STAGING environment was shared across departments, making it difficult to isolate automated runs or guarantee stable test data. Moreover, since the client’s infrastructure was secured behind an internal VPN, we had to set up tunneled connections for both the CI/CD pipelines (hosted on GitHub) and our BrowserStack integration — ensuring secure, uninterrupted communication between services during automated test runs.
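
On the BrowserStack side, the tunneling relied on the standard BrowserStack Local mechanism. A minimal sketch of how that is wired into a WebdriverIO config through @wdio/browserstack-service (the localIdentifier value is illustrative, not the client's actual configuration):

```typescript
// wdio.conf.ts (excerpt) — sketch of routing device-cloud traffic through
// the VPN-restricted network via a BrowserStack Local tunnel.
export const config: Partial<WebdriverIO.Config> = {
  user: process.env.BROWSERSTACK_USERNAME,
  key: process.env.BROWSERSTACK_ACCESS_KEY,
  services: [
    ['browserstack', {
      browserstackLocal: true, // start/stop a Local tunnel around each run
      opts: {
        forceLocal: true,                      // resolve all traffic through the tunnel
        localIdentifier: 'staging-vpn-tunnel', // illustrative name
      },
    }],
  ],
};
```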

Solution

To address the client’s challenges and lay a foundation for long-term scalability, we designed and implemented a unified, modular automation framework built with WebdriverIO. Given the complexity of maintaining eight unique platform variants (four apps across iOS and Android), we structured the project within a single shared repository. Using object-oriented programming principles, we created a flexible architecture that allowed us to reuse core logic while easily overriding platform-specific behaviors. This significantly reduced code duplication and simplified future maintenance.
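
As a simplified illustration of that pattern (class names and selectors below are invented for the example, not the client's code), shared flows live once in a base class while platform subclasses override only what differs:

```typescript
import { $, driver } from '@wdio/globals';

// Shared behavior lives in an abstract base screen...
abstract class LoginScreen {
  protected abstract get usernameField(): ReturnType<typeof $>;
  protected abstract get passwordField(): ReturnType<typeof $>;
  protected abstract get submitButton(): ReturnType<typeof $>;

  // ...so the flow itself is written once and reused by every app variant.
  async login(user: string, pass: string): Promise<void> {
    await this.usernameField.setValue(user);
    await this.passwordField.setValue(pass);
    await this.submitButton.click();
  }
}

// Platform subclasses override only the selectors.
class AndroidLoginScreen extends LoginScreen {
  protected get usernameField() { return $('android=new UiSelector().resourceId("username")'); }
  protected get passwordField() { return $('android=new UiSelector().resourceId("password")'); }
  protected get submitButton()  { return $('android=new UiSelector().resourceId("submit")'); }
}

class IosLoginScreen extends LoginScreen {
  protected get usernameField() { return $('~username-field'); } // accessibility id
  protected get passwordField() { return $('~password-field'); }
  protected get submitButton()  { return $('~submit-button'); }
}

// Resolve the platform-specific implementation from the active session.
export const loginScreen: LoginScreen = driver.isAndroid
  ? new AndroidLoginScreen()
  : new IosLoginScreen();
```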

Our test execution pipeline was fully integrated into the client’s GitHub Actions (GHA) environment. We established multiple workflows (illustrated in the sketch after this list), including:

  • Scheduled daily runs for proactive issue detection,
  • Suite-based execution for targeted testing of new builds,
  • On-demand runs triggered by stakeholders, allowing for real-time validation and reporting.
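
Each workflow parameterized the same WebdriverIO entry point rather than maintaining separate configurations. A sketch of that idea, assuming hypothetical APP and PLATFORM variables exported by each workflow (the env var names and app paths are ours, for illustration):

```typescript
// wdio.conf.ts (excerpt) — sketch of CI-driven parameterization.
// APP and PLATFORM are hypothetical variables set by each GHA workflow.
const app = process.env.APP ?? 'all';
const platform = process.env.PLATFORM ?? 'android';

export const config: Partial<WebdriverIO.Config> = {
  // Scheduled runs cover every app; targeted runs narrow the spec glob.
  specs: app === 'all'
    ? ['./apps/*/specs/**/*.ts']
    : [`./apps/${app}/specs/**/*.ts`],
  capabilities: [
    platform === 'ios'
      ? { platformName: 'ios', 'appium:app': process.env.IOS_APP_URL }
      : { platformName: 'android', 'appium:app': process.env.ANDROID_APP_URL },
  ],
};
```

An on-demand run then reduces to something like `APP=teachers PLATFORM=ios npx wdio run wdio.conf.ts` (app name illustrative), which a stakeholder can trigger from the GHA interface.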

Test execution was distributed via BrowserStack, enabling parallel runs across multiple devices and platforms. This drastically improved execution time and provided better coverage within limited testing windows.
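
Concretely, BrowserStack parallelism in WebdriverIO comes down to a capabilities list plus a maxInstances cap; device names and the parallelism level in this sketch are illustrative:

```typescript
// wdio.conf.ts (excerpt) — sketch of parallel device coverage on BrowserStack.
export const config: Partial<WebdriverIO.Config> = {
  maxInstances: 4, // sessions running side by side (illustrative)
  capabilities: [
    {
      platformName: 'android',
      'appium:app': process.env.ANDROID_APP_URL,
      'bstack:options': { deviceName: 'Google Pixel 7', osVersion: '13.0' },
    },
    {
      platformName: 'ios',
      'appium:app': process.env.IOS_APP_URL,
      'bstack:options': { deviceName: 'iPhone 14', osVersion: '16' },
    },
    // ...one entry per app/platform combination under test
  ],
};
```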

We introduced a structured release certification process, tailored specifically to their Agile sprint cycle. Each release began with a sanity check — lightweight, high-priority test cases to quickly verify the integrity of a new build. Once passed, the system automatically advanced to a full regression suite, ensuring all critical paths were validated before sign-off.
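
The gate itself can be expressed as a small orchestration script. The sketch below assumes suites named sanity and regression are defined in the WDIO config; it is a simplified stand-in for the actual pipeline logic:

```typescript
// certify-release.ts — sketch of the certification gate.
import { spawnSync } from 'node:child_process';

function runSuite(suite: string): boolean {
  const result = spawnSync('npx', ['wdio', 'run', 'wdio.conf.ts', '--suite', suite], {
    stdio: 'inherit',
  });
  return result.status === 0;
}

// Lightweight sanity first: fail fast before spending time on regression.
if (!runSuite('sanity')) {
  console.error('Sanity check failed: build rejected before regression.');
  process.exit(1);
}

// Only a green sanity run advances to the full regression suite.
process.exit(runSuite('regression') ? 0 : 1);
```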

To optimize test case quality, we restructured the suite around the Page Object Model and organized tests into functional modules, improving readability, maintainability, and alignment with business flows. Manual test cases from TestRail were collaboratively refined with the QA team to improve automation readiness, which included rewriting, cleaning up redundancies, and introducing missing scenarios.
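
A condensed example of the resulting structure (screen, selector, and module names are invented for illustration; the spec relies on WDIO's standard Mocha globals):

```typescript
import { $, $$, expect } from '@wdio/globals';

// Page object for one functional module: tests talk to methods,
// never to raw selectors.
class ProgressDashboardScreen {
  get weeklySummary() { return $('~weekly-summary'); } // accessibility id
  get subjectRows()   { return $$('~subject-row'); }

  async openSubject(name: string): Promise<void> {
    const rows = await this.subjectRows; // resolve the element array
    for (const row of rows) {
      if ((await row.getText()) === name) {
        await row.click();
        return;
      }
    }
    throw new Error(`Subject "${name}" not found on the dashboard`);
  }
}
export const progressDashboard = new ProgressDashboardScreen();

// tests/specs/progress/dashboard.spec.ts — specs grouped by functional module
describe('Progress tracking: dashboard', () => {
  it('shows the weekly summary', async () => {
    await expect(progressDashboard.weeklySummary).toBeDisplayed();
  });
});
```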

We also built a custom reporting layer that synchronized test outcomes across TestRail, Allure Reports, Slack, and Email, delivering consistent and accurate feedback to all stakeholders — from QA teams to upper management.
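
The Slack leg of that layer, for instance, fits naturally into the WDIO onComplete hook. This sketch assumes a SLACK_WEBHOOK_URL environment variable and a simplified message format; the TestRail, Allure, and Email integrations were fed from the same summary:

```typescript
// wdio.conf.ts (excerpt) — sketch of pushing a run summary to Slack.
export const config: Partial<WebdriverIO.Config> = {
  async onComplete(exitCode, _config, _capabilities, results) {
    const text =
      `Automated run finished (exit code ${exitCode}): ` +
      `${results.passed} passed, ${results.failed} failed.`;
    await fetch(process.env.SLACK_WEBHOOK_URL as string, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text }), // Slack incoming-webhook payload
    });
  },
};
```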

Through consistent collaboration, regular syncs, and comprehensive documentation, we helped shape a transparent, scalable testing ecosystem. The results were impactful: the number of production bugs dropped by 40%, and the overall release certification time was reduced by approximately 70%, enabling faster and more confident delivery cycles.

40% fewer bugs in production

70% shorter release certification time


Tech details

  • Automation Framework:
    • Unified WebdriverIO framework for 8 platforms (4 apps, iOS & Android)
    • Single repository for all platforms, using Object-Oriented Programming (OOP) for modularity and reusability
  • CI/CD Integration:
    • GitHub Actions (GHA) used for test automation triggers
    • Scheduled daily test runs for proactive issue detection
    • Suite-based test execution for new builds and ad-hoc management requests
  • Parallel Execution:
    • BrowserStack used to run tests in parallel across multiple devices and platforms
  • Test Coverage Strategy:
    • Sanity checks before full regression to quickly verify build stability
    • Full regression suites run only after sanity checks pass
    • Page Object Model used for test organization, making code modular and maintainable
    • Tests structured by functional modules for better readability and organization
  • Reporting:
    • Integrated TestRail for test case management
    • Custom reporting layer to synchronize results across Allure Reports, Slack, and Email
    • Ensured consistent, accurate reporting across multiple tools
  • Collaboration and Documentation:
    • Close collaboration with the manual QA team to optimize and refine test cases for automation
    • Extensive documentation to align teams and streamline processes
  • Infrastructure Challenges:
    • Set up tunneled connections for secure CI/CD and BrowserStack integration due to internal VPN
    • Worked with shared STAGING environments across multiple departments to ensure smooth automated testing
  • Results:
    • Reduced production bugs by 40%
    • Decreased release certification time by 70%
