Our client, a US-based startup in the advertising technology space, set out to solve a unique problem in the mobile monetization ecosystem. They developed a custom SDK designed to be embedded in Android applications, specifically targeting apps distributed through pirated (unofficial) installations.

This SDK offered a novel approach to mobile advertising: once it detected that an installation was not official (a pirated copy), it triggered a hidden layer of ad delivery mechanisms that engaged third-party ad networks based on specific user actions and in-app events.
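
To illustrate the kind of logic our team had to understand, here is a minimal sketch of one way an Android SDK can distinguish official from unofficial installs by inspecting the installer package name. The helper name and the list of recognized store installers are assumptions made for this example, not the client's actual implementation.

```kotlin
import android.content.Context
import android.os.Build

// Hypothetical helper (not the client's code) showing one way to detect an
// unofficial (sideloaded/pirated) installation: the installing package is
// missing or is not a known app store.
object InstallSourceChecker {

    private val OFFICIAL_INSTALLERS = setOf(
        "com.android.vending",   // Google Play
        "com.amazon.venezia"     // Amazon Appstore
    )

    fun isUnofficialInstall(context: Context): Boolean {
        val installer: String? = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
            context.packageManager
                .getInstallSourceInfo(context.packageName)
                .installingPackageName
        } else {
            @Suppress("DEPRECATION")
            context.packageManager.getInstallerPackageName(context.packageName)
        }
        // A null or unrecognized installer typically indicates a sideloaded build.
        return installer == null || installer !in OFFICIAL_INSTALLERS
    }
}
```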

With a growing user base – some of their partner apps reaching millions of downloads – the complexity of managing and testing the solution grew rapidly. The client needed a QA team that could not only test the product across different environments and platforms but also deeply understand the SDK's underlying behavior, data flows, and advertising logic. That’s when they brought our team on board.

Client’s Vision and Initial Needs

The client’s goal was to provide developers and studios with a seamless, revenue-generating ad experience that remained hidden from the user unless very specific, contextual triggers were met.

The SDK, once embedded, operated behind the scenes: monitoring installation sources, identifying user behavior, and triggering ads based on complex interaction flows – such as app launches, exits, and usage patterns.
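
As a rough illustration of this behind-the-scenes monitoring, the sketch below registers Application.ActivityLifecycleCallbacks to derive launch and exit signals and hands them to a trigger component. AdTriggerEngine is a hypothetical interface used only for this example; it is not the client's actual design.

```kotlin
import android.app.Activity
import android.app.Application
import android.os.Bundle

// Hypothetical trigger interface for the sketch below.
interface AdTriggerEngine {
    fun onAppLaunched()
    fun onAppExited()
}

// Illustrative sketch (not the client's code): an embedded SDK can observe
// app launches and exits by registering this class via
// Application.registerActivityLifecycleCallbacks(...).
class UsageTracker(private val triggers: AdTriggerEngine) :
    Application.ActivityLifecycleCallbacks {

    private var startedActivities = 0

    override fun onActivityStarted(activity: Activity) {
        if (startedActivities == 0) {
            // First visible activity -> the app came to the foreground.
            triggers.onAppLaunched()
        }
        startedActivities++
    }

    override fun onActivityStopped(activity: Activity) {
        startedActivities--
        if (startedActivities == 0) {
            // No visible activities left -> the app went to the background.
            triggers.onAppExited()
        }
    }

    // The remaining callbacks are not needed for this simple launch/exit signal.
    override fun onActivityCreated(activity: Activity, savedInstanceState: Bundle?) {}
    override fun onActivityResumed(activity: Activity) {}
    override fun onActivityPaused(activity: Activity) {}
    override fun onActivitySaveInstanceState(activity: Activity, outState: Bundle) {}
    override fun onActivityDestroyed(activity: Activity) {}
}
```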

Alongside the SDK, the client also developed a web-based Client Portal for partner studios, offering integration tools, analytics, and configuration options for how the SDK interacted with their apps. To support the growth of both the portal and SDK, the client needed a quality assurance partner who could work across both domains – supporting manual testing, automation, and infrastructure integration – all while maintaining agility in a fast-paced development cycle.

Challenges

From the beginning, the project posed several complex and highly specific challenges that went far beyond traditional QA tasks.

The SDK itself was technically intricate. Its behavior depended not only on internal logic but also on external environmental factors – like where the application was installed from. If it detected a pirate installation, it initiated background processes to fetch and deliver ad content dynamically, using tools like Kafka for event streaming and Datadog for monitoring performance and tracking user interactions. Testing this system required more than just device-level validation; it required behavioral understanding, log analysis, and a fine-tuned approach to mocking and simulating real-world data.

Further, the SDK needed to function flawlessly across a wide range of Android devices, screen sizes, and OS versions. Ensuring compatibility and stability on such a diverse landscape required rigorous regression testing and strategic test coverage planning.

The Client Portal, while less complex than the SDK, had no test automation at all – scripts and suites had to be built from scratch. Requirements were often inferred from historical development tickets, meaning our team needed to bridge gaps between what was documented and how the system actually worked.

As if that wasn’t enough, the shared STAGING environment, used across departments, often introduced instability. VPN constraints made test integration with Jenkins CI/CD and other systems more difficult. And the need to validate end-to-end interactions between the web portal and SDK across different data states and versions added further complexity.

Key Results

  • 60% – stable release process
  • 70% – reduction in production defects
  • 1000+ – active users


Our Approach

We approached the project as both generalist and specialist QA engineers – combining manual expertise with automation implementation to support the full spectrum of the client’s needs.

On the SDK side, we adopted a behavior-driven testing strategy, digging deep into the functional logic of the SDK. We focused on understanding event triggers, analyzing log data, and simulating edge cases such as mocked third-party responses from ad networks. In situations where ads were not triggered by user action but by internal events, we engineered test environments that allowed us to simulate those events on command, making the SDK’s behavior testable and repeatable. Our team regularly tested across a variety of Android versions and physical devices to ensure coverage and eliminate fragmentation issues.
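
As an example of the mocking technique described above, a local mock HTTP server lets the SDK receive canned ad network responses on demand. The sketch below uses OkHttp's MockWebServer with placeholder endpoints and payloads; it reflects the general approach under our assumptions rather than the client's exact harness.

```kotlin
import okhttp3.mockwebserver.MockResponse
import okhttp3.mockwebserver.MockWebServer

// Minimal sketch: stand up a local mock "ad network" so SDK behavior
// becomes repeatable in tests. Endpoint path and payload are placeholders.
fun main() {
    val server = MockWebServer()

    // Canned "ad fill" response the SDK under test receives instead of a
    // real ad network reply.
    server.enqueue(
        MockResponse()
            .setResponseCode(200)
            .setHeader("Content-Type", "application/json")
            .setBody("""{"adId":"test-ad-001","creativeUrl":"https://example.com/creative.png"}""")
    )
    // Queue an error as well, to exercise the SDK's fallback path.
    server.enqueue(MockResponse().setResponseCode(503))

    server.start()
    val mockAdEndpoint = server.url("/v1/ads").toString()
    // In a real test run, the SDK would be pointed at `mockAdEndpoint`
    // (for example via a debug configuration flag) before the scenario executes.
    println("Mock ad network listening at $mockAdEndpoint")

    server.shutdown()
}
```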

We monitored SDK performance using Datadog, reviewed event streams via Kafka, and worked closely with the developers to trace and isolate issues quickly during regression. These practices allowed us to catch subtle but critical defects before release and improve visibility into SDK health.
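
The sketch below shows the general shape of such a check: a small consumer that tails an SDK event topic during a regression run and flags records that look like failed ad deliveries. The broker address, topic name, and payload format are assumptions made for the example, not the client's real configuration.

```kotlin
import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer

// Illustrative QA-side consumer that tails an assumed SDK event topic and
// prints records that appear to be failed ad deliveries.
fun main() {
    val props = Properties().apply {
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
        put(ConsumerConfig.GROUP_ID_CONFIG, "qa-sdk-event-check")
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
        put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
    }

    KafkaConsumer<String, String>(props).use { consumer ->
        consumer.subscribe(listOf("sdk-ad-events"))
        // Poll briefly during the regression window and flag suspicious records.
        repeat(10) {
            val records = consumer.poll(Duration.ofSeconds(1))
            records.forEach { record ->
                if (record.value().contains("\"status\":\"failed\"")) {
                    println("Possible failed ad delivery: ${record.value()}")
                }
            }
        }
    }
}
```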

For the Client Portal, we implemented an automation framework using Selenium, tightly integrated into the client’s Jenkins-based CI/CD pipeline. Our system supported event-based triggers, daily scheduled runs, and smoke checks on production pushes. We also introduced a tiered testing model – running smoke tests first to catch obvious issues before proceeding to full regression.
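
A simplified sketch of what one of these smoke checks can look like, assuming JUnit 5 with the Selenium bindings used from Kotlin; the portal URL and element locators are placeholders rather than the client's real portal.

```kotlin
import org.junit.jupiter.api.AfterEach
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.BeforeEach
import org.junit.jupiter.api.Tag
import org.junit.jupiter.api.Test
import org.openqa.selenium.By
import org.openqa.selenium.chrome.ChromeDriver
import org.openqa.selenium.chrome.ChromeOptions

// Minimal smoke-tier check: fast, headless, and safe to run on every push.
class PortalSmokeTest {

    private lateinit var driver: ChromeDriver

    @BeforeEach
    fun setUp() {
        driver = ChromeDriver(ChromeOptions().addArguments("--headless=new"))
    }

    @AfterEach
    fun tearDown() {
        driver.quit()
    }

    @Tag("smoke")
    @Test
    fun `login page loads and exposes the sign-in form`() {
        driver.get("https://portal.example.com/login") // placeholder URL
        assertTrue(driver.findElements(By.name("username")).isNotEmpty())
        assertTrue(driver.findElements(By.name("password")).isNotEmpty())
    }
}
```

In a tiered pipeline of this kind, Jenkins can execute only the tests tagged "smoke" on each push and proceed to the full regression suite once they pass.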

Most importantly, our collaboration model emphasized constant communication and documentation. We conducted regular syncs with the client’s QA, development, and product teams. Our shared Confluence space outlined test strategies, known issues, and feature coverage. The QA process became more than just a checkpoint; it became a structured and transparent part of the release pipeline.

In addition to building a robust QA framework, we also had to establish a process for SDK certification, which previously didn’t exist. Given the SDK’s complexity and the high risk of production incidents affecting millions of end users, we introduced a structured pre-release validation cycle. This included sanity checks, regression passes, log analysis, and custom validations based on recent changes. Over time, this certification process became a formalized stage in the release pipeline, serving as a quality gate and giving the development and product teams confidence in each deployment.

To minimize risk and monitor performance in production, we worked with the client to implement controlled A/B testing strategies. Rather than releasing SDK updates to all partner apps at once, we helped define a gradual rollout process – deploying to small cohorts first, collecting performance data, and only then expanding to larger user groups. This phased approach allowed us to monitor real-world impact without exposing the entire ecosystem to potential issues. As part of this, we ensured that the QA team had access to telemetry, logs, and feedback from early users, allowing us to catch and respond to anomalies quickly before full rollout.
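
As a simplified illustration (not the client's actual rollout system), deterministic bucketing is one common way to decide which devices receive a new SDK build at a given rollout percentage:

```kotlin
// Sketch of deterministic cohort bucketing for gradual rollouts: each device
// hashes into a bucket from 0 to 99, and the new build is enabled only for
// devices below the current rollout percentage.
fun isInRolloutCohort(deviceId: String, rolloutPercent: Int): Boolean {
    require(rolloutPercent in 0..100)
    val bucket = ((deviceId.hashCode() % 100) + 100) % 100
    return bucket < rolloutPercent
}

fun main() {
    val devices = listOf("device-a", "device-b", "device-c", "device-d")
    // Start with a 5% cohort, then widen to 50% once telemetry looks healthy.
    listOf(5, 50).forEach { percent ->
        val enabled = devices.count { isInRolloutCohort(it, percent) }
        println("$percent% rollout -> $enabled of ${devices.size} sample devices enabled")
    }
}
```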

A crucial part of our approach involved pre-publication integration testing of the SDK within client applications before they were released to Google Play. Many of these apps – particularly mobile games – were large in size and complexity, often exceeding a gigabyte. This meant we had to go beyond basic SDK-level tests and validate how the SDK interacted with real application logic, UI flows, and performance constraints. We simulated user behavior, monitored event triggers, verified ad displays, and closely examined log outputs to ensure that integration didn’t interfere with app stability or user experience. This extra layer of QA helped catch integration-specific issues early, reducing the risk of post-release problems and building trust with development teams across multiple partner apps.
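
To give a flavor of such a pre-publication check, the sketch below uses UI Automator to launch a partner build that embeds the SDK and confirm it reaches the foreground without crashing. The package name is a placeholder, and the test is a minimal example rather than the full integration suite.

```kotlin
import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.uiautomator.By
import androidx.test.uiautomator.UiDevice
import androidx.test.uiautomator.Until
import org.junit.Assert.assertTrue
import org.junit.Test

// Minimal pre-publication integration check: the partner build (with the
// SDK embedded) must launch and render without crashing.
class SdkIntegrationSmokeTest {

    private val device: UiDevice =
        UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())

    @Test
    fun partnerAppLaunchesWithSdkEmbedded() {
        val partnerPackage = "com.example.partnergame" // hypothetical partner app

        device.pressHome()
        val context = InstrumentationRegistry.getInstrumentation().context
        val launchIntent = checkNotNull(
            context.packageManager.getLaunchIntentForPackage(partnerPackage)
        ) { "Partner build is not installed on the test device" }
        context.startActivity(launchIntent)

        // The app (with the SDK inside) must reach the foreground within 15 seconds.
        val appeared = device.wait(Until.hasObject(By.pkg(partnerPackage).depth(0)), 15_000)
        assertTrue("Partner app did not reach the foreground", appeared)
    }
}
```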

Outcome

Over three years of collaboration, our QA efforts yielded significant and lasting impact:

  • The platform saw a marked decrease in production issues, even as its user base scaled into the millions.
  • Our proactive regression strategy and deep SDK understanding reduced issue detection times and ensured that bugs were caught well before release.
  • By implementing automation and structured manual testing, we reduced release certification time, allowing for more agile and confident releases.
  • Our approach to mocking and simulating ad network responses brought predictability to previously unstable integration points, reducing failed ad delivery instances.
  • The transparency and responsiveness of our QA process improved trust with the development team, fostering a calm and controlled release environment.

Tech details

  • Industry: AdTech (Advertising Technology)
  • Product: Android SDK, Web Client Portal
  • Target Users: Mobile development studios
  • Frameworks: Selenium (Web), Manual SDK Testing
  • CI/CD: Jenkins
  • Test Environments: STAGING, DEV
  • Tools: TestRail, Slack, Jenkins, Email, HTML Reports, ClickHouse
  • Results: Fewer bugs, reduced test time, stable releases
