By Hafiz Dhanani

How Aggregate Data Hides Your Most Profitable Insights

When I was the first growth hire at Rocket Doctor, we ran an experiment that highlights some of the subtler nuances of A/B testing.

The results looked clear-cut at first — we had a winner with statistical significance.

But when we dug deeper, we discovered something far more valuable than a simple conversion rate improvement.

Here's what happened, and why it matters for anyone running experiments.


The Setup

We were spending heavily on Google Ads; it was essentially our entire marketing budget. Any improvement in Cost Per Lead meant we could acquire more patients and extend our runway.
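To make the stakes concrete, here's a back-of-the-envelope sketch. The numbers below are invented purely for illustration, not our actual spend, but they show why even a modest conversion lift compounds into a meaningfully cheaper lead at the same budget:

```python
# Hypothetical figures for illustration only (not Rocket Doctor's
# actual budget or traffic): how a conversion-rate lift lowers CPL.
monthly_spend = 50_000          # ad spend in dollars (assumed)
clicks = 10_000                 # paid clicks per month (assumed)
conversion_rate = 0.05          # form completions per click (assumed)

leads = clicks * conversion_rate
print(f"Baseline CPL: ${monthly_spend / leads:.0f}")        # $100

# A 20% relative lift in conversion rate, same spend, same clicks:
lifted_leads = clicks * conversion_rate * 1.20
print(f"Lifted CPL:   ${monthly_spend / lifted_leads:.0f}")  # ~$83
```

Same dollars in, more patients out. That's the whole motivation for testing the form.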

Since marketing owned the signup forms post-click, we could iterate quickly without engineering resources.

Our experiment's core hypothesis was that users would respond better to seeing fewer fields at once.

More formally:

“By redistributing the patient intake fields from a single long page into a sequential, multi-step process, we will reduce the user's perceived cognitive load and visual friction. This change is expected to result in a statistically significant increase in form completion rates (conversion rate) and a subsequent decrease in Cost Per Lead.”

We decided to test two form variations:

Version A (Control): A single-page form collecting all patient information at once.

Version B (Experimental): A 3-part form that broke the same fields across multiple screens, asking patients to click "next" between each section.

[Figure: the single-page (A) and multi-step (B) form types we A/B tested]
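For anyone who wants the mechanics behind "statistical significance" in a test like this, here's a minimal sketch using a standard two-proportion z-test, one common way to compare conversion rates between two variants. The counts below are hypothetical, not our real data; plug in your own session and lead totals:

```python
# Minimal two-proportion z-test sketch for comparing form variants.
# Counts are invented for illustration, NOT the actual experiment data.
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did variant B convert differently than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical split: ~11,600 sessions per arm
p_a, p_b, z, p = two_proportion_z_test(conv_a=580, n_a=11_600,
                                       conv_b=700, n_b=11_600)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z={z:.2f}  p={p:.4f}")
```

If the p-value lands below your chosen threshold (conventionally 0.05), you'd call the difference statistically significant. Keep that mechanism in mind; it matters for what comes next.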

The Initial Results

After 71 days and 23,273 sessions, the numbers came in: