89. The Science of Cold Emailing: A/B Testing for Better Results
“You can’t manage what you can’t measure.”
- Peter Drucker
Numbers have never been my strong point.
But I realised long ago that without tracking my performance, I had no idea where to improve.
Testing and measuring is the third and final step of my cold email approach (read the first two steps here and here).
Let’s dive in.
In today’s email:
7Ps
Testing the right things
Measuring the right things
THE BIG IDEA
7Ps
“Prior Preparation and Planning Prevents Piss Poor Performance”
- British Army Adage
The best way to get data on what works and what doesn’t is from your market.
That’s why speaking with clients and researching competitors is useful preparation for an outbound campaign.
I like running client surveys to understand my target market better.
I’ve used surveys to build ideal client profiles.
I’ve also used surveys to gather insights from key decision makers at my ideal clients, such as:
their biggest challenges
feedback on our product or service
where improvements can be made
why they chose (and have stuck with) us in the first place.
This information gives me a better understanding of the prospects I’m targeting.
I can test different messaging and tailor LITTA’s value prop to my target market.
Testing the right things
I have limited time to run A/B tests, so I focus on testing things with the highest potential upside.
Here’s a list of the things I test, ranked from most to least impactful:
Most impactful
The ideal clients I’m targeting
The key decision makers or influencers I’m targeting
The challenge/problem I’m highlighting
LITTA’s value prop
Data source (e.g. Apollo, Cognism, Glenigans)
Least impactful
Time of day/day of the week
Follow up frequency
Call to Action
Subject Line
Follow up/Bump emails
I start by A/B testing the highest-impact item (ideal clients) and work my way down the list, testing one factor at a time.
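For the technically minded, here’s a minimal sketch of that rule in Python. The campaign fields and values are hypothetical, not LITTA’s actual setup; the point is that a valid test changes exactly one factor relative to the control:

```python
# Hypothetical campaign config: each key is one testable factor.
CONTROL = {
    "icp": "construction",            # ideal client profile
    "persona": "operations director", # key decision maker
    "problem": "unreliable waste collection",
    "value_prop": "same-day clearance",
    "data_source": "Apollo",
}

def changed_factors(control: dict, variant: dict) -> list[str]:
    """List the factors where the variant differs from the control."""
    return [key for key in control if variant.get(key) != control[key]]

def validate_test(control: dict, variant: dict) -> None:
    """A valid A/B test changes exactly one factor."""
    diff = changed_factors(control, variant)
    if len(diff) != 1:
        raise ValueError(f"A test must change exactly one factor, got {diff}")

# Good: only the ideal client profile changes.
validate_test(CONTROL, {**CONTROL, "icp": "facilities management"})

# Bad: two factors change at once, so you can't tell which one moved the numbers.
# validate_test(CONTROL, {**CONTROL, "icp": "FM", "persona": "site manager"})
```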
Measuring the right things
“Outbound is a science. If you treat it that way you can figure it out.”
- Jed Mahrle
Scientists run experiments using a control group and a test group. In the test group, only one factor changes.
If the test group differs from the control group in too many ways, it is impossible to tell which factor affected the outcome.
I apply the same method to my A/B tests:
I pick 100 prospects to test my hypothesis on. I keep this number consistent for every test.
I pick a minimum period to run the test for: usually two weeks, sometimes four.
I test one thing and keep everything else the same. I start with the most impactful thing (see above).
I track the data. This is critical for success. (Here’s an example spreadsheet I use).
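The tracking step is just a table: one row per variant, a handful of counts per row. Here’s one way that spreadsheet might translate to code; the column names and example values are illustrative, not the actual template:

```python
from dataclasses import dataclass

@dataclass
class TestRow:
    """One row of the tracking sheet: a single variant in a single test."""
    test_factor: str  # the one thing this test changes, e.g. "icp"
    variant: str      # "control" or a short description of the change
    weeks: int        # minimum run length: usually 2, sometimes 4
    sent: int         # prospects contacted; held at 100 for every test
    opened: int = 0
    replied: int = 0
    booked: int = 0

# Two rows for one test: the control list vs. a new ideal client profile.
log = [
    TestRow(test_factor="icp", variant="control: construction", weeks=2, sent=100),
    TestRow(test_factor="icp", variant="test: facilities management", weeks=2, sent=100),
]
```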
Here are the cold email metrics I track:
Open rate → If emails aren’t getting opened, how can I expect a reply?
Reply rate → When the challenge you highlight and your value prop resonate with a prospect, expect a reply.
Booking rate → This is the most important metric. Booking rate = # of meetings booked / # of prospects contacted. A 1% booking rate is my baseline.
Key decision-makers in construction, FM and property management respond better to phone calls, so if I can book one meeting per 100 emails sent, that’s a good result.
If I don’t hit a 1% booking rate, I keep A/B testing until I do.
Once I’ve hit that magic number, I know a campaign is ready to scale.
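To make the maths concrete, here’s a short sketch that turns raw counts into the three rates and applies the 1% rule. The counts are invented for illustration:

```python
def campaign_metrics(sent: int, opened: int, replied: int, booked: int) -> dict:
    """Compute the three cold-email metrics from raw counts."""
    return {
        "open_rate": opened / sent,
        "reply_rate": replied / sent,
        "booking_rate": booked / sent,  # meetings booked / prospects contacted
    }

BASELINE = 0.01  # 1% booking rate: one meeting per 100 emails sent

# Invented counts: 100 prospects contacted, 45 opens, 6 replies, 2 bookings.
metrics = campaign_metrics(sent=100, opened=45, replied=6, booked=2)

if metrics["booking_rate"] >= BASELINE:
    print("Ready to scale:", metrics)   # 2% here, above the baseline
else:
    print("Keep A/B testing:", metrics)
```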
Thanks for reading!
Matt @ The Growth Lab