At Service Allies, we don’t have a “set it and forget it” approach when it comes to running Meta ads for our clients. We have an established process for monitoring and improving our client results.
After 7 years of hands-on experience personally working on the ad campaigns of hundreds of different contractors, I've put together my top 5 tips for managing ad campaigns.
5 Tips on Meta Ads Management
1. Give things time to optimize

When you make changes to an ad, or launch a new campaign, the algorithm gets to work. First, it looks at what's in your ad and shows it to people it thinks are likely to respond. Then it uses data about the users who actually respond to build an avatar of your ideal customer, and it tries to show the ad to more people like that.
All of that takes time, which is why it's important to give your ad campaigns a few days - if not 1-2 weeks - before making changes. It can be scary when your campaign launches and doesn't get leads immediately, or gets very few. But you need to give the algorithm time to figure it out.
2. Stay on top of campaigns with regular check-ins

When you give Meta a budget for your campaigns, it will spend the budget whether or not you get results. That’s why regularly monitoring your results, and making adjustments, is so critical.
To accomplish this at Service Allies, we keep notes in a live Google Doc. These notes include when we checked in, what we observed, and what action, if any, we took on the campaigns. You can easily use a similar system to track your own campaigns.
I recommend recording spend, lead volume, and lead cost in 7-day increments. As you make changes, you'll be able to see how those numbers are affected.
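If you like working in code rather than a spreadsheet, the weekly tracking above boils down to simple arithmetic. Here's a minimal Python sketch - the weekly numbers are made up for illustration - that turns spend and lead counts into the cost-per-lead figures you'd record at each check-in:

```python
# Hypothetical weekly check-in records: (week label, ad spend in $, leads received)
weekly_checkins = [
    ("Week 1", 700, 10),
    ("Week 2", 700, 14),
    ("Week 3", 700, 8),
]

def cost_per_lead(spend, leads):
    """Cost per lead for one 7-day increment; None if no leads came in."""
    return round(spend / leads, 2) if leads else None

for week, spend, leads in weekly_checkins:
    cpl = cost_per_lead(spend, leads)
    print(f"{week}: spent ${spend}, got {leads} leads, ${cpl} per lead")
```

Seeing the same three columns week after week makes it obvious when a change you made moved the numbers.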
3. Split test, choose a winner, repeat

Split testing is absolutely crucial to developing ad campaigns that make you money. That's because variations in preview text, headlines, and images can completely make or break your results. Split testing is the only way to find out what's going to work in your local market.
Split testing (also called A/B testing) is simply the process of testing two variations of a campaign for a period of time and seeing what works best to accomplish a specific result. You can test things like:
- Different types of offers, such as discounts, tune-ups, financing, etc.
- Using real, authentic images vs. graphic designs to promote your offer
- Variations of the preview text or headline
- Variations in the lead form copy
I recommend always having some type of test running - of any two variations, one will always work better, even if it's only a marginal difference.
So the process for running tests over time might look something like this:
- Start with testing two different campaigns with their own offer; one for an HVAC Tune-Up, one for a 0% APR Offer
- After 2 months, you see that the tune-up campaign is getting better-quality leads: your average ticket from a tune-up lead is higher than from a financing lead. So you put all your budget into the tune-up campaign, split between two ad sets; ad set A has authentic pictures of your techs working on equipment, ad set B has graphic designs that feature the benefits of the tune-up.
- After 3 more weeks, you see that the authentic-pictures ad set is getting leads for $50 each, and the graphic-designs ad set is getting leads for $35 each. You turn off the authentic-pictures ad set. Now you're only running the graphic designs, and this time you split test different preview texts. Preview text A says: "👉 Don't wait until the heat of summer hits to realize your AC is in bad shape - grab your $89 AC Tune-Up now ($330 value!)." Preview text B says: "👉 Been putting off your AC maintenance? Save money and finally get it done with an $89 Tune-Up voucher ($330 worth of service work!)."
- After 2 more months, lead cost stays very close between the two preview texts: A is at $38 per lead and B is at $36 per lead. However, when you look at the service work and equipment sold, you see that your average revenue per lead is $700 on preview text A and $1,200 on preview text B.
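The deciding metric in that last step isn't lead cost - it's how much revenue each ad dollar brings back. This quick Python sketch (all figures come from the hypothetical scenario above, not real campaign data) shows why preview text B wins even though its lead cost is nearly identical:

```python
# Hypothetical split-test results from the scenario above.
variants = {
    "Preview text A": {"cost_per_lead": 38, "revenue_per_lead": 700},
    "Preview text B": {"cost_per_lead": 36, "revenue_per_lead": 1200},
}

def return_on_ad_spend(v):
    """Revenue generated per dollar of ad spend."""
    return v["revenue_per_lead"] / v["cost_per_lead"]

for name, v in variants.items():
    print(f"{name}: ${return_on_ad_spend(v):.2f} back per $1 spent")

winner = max(variants, key=lambda name: return_on_ad_spend(variants[name]))
print("Winner:", winner)
```

Judged on lead cost alone the test looks like a tie; judged on revenue per ad dollar, B returns nearly twice as much.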
You may be tempted to just create one "good" version of your campaign, turn it on, and hope it works. However, we've seen a different offer be the difference between leads that cost $200 each and leads that cost $20 each. We've seen a different headline be the difference between $100 leads and $35 leads.
An important thing to note: when I say "split test," I'm not just talking about comparing lead costs - also compare close rates and average lead value. We've seen small differences there have a huge impact.
Results may be very different from what you expect. The data will tell the truth. Split test!
4. Campaign resets
You may notice that your campaigns perform well for a time, then suddenly the lead cost spikes up. This usually happens because the algorithm narrowed down your target audience too tightly.
As the algorithm figures out the ideal user avatar to show your ads to, it may narrow down your audience. It may end up trying to show your ads over and over to something like 1% of your original audience size.
Now, if you're running ads to 10 million people, that’s fine. But most local HVAC companies are working with an audience size closer to 500,000. And if your ads target the same 5,000 people over and over, your pool of prospects will dry up fast.
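To make the saturation problem concrete, here's a small Python sketch using assumed numbers (the 500,000 audience and 1% narrowing are from the example above; the daily reach figure is hypothetical):

```python
# Hypothetical numbers: the algorithm has narrowed a 500,000-person
# audience down to 1%, and the campaign reaches 500 unique people per day.
audience_size = 500_000
narrowed_pool = int(audience_size * 0.01)  # 5,000 people
daily_reach = 500                          # assumed daily unique reach

days_to_saturate = narrowed_pool / daily_reach
print(f"Narrowed pool: {narrowed_pool} people")
print(f"Everyone in the pool has seen the ad within ~{days_to_saturate:.0f} days")
```

After that point the same people keep seeing the same ad, which is exactly when lead costs tend to spike.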
In that case, all you have to do is “reset” your campaign. Duplicate it, launch the new copy, and turn off the old one. The algorithm will have to learn again, and it’s unlikely that it will narrow down to the exact same group of people.
Lead costs that rise slowly over time are probably due to ad fatigue, or other outside factors like weather. But when lead costs spike suddenly, it's likely you just need to reset your campaign.
5. Be mindful of outside influences
Weather, holidays, back-to-school season, and other outside events will always have some effect on lead cost and volume. Weather plays an especially important role when you're advertising for HVAC.
When you see dips in your campaign performance, it’s important to be aware of outside events that may be having an impact; that way you can judge whether the slowdown is temporary, or if you need to make some changes to your campaigns.
Conclusion & Further Reading
Managing ads is an ongoing process of testing, evaluating results, and making adjustments over time. But keep in mind that it’s an art as well as a science. There aren’t right or wrong answers, and sometimes you may have to make decisions by gut feel.
For example, you may wonder what variables you should split test. Should you split test preview texts first, or split test graphics? And if you’re going to split test preview texts - what kind of differences are worth testing?
What helps is to look at previous results - what worked and what didn't. Look not only at the quantitative side (how many leads you got) but also at the qualitative side (the reasons why one thing worked better than another).
For example, in our preview text split test from earlier, the one that said "👉 Been putting off your AC maintenance? Save money and finally get it done with an $89 Tune-Up voucher ($330 worth of service work!)" performed better in that scenario.
Now, it's a made-up scenario of course, but if this had happened in real life I would have theorized about why it worked better. My theory probably would have been that this preview text worked better because it spoke to an audience that was already thinking about fixing up their unit - instead of trying to convince an audience that's indifferent.
Reviewing the data and theorizing helps you gain a better understanding of how things are working, and helps you make better and better decisions over time.
If you want to gain a better understanding of how to run Meta ads campaigns effectively, take a look at these three articles:
- https://www.serviceallies.com/blog-posts/writing-facebook-lead-ads-that-make-you-money
- https://www.serviceallies.com/blog-posts/7-proven-ways-to-capture-attention-with-meta-ads
- https://www.serviceallies.com/blog-posts/how-to-multiply-lead-volume-from-meta-ads
Thanks for reading, and good luck with your Meta campaigns!