In February 2019, Facebook announced that the final migration to campaign budget optimization (CBO) would take place in September 2019. From then on, the budget needs to be set only at the campaign level.
Ad Set Level Budget
For example, you create an ad campaign with 2 ad sets in it, and each of them has 5 ads. Each ad set gets its own budget, delivers in full, and collects its own data history. Facebook pays attention to each ad set, but at the ad level impressions are distributed unequally: the most effective ads get more impressions. It looks like a kind of natural selection.
Campaign Level Budget
The budget is now shared by all ad sets. The system automatically distributes it in favor of the most effective ones. Data collected by each set is summarized at the campaign level for further optimization. “Natural selection” now takes place not only at the ad level but also among sets: the most effective sets keep running, while the less effective ones receive few impressions or stop delivering at all.
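As a rough illustration only (Facebook's actual allocation algorithm is not public), a campaign-level split that favors better-performing sets can be sketched like this; the ad set names and the "conversions per dollar" scores are invented for the example:

```python
# Toy model of CBO-style allocation: "effectiveness" is modeled as
# conversions per dollar spent so far, and the campaign budget is
# split proportionally to that score. This is a simplification, not
# Facebook's real mechanism.

def allocate_budget(campaign_budget, ad_sets):
    """ad_sets: dict of set name -> conversions per dollar (hypothetical)."""
    total_score = sum(ad_sets.values())
    if total_score == 0:
        # No signal yet: split evenly so every set gets a chance to learn.
        even = campaign_budget / len(ad_sets)
        return {name: even for name in ad_sets}
    return {name: campaign_budget * score / total_score
            for name, score in ad_sets.items()}

# Set A converts twice as well per dollar as set B, so it gets
# twice the budget.
split = allocate_budget(60, {"set_a": 0.10, "set_b": 0.05})
print(split)  # {'set_a': 40.0, 'set_b': 20.0}
```

The same proportional logic explains the "natural selection" effect: once a set's score decays toward zero, its share of the budget decays with it.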
CBO’s Impact On the Advertiser’s Work
We believe that Facebook is trying to simplify the advertiser's work by focusing on the following advantages:
✔ The automated process saves specialists time: you no longer need to shift budgets between ad sets manually;
✔ Avoiding audience overlap. If one ad set's audience overlaps with another's, the budget is spent through only one of these groups instead of both competing for the same people;
✔ Avoiding restarts of the learning phase. Campaign budget optimization does not re-trigger the learning phase when it redistributes the budget between sets;
✔ CBO allows you to find the most profitable opportunities for all ad sets;
✔ Summarizing data at the campaign level. For example, if one set gets 3 conversions and another gets 10, the advertiser sees the whole picture at the campaign level (13 conversions), and Facebook can work out how best to optimize the sets.
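To make the last point concrete, here is a small sketch of campaign-level summarizing; only the 3 and 10 conversions come from the example above, the spend figures are assumed for illustration:

```python
# Hypothetical per-set numbers: 3 conversions in one set, 10 in the
# other (as in the example above); spend amounts are invented.
ad_sets = {
    "set_a": {"spend": 20.0, "conversions": 3},
    "set_b": {"spend": 40.0, "conversions": 10},
}

# Campaign-level view: totals across all sets, and the blended
# cost per conversion the advertiser should actually judge by.
total_conversions = sum(s["conversions"] for s in ad_sets.values())
total_spend = sum(s["spend"] for s in ad_sets.values())
campaign_cpa = total_spend / total_conversions

print(total_conversions)       # 13
print(round(campaign_cpa, 2))  # 4.62
```

Judging either set by its own cost per conversion would miss this blended picture, which is exactly what Facebook optimizes toward under CBO.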
However, this update complicates testing. A test is objective only if both ad sets deliver in full, and CBO no longer guarantees that.
How to conduct testing under campaign budget optimization?
There are two options:
- Create as many campaigns as you need for different types of communications and targeting.
- Set spend limits at the ad set level.
Facebook indicates that spend limits are useful if you choose the lowest cost bid strategy without a bid cap and don't know the exact budget needed to reach the selected audience. In other cases, the network doesn't recommend them, since they limit the system's flexibility to optimize the budget efficiently.
However, spend limits also get in the way of objective testing.
Let's imagine that you set $60 at the campaign level and one of the ad sets spends $50 of it. If you set a $30 spend limit for each set, the budget gets spread out, but not necessarily to the more effective sets. You can set a limit to give the other sets a chance to work, but this is not objective testing, only a “redistribution” of attention.
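To see why capped budgets get "spread out" rather than optimized, here is a toy sketch of the $60 / $30-limit scenario above; the redistribution logic is our simplification, not Facebook's actual mechanism:

```python
# Start from the split the system "wants" (one set taking $50 of a
# $60 budget), cap each set at a per-set limit, and pass the excess
# to sets that still have headroom. Illustrative only.

def apply_spend_limits(desired, limit):
    """desired: dict of set -> intended spend; limit: per-set max spend."""
    capped = {name: min(spend, limit) for name, spend in desired.items()}
    excess = sum(desired.values()) - sum(capped.values())
    # Redistribute the excess across sets that are under their cap.
    while excess > 1e-9:
        open_sets = [n for n in capped if capped[n] < limit]
        if not open_sets:
            break  # Every set is at its cap; the remainder goes unspent.
        share = excess / len(open_sets)
        excess = 0
        for n in open_sets:
            add = min(share, limit - capped[n])
            capped[n] += add
            excess += share - add
    return capped

print(apply_spend_limits({"set_a": 50, "set_b": 10}, 30))
# {'set_a': 30, 'set_b': 30}
```

The stronger set is forced down from $50 to $30 and the weaker set is pushed up to $30: the money moves, but not because the second set earned it.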
How to analyze the results?
Facebook points out that you shouldn't judge the effectiveness of campaign budget optimization by the spend and the average cost per optimization event of each set. The network advises looking at the totals at the campaign level.
If an ad set is not delivering, Facebook recommends increasing your bid cap or target cost, changing targeting and/or ad creatives, or choosing another optimization event.
It's important to remember that a CBO campaign can contain no more than 70 ad sets.
Lookalike Audiences: Use Case From Median ads Team
As part of the promotion of an event, we added different saved audiences with detailed targeting to the same campaign. As a result, the first group worked well, the second one performed weaker, and the rest weren't delivered at all. Over time, when we had enough data, we created 3 lookalike audiences (0-1%, 1-2%, and 2-3%) and placed them in 3 new ad sets within the current ad campaign.
Because of the campaign's existing history, Facebook barely delivered ads in the sets with lookalike audiences, spending only 10-20% of the budget at the very beginning and cutting spend further every day.
We realized that these lookalike audiences could have good potential, so we turned them off in the current campaign and created a new campaign with the same 3 lookalike ad sets. The set with LAL 1-2% delivered and became more profitable than the most effective set in the first active campaign. This is a case from our experience, and everyone will have their own data history from ad campaigns. However, it is important to remember that if you have hypotheses to validate, it's better to create a new campaign and run a full test with its own optimization, as you used to do at the level of each ad set.
Check out how we create and work with lookalike audiences in the following article.
Use Case #2. Testing of Communications
Let's imagine that there are 3 different types of communications. The first offers a discount, the second special offers, and the third a free trial period. To test them, we create 3 separate campaigns, one per communication.
⦿ Imagine that we used one targeting option and one saved audience for testing. The audience was duplicated across the campaigns to test the different communications. Once we understood which communication worked best, we turned off the remaining campaigns to avoid audience overlap.
However, it is important to analyze the data in Delivery Insights. If the audience overlap is insignificant, we don't turn off any of the ad sets: we are dealing with different communications, and even within the same targeting different people can see your ad.
⦿ If we had created many ad sets in each campaign and different targeting options had produced results, we could not turn off the campaigns, since different audiences respond to different communications. However, if one communication's cost per result is better than the others, you can keep that campaign to get results at the best price.
⦿ If you use only one communication, put the ads in one set and create one campaign. Facebook will help you determine which ad copy and targeting are most effective and will distribute impressions and budget accordingly.
Since delivery is now optimized at the campaign level, the results of each ad set are combined. This lets the system quickly analyze the audience and stabilize results. However, if you have hypotheses for testing different audiences and communications, you need to create separate campaigns so that each test gets attention and impressions don't fall.
For more experienced advertisers, the migration to CBO will require more time spent on the structure of advertising campaigns. Specialists who have just started working with Facebook ads will get some help: Facebook will distribute the budget to the ad sets that work more efficiently on its own. However, if you create only one ad campaign and use only one approach, you cannot find the most profitable result without testing.
Subscribe to our Messenger bot and Telegram channel to receive the most useful content about advertising on social networks. If you want to improve your skills in online advertising, apply for the Median School courses.