Now that responsive search ads (RSAs) have replaced expanded text ads (ETAs) in Google Ads, it may be time to rethink your strategy for optimizing your ads.
Optimizing RSAs takes a fundamentally different approach from what most advertisers have been doing for years.
While you can still use a similar approach to deciding which text variants to test, the way you conduct tests that lead to statistically significant results has changed.
RSA testing is different from ETA testing
Ad testing used to consist of A/B experiments in which multiple ad variations competed against each other.
After enough data had accumulated for each competing ad, a winner could be picked by analyzing the right metrics.
A common metric for determining winning ads combines conversion rate and click-through rate into “conversions per impression” (conv/imp).
Once enough data has accumulated to reach statistical significance, the ad with the best ratio can be declared the winner.
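As a sketch of that old-style comparison, here is how conv/imp and a simple significance check could be computed. The numbers are hypothetical, and the two-proportion z-test is just one common way to test significance, not a method prescribed by Google:

```python
import math

def conv_per_imp(conversions, impressions):
    """Conversions per impression, i.e. conversion rate x click-through rate."""
    return conversions / impressions

def z_test(conv_a, imp_a, conv_b, imp_b):
    """Two-proportion z-test comparing conv/imp for two competing ads."""
    p_a = conv_a / imp_a
    p_b = conv_b / imp_b
    p_pool = (conv_a + conv_b) / (imp_a + imp_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imp_a + 1 / imp_b))
    return (p_a - p_b) / se

# Hypothetical data: Ad A vs. Ad B over the same period
z = z_test(conv_a=120, imp_a=50_000, conv_b=90, imp_b=50_000)
print(round(z, 2))  # |z| above ~1.96 is significant at the 95% level
```

With these made-up numbers, Ad A’s conv/imp (0.24%) beats Ad B’s (0.18%) with a z-score just above the 95% threshold, so the old workflow would declare Ad A the winner.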
This technique of finding winning ads no longer works for three reasons.
Let’s see what these are.
Reason 1: Only 3 RSAs can be tested per ad group
In the era of ETA testing, advertisers could expand their A/B tests into A/B/C/D/… tests and keep adding challengers to their experiments until they hit the limit of 50 ads per ad group.
While I’ve never seen an advertiser running 50 ads simultaneously in an experiment, I’ve seen plenty of advertisers testing 5 or 6 ads at the same time.
But Google now limits ad groups to a maximum of three RSAs, and that changes how ad testing works.
Reason 2: You don’t get full metrics for your ad mix
Remember that each RSA can have up to 15 headlines and up to 4 descriptions, so even a single RSA can now be responsible for generating 43,680 variants.
That’s far more than the 50 ETA variants we were allowed to test in the past.
So when a user sees an RSA, only a subset of the advertiser-submitted headlines and descriptions is actually displayed in the ad.
What’s more, the specific titles and descriptions displayed vary with the auction.
When you compare the performance of two RSAs, you are actually comparing Ad A’s 43,680 possibilities against Ad B’s 43,680 possibilities.
This means that even if Ad A comes out the winner, there are still many uncontrolled variables in your experiment that can confound any results you find.
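One way to arrive at the 43,680 figure is to count ordered selections of assets: a served ad shows three headlines (order matters) and one or two descriptions. This is a sketch of the combinatorics, assuming that particular layout:

```python
from math import perm

# Ordered choices of 3 headlines out of 15: 15 x 14 x 13 = 2,730
headline_layouts = perm(15, 3)

# Descriptions shown: either 2 of 4 (ordered) or just 1 of 4
description_layouts = perm(4, 2) + perm(4, 1)  # 12 + 4 = 16

print(headline_layouts * description_layouts)  # 43680
```

Each of those 43,680 outcomes is a distinct ad a user could see, which is why two RSAs cannot be compared the way two fixed ETAs could.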
For more useful data, you can check the combinations report, which shows the exact headline-and-description combinations served for each ad.
The problem with this data is that Google only shares impressions.
To pick winning ads, we need click-through rate and conversion rate, two metrics that are not available from Google at this granular level.
Reason 3: Ad group impressions are now as dependent on ads as they are on keywords
But perhaps the most surprising reason ad testing methods need to evolve is that the old method was built on the assumption that impressions depend only on an ad group’s keywords.
RSAs challenge this assumption: ad group impressions can now depend on the ads as much as on the keywords.
In Optmyzr’s May 2022 RSA study, we found that ad groups with RSAs received 2.1 times more impressions than ad groups with only ETAs.
Whether the significant increase in impressions for ad groups using RSAs is due to improved ad position and Quality Score, or because Google has established a preference for this ad type, the end result is the same.
The systems we advertise in favor RSAs, especially those that contain the largest number of assets and use as little pinning as possible.
So modern ad optimization must consider not only conversions per impression, but also the number of impressions each ad can deliver.
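A toy calculation with hypothetical numbers shows why impression volume now matters: an ad with a better conv/imp ratio can still lose on total conversions if it earns far fewer impressions.

```python
# Hypothetical ads: (impressions the ad can capture, conversions per impression)
ads = {
    "heavily_pinned_rsa": (10_000, 0.0030),  # better conv/imp, fewer impressions
    "unpinned_rsa": (25_000, 0.0022),        # lower conv/imp, far more impressions
}

# Expected conversions = impressions x conv/imp
for name, (imps, cpi) in ads.items():
    print(name, round(imps * cpi, 1))
# heavily_pinned_rsa 30.0
# unpinned_rsa 55.0
```

In this made-up example, the unpinned RSA would drive nearly twice the conversions despite the worse ratio, which is exactly the trade-off the old conv/imp-only method could not see.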
A/B asset testing with ad variations
Fortunately, Google has taken into account the issues RSAs introduced for ad testing and has updated ad variations, a subset of its Experiments tool for optimizing ads.
Instead of creating multiple RSAs, these experiments operate on assets and allow advertisers to test three types of changes: pinning assets, adding assets, and replacing assets.
You will find all options in the left menu of Experiments.
Test pinning
Pinning is a way to tell Google what text should always appear in certain parts of an ad.
The simplest form of pinning tells Google to display a specific piece of text in a specific position. A common use is to always show branding in headline position 1.
A more advanced implementation is to pin multiple pieces of text to the same position.
Of course, an ad can only display one pinned text in a given position at any one time, so this is a way to balance the benefits of advertiser control with those of dynamically generated ads.
A common use is to test three variations of a brand message by pinning all three to headline position 1.
The most extreme form of pinning is to create what some call a “fake ETA” by pinning text to every position of the RSA. Google does not recommend this as it defeats the purpose of RSA.
In Optmyzr’s RSA research, we also found that this type of pinning can significantly reduce the number of impressions an ad group can get.
But to our surprise, we also found that fake ETAs had higher click-through and conversion rates than pure RSAs.
One theory is that advertisers who have spent years perfecting their ads using ETA optimization techniques already have ads so good that machine learning may be of little benefit.
To start testing pins, open ad variations, look for the option to update text, then choose “Pin” as the action to perform.
You can then set rules for which headlines and descriptions are pinned to which positions.
For example, you could say that any headline that includes your brand name should be pinned to headline position 1.
One limitation is that you cannot create an ad variation experiment to test pinning in multiple locations at the same time.
Test adding assets
Another experiment available with ad variations tests what happens if certain assets are added or removed.
This type of test is great for larger changes, for example, to see what happens if you include a special offer, a different unique value proposition, or a different call to action.
You can also use it to test how ad customizers affect your performance.
Some of the ad customizers available in RSA include location insertion, countdowns, and business data.
Test replacement assets
The third and final type of test supported in ad variations is to see what happens when an asset is replaced.
This type of experiment helps test more subtle changes.
For example, what would be the impact of saying “10% off” instead of “Save 10% today”?
Both make the same offer, but express it differently.
Ad variation experiments automatically measure the results for you.
For example, here you can see the results of the tests we ran with pinning.
Statistically significant results are marked with an asterisk.
When you hover over a statistic, more details are displayed explaining the confidence level of the experiment.
From there, it takes just one click to apply the winning variation to your RSAs.
It’s important to note that these ad variation tests are designed to be done at the campaign level or higher (across campaigns).
Currently, it is not possible to run ad tests for individual RSAs or individual ad groups. Google says they are aware of this limitation and are working hard to find a solution.
With ad formats changing in Google, it’s time to change the way we conduct ad testing.
Ad variations is an easy-to-use tool built directly into Google Ads for creating experiments that apply to assets rather than entire ads, and it even lets you test pinning.
Optmyzr’s recent RSA research shows that impressions now depend as much on having good ads as on having good keywords, so the modern way to optimize your PPC ads is to strive for the right mix of not only strong CTR and conversion rates, but also a large volume of high-quality impressions.