Chad Summerhill is an AdWords specialist who manages a multi-million-dollar AdWords spend. He’s also a frequent contributor to the WordStream blog, maintains his own blog (home to free PPC tools and in-depth PPC tutorials as well as blog content), and is generally one of the smartest guys in paid search.


One of the things I find really cool about Chad’s blog posts and videos – both on his own PPC Prospector blog and other places around the Web – is the depth of detail he goes into. You find a lot of blogs (in PPC as well as other disciplines) where the smartest people seem to talk more about theory and industry trends than actual day-to-day management tasks they’re doing to get results. Chad’s videos are great because they outline an actual process for things like mining search queries for campaign negatives.


One of our favorite such posts was his post on ad testing for statistical significance, in which he offers up a great spreadsheet to help determine the validity of an ad text test.


So we decided to ask Chad to stop by and give us some more information on his day-to-day process for optimizing ad text on a large account.


First off: awesome spreadsheet. I find it very useful. The subject of statistical significance in ad testing is a really interesting one for me, because I see PPC ad copywriting as a great example of traditional marketing meeting a more Web- and data-centric means of reaching customers. In other words: it’s ad copy, it’s ad creative, but tests and refinements tend to be data-driven. So my question is this: say I give you two PPC account managers who are going to focus on nothing but ad text. One is very creative and great at writing copy that resonates with searchers, but very weak on data analysis. The other is a data wonk and a religious tester, but not much of a copywriter. If we leave them both to their own devices and come back in 12 months, whose ad text would you bet on to be more effective?


Chad: Easy, I would bet on the great copywriter. Even someone very weak at data analysis can find a best-practices guide for ad testing, and that should get him or her pretty far. With the hard-core data guy, you run the risk of optimizing sub-optimal copy. It’s much easier to give the great copywriter the decision support he needs with some simple tools; it’s much harder to teach someone to be a great copywriter.


What advice would you have for the first persona to help them create a process for optimization that would be manageable for them given their weakness in data analysis?


Chad: Don’t let Google pick a winner based on CTR, even though it’s very easy to do in the AdWords interface. Focus on total conversions. Test your ad groups with significant click volume first, and use what you learn from those tests to help create ads for your lower-volume ad groups. Make sure you set your ads to rotate more evenly in your campaign settings. Remember that with every test you are just as likely, if not more likely, to lose as to win – there is a cost associated with testing, and you want to lose fast when you have a loser. Beyond these and a handful of other best practices that are easy to find online, use one of the many free statistical significance calculators when making your decisions. Nothing is 100% certain, but you want to make the best decision possible.
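For readers curious what those free significance calculators are doing under the hood, here’s a minimal sketch of a two-proportion z-test comparing the conversion rates of two ads. The click and conversion counts are hypothetical, purely for illustration:

```python
import math

def z_test(conversions_a, clicks_a, conversions_b, clicks_b):
    """Return the z-score and two-tailed p-value for the difference
    in conversion rates between two ads."""
    p_a = conversions_a / clicks_a
    p_b = conversions_b / clicks_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (clicks_a + clicks_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: ad A has 120 conversions on 2,400 clicks,
# ad B has 90 conversions on 2,350 clicks
z, p = z_test(conversions_a=120, clicks_a=2400, conversions_b=90, clicks_b=2350)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Only declare a winner when the p-value is small (0.05 is a common threshold); until then, the observed difference may just be noise, which is Chad’s point about not pausing ads too early.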


What advice would you have for the second persona, given that they aren’t a strong copywriter (besides signing up for Boost and letting us do the writing for them :))?


Chad: Remember that you are answering a searcher’s question, so you need to align your ad copy with the searcher’s intent. This could be as easy as putting the search query in the headline. Don’t write your ads from a blank canvas; use the Google Ad Preview Tool to see what the competition is doing. You might copy them, differentiate, etc., but at the very least you should get some good ideas. Use strong calls to action and try to include your unique value propositions or benefits. And your landing page needs to follow through on the promise of the ad.


All right, now I think it would be really helpful for our audience to learn more about your process for optimizing ads on a very large-scale, profitable PPC campaign. First off: how often are you jumping into the account and tweaking ad text, setting up new ad tests, etc.?


Chad: Not enough. I wish I could say I had some kind of disciplined, scheduled approach, but I don’t. I’ve been testing ads for the same large account for four years, and these days I test when I come across a great idea or when performance starts to weaken for an existing ad. Sometimes other activities, like breaking up larger ad groups into tighter groups, create an opportunity for an ad test. But even with a fairly undisciplined schedule, I’ve seen lots of success with ad testing. When I first took over our account, I soon discovered that the previous analyst had several ad tests running that were way overdue for decisions. I grew our conversions by 70% year-over-year just by turning off the losing ads. So, regardless of what spurs me into starting a test, I keep a close eye on it.


When you do dig in and start to optimize ads, how do you decide where to go first? Is it a function of other diagnoses (i.e., you see that a group’s CTR has dropped and start to drill down), or do you have a process for identifying which ads need help (i.e., do you look for ads that have high volume and low CTR)?


Chad: You need context to make decisions like this, so I compare a given ad’s CTR, conversion rate, etc. against the campaign’s averages.


Once you decide where to work on optimizing ads, how do you decide what to test? Do you have a standard formula or is it entirely variable based on the Ad Group, keywords, landing page, etc.?


Chad: Always test search query/ad text alignment, value propositions, calls to action, capitalization, and display URL. You also need to pay attention to how the ad will look on the screen.


In choosing ads to optimize and thinking about what to change, how do you decide to “qualify” more in an ad? Is there a statistical threshold? (By qualify I just mean rewrite the ad to better weed out irrelevant traffic that may be clicking but not converting.) If you do make this decision, is it something you’re faced with frequently?


Chad: Qualifying an ad comes as a natural result of analyzing performance data and being faced with a decision to either lower a bid or improve conversion rates. I would rather bid higher on a qualifying ad than lower the bid on a higher-CTR ad that doesn’t turn into profit. This hurts Quality Score, but it makes profit. I don’t come across this decision nearly as much as I used to; I think you have to make these types of decisions more when you first start a campaign in a less-than-targeted space, trying to increase your reach.
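The trade-off Chad describes can be made concrete with a back-of-the-envelope profit comparison. All the numbers below are hypothetical, but they show how a lower-CTR “qualifying” ad can out-earn a high-CTR ad whose clicks don’t convert:

```python
def profit_per_impression(ctr, conv_rate, value_per_conversion, cpc):
    """Expected profit each time the ad is shown."""
    profit_per_click = conv_rate * value_per_conversion - cpc
    return ctr * profit_per_click

# High-CTR ad: lots of clicks, but few of them are qualified buyers
broad = profit_per_impression(ctr=0.06, conv_rate=0.01,
                              value_per_conversion=50, cpc=1.00)

# Qualifying ad: fewer clicks at a slightly higher bid, but they convert
qualified = profit_per_impression(ctr=0.03, conv_rate=0.05,
                                  value_per_conversion=50, cpc=1.20)

print(f"broad ad:      {broad:+.4f} per impression")
print(f"qualified ad:  {qualified:+.4f} per impression")
```

In this made-up scenario the broad ad loses money on every impression despite double the CTR, while the qualifying ad is profitable even at the higher CPC — which is why Chad accepts the Quality Score hit.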


You guys have a very big campaign: are you making use of ad text templates? If so, can you share your basic approach to creating an ad text template (we don’t need Upack’s actual templates, obviously, but maybe just your approach)? Also, do you have any best practices around implementation (i.e., don’t use templates on your highest-volume groups, use templates liberally as you launch a campaign but then go back and test, etc.)?


Chad: I use winning ads from my high-volume ad groups to create generic ads that can be used or modified for my other ad groups. I don’t have a formal process, though.


How much of a priority is ad text optimization for you? If you could quantify it as a percentage of your optimization efforts roughly what do you think that would look like (5% of your time? 10%?) and how does that compare to activities like managing bids, testing different landing pages, etc.?


Chad: Honestly, about 10%, when it probably needs to be closer to 30%. I end up spending a lot of time drowning in the data: managing bids, analyzing the effects of actions and events, etc.


How does your process for ads on the Content Network (particularly display ads) differ from that of the search network, if at all?


Chad: I find that the ads that work well for us on the Content Network are similar to the ads that work in our brand campaign, which means I’m not nearly as focused on writing ads that align with the keywords in my Content Network ad groups.  I end up focusing more on broader unique value propositions and differentiators.


Can you think of one ad text tweak that was just way more successful than you thought it would be?


Chad: Adding sitelinks to my brand campaigns improved CTR by 40% without hurting my conversion rates.


Can you give us an ad text “secret” that you find works for you? Maybe it’s not an actual secret, but something that’s not a commonly known best-practice that you find consistently improves results?


Chad: I tend to pause losing ads instead of deleting them; otherwise I’m doomed to repeat the past.


Thanks for the great answers, Chad!