After a long hiatus, this week I thought it was about time that I came up with another blog post. Whilst scratching around for ideas, I noticed the perennial debate about automated bid management had sprung up again over on Search Engine Land.
Guest columnist Nic Abramovic posted a rant about automated bidding tools. The article is fairly uninformed and in places insulting, but I suppose he’s entitled to his opinion. Clearly he’s had a bad experience at some point with automated bidding tools.
I don’t know if Nic was taking aim at any tool in particular, but I feel the need to counter a few of his points, at least from an Efficient Frontier perspective. He does bang on about rules-based systems, which are widely acknowledged to be inefficient. However, he doesn’t attack them for their inefficiency, but rather for perceived shortcomings that could affect any system:
“Most rules-based bidding can only accept a limited amount of data (no matter what search marketing agencies may sell you on) – for example: 7 day, 30 day and lifetime “snapshots” of how your keywords are progressing.”
Historical data is essential, and the more the better. Recency techniques allow EF to use all of it, while still reacting quickly to changes in the keyword marketplace and in conversion rates.
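To make the idea concrete, here is a minimal sketch of recency weighting. This is purely illustrative and not EF’s actual model: the exponential half-life, the function name and the data shape are all my own assumptions. The point is that every day of history contributes to the estimate, but recent days count for more, so the estimate tracks a shift in the market.

```python
def recency_weighted_rate(daily_clicks, daily_conversions, half_life_days=14):
    """Estimate a conversion rate from the full history, weighting recent days more.

    daily_clicks / daily_conversions are lists ordered oldest -> newest.
    half_life_days is a hypothetical tuning parameter: a day this old
    carries half the weight of today.
    """
    decay = 0.5 ** (1.0 / half_life_days)  # per-day decay factor
    weighted_clicks = 0.0
    weighted_convs = 0.0
    n = len(daily_clicks)
    for i, (clicks, convs) in enumerate(zip(daily_clicks, daily_conversions)):
        age = n - 1 - i        # days ago (0 = most recent day)
        weight = decay ** age  # older days count for less, but never zero
        weighted_clicks += weight * clicks
        weighted_convs += weight * convs
    return weighted_convs / weighted_clicks if weighted_clicks else 0.0
```

With stable data the weighting changes nothing, but after a recent jump in conversion rate the weighted estimate moves above the naive lifetime average, which is exactly the behaviour a 7/30-day “snapshot” system approximates crudely.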
“[Agency leaders require] a PhD from a school such as Stanford and anyone who actually knows what they are doing is working at Google and not at a search marketing agency (sorry, a PhD from a state school does not necessarily qualify as “World Class”).”
Fortunately EF’s founder Anil Kamath does have a PhD from Stanford (on top of his MSc). As one of the comments points out, there are many excellent universities, both private and state funded, such as Michigan. That’s good news for EF’s Sid Shah, who got his PhD from there.
“Another area to watch would be when you have different cost-per-acquisitions for different products, campaigns or keywords. If you are selling various products, you might have specific margins and be able to spend up to X amount, depending on the product purchased. Rules-based systems wouldn’t be able to handle this because they are based on CPA targets.”
CPA targets are just one way of running a search campaign. Chasing ROI, margin or net profit means recognising that not all conversions are equal. Multi-metric optimisation is a basic requirement, and it’s something we’ve done at EF since day one.
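The difference from a flat CPA target can be sketched in a few lines. This is a hypothetical, simplified illustration (the product mix, margins and ROI target below are invented, and real systems model far more): the bid is derived from the expected margin of a click, so products with different margins naturally get different bids from the same logic.

```python
def value_based_bid(conv_rate, product_mix, margins, target_roi=1.0):
    """Max CPC such that expected spend per click <= expected margin / target ROI.

    conv_rate   : probability that a click converts
    product_mix : {product: share of conversions}, shares summing to 1
    margins     : {product: margin per sale}
    target_roi  : required margin returned per unit of spend
    """
    # Average margin of a conversion, weighted by what actually sells.
    expected_margin = sum(share * margins[p] for p, share in product_mix.items())
    # Expected value of a click, discounted by the required return.
    return conv_rate * expected_margin / target_roi

# A keyword converting at 2%, selling mostly low-margin widgets
# plus some high-margin gadgets (all figures invented):
bid = value_based_bid(
    conv_rate=0.02,
    product_mix={"widget": 0.7, "gadget": 0.3},
    margins={"widget": 40.0, "gadget": 100.0},
)
```

A rules-based system keyed to a single CPA number has no natural place to put the per-product margins; here they flow straight into the bid.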
Fortunately, I wasn’t the only one who found the article odd. Frank Watson at Search Engine Watch also wrote a rebuttal that led to a debate about editorial control with SEL’s founder Danny Sullivan.
Of course, automated bid management is not appropriate for everyone. It’s a pity that Nic didn’t provide a more balanced view; it would have made his opinions more credible.
Thankfully, we can always rely on RKG’s George Michie to provide a balanced and sensible discussion. George is one of the best writers when it comes to PPC and I read his blog avidly. The most recent post is the first in a series and I look forward to reading the rest when they are published.
My last point on this topic, for now, is that there is an unwritten assumption in Nic’s article that people can do bid management better than machines. People have their flaws, no matter how experienced and skilful they are. Let people use all their marketing nous and imagination to build great campaigns that sell compelling products and services. But let the algorithms decide the right bids. As ever, PPC management is not black and white; it’s about combining human and computer intelligence to find the optimal solution.