Let Them Come…Consultancies Welcome!

It is hard to ignore the steady drumbeat of warnings that management consultants are coming to challenge agencies. Management consulting firms are often seen as the enemies of agencies – new market entrants that need to be stopped. Many of them have already won, as Forbes illuminated in a recent article: “According to Ad Age, all the top 3, and 8 of the top-10 ad agencies are not those legacy names that might visit your home nightly with their TV commercials. Instead, they are consultancies like Deloitte, Accenture, KPMG and PwC.” Agencies seem to be responding in kind, building up their consulting expertise.

This trend is driven by many factors, with two of the key drivers being:

  • Brands are increasingly spending more on MadTech, and technology has always been a core capability of management consultants;
  • Digital transformation is now often driven by customer engagement points (MadTech), and agencies have a long history of driving and managing customer engagement points for brands.  

The old three-martini lunch may have passed, but the agencies’ “trust me” attitude often remained, at least until recently. The ANA’s report on agency transparency, the P&G bombshell at the IAB Leadership conference, and recent cries from some YouTube advertisers all speak to the increasing volume of calls for change. The press points fingers at ‘AdTech’ companies, the programmatic nature of buying, fake news, fraud…

I think it’s something different.

Given all the background noise about transparency, agencies, AdTech companies and others have a vested interest in the ‘media’ pricing model, which hides all the dirty little secrets: fees and recharges with agency trading desks, the hidden costs of SSPs and exchanges, and the ‘price included’ fees of data targeting. These (and many other) MadTech fees are structured to be embedded in the holy ‘media-based’ model.

This model is seriously broken. With many claiming the ‘AdTech tax’ is at least 45%—and some declaring it to be as high as 75%—everything is suspect. Brands are spending more on MadTech than ever before. They know they’re getting screwed; they’re just not sure how.
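
To make the arithmetic concrete, here is a minimal sketch of how per-hop fees compound across a chain of intermediaries. The hop names and take rates below are illustrative assumptions, not measured figures:

```typescript
// Illustrative only: hypothetical intermediaries and take rates showing how
// per-hop fees compound. None of these percentages are measured figures.
const hops = [
  { name: "agency trading desk", take: 0.10 },
  { name: "DSP", take: 0.15 },
  { name: "data / targeting", take: 0.15 },
  { name: "SSP / exchange", take: 0.15 },
  { name: "ad serving + verification", take: 0.05 },
];

let remaining = 1.0; // share of each media dollar still in play
for (const { name, take } of hops) {
  remaining *= 1 - take;
  console.log(`${name}: ${(remaining * 100).toFixed(1)}% of spend remains`);
}
console.log(`Publisher receives ~${(remaining * 100).toFixed(0)} cents of each dollar`);
// => ~53 cents: an effective "tax" of roughly 47% under these assumed rates,
// squarely in the range being claimed.
```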

In march the consultants, with decades of expertise in supply chain management and a depth of experience in getting technologies to work together that few agencies can challenge.

Large consulting firms have spent decades, if not generations, tearing down supply chains to remove waste and friction, reassembling them for higher efficiency. The typical MadTech supply chain to deliver an impression is primed for the consulting axe:

  1. Data and Targeting
  2. Ad Serving
  3. SSP/Exchange fee
  4. Dynamic Creative
  5. Fraud/Verification/Viewability

    …Et cetera.

Each of these technologies, many of which are invaluable, is bundled by agencies into the ‘price of the media’ and distributed behind closed doors. Management consultants know how to play this game. They are going to continue to gain market share, particularly against major agency holding companies, until the pricing model changes. And, by the way, it’s not that consulting firms come at a bargain, but they don’t hold the same vested interests that agencies do. They expose problems, are more transparent about their own pricing, and are ruthless in attacking the supply chain.

Management consultants will drive transparency in agency pricing models. However, while they are experts in supply chain management, they are not as good at recognizing how MadTech innovation can improve a brand’s performance, and are likely to pick only the largest tech suppliers as they strive for supply chain efficiency.

Let management consultants drive agencies toward embracing transparency. But technology companies, take note: maintaining an innovation-driven ecosystem, rather than seeing a culling of the herd with only the largest companies surviving, will require new pricing models. The onus is on MadTech to create them, and to lead the way.

 

You Can’t Spell Exchange without Change

HEADER BIDDING IS LEVELING THE PLAYING FIELD BETWEEN PUBLISHERS AND EXCHANGES

Header Bidding – the new black, taking the market by storm, the must-have, top of the trends.

So why is it such a threat?

Header bidding shifts the power dynamic away from the SSPs and exchanges, moving it back to the publishers. Publishers who manage their own header bidding have created what exchanges promised to create for the last five years: a highly competitive auction for a publisher’s inventory. Not only are publishers making more money with header bidding, they are gaining a bigger advantage: transparency of demand and control of their inventory.
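
A minimal sketch of the mechanics helps explain why. The names below are hypothetical (this is not the Prebid.js API): every demand partner is called in parallel from the page, and the best bid is passed along to compete in the ad server.

```typescript
interface Bid { bidder: string; cpm: number; adMarkup: string; }

async function runHeaderAuction(
  adapters: Array<() => Promise<Bid>>,
  timeoutMs: number
): Promise<Bid | undefined> {
  // Any bidder that hasn't answered by the deadline is simply dropped.
  const timeout = new Promise<undefined>(resolve =>
    setTimeout(() => resolve(undefined), timeoutMs)
  );
  // Every bidder sees the impression at the same time: no waterfall,
  // no picked-over inventory.
  const results = await Promise.all(
    adapters.map(call => Promise.race([call(), timeout]).catch(() => undefined))
  );
  const bids = results.filter((b): b is Bid => b !== undefined);
  // Highest CPM wins, and the publisher sees every bid (transparency of demand).
  return bids.sort((a, b) => b.cpm - a.cpm)[0];
}
```

In practice the winning CPM is passed to the ad server as a key-value, where it competes against guaranteed and direct-sold line items.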

Who is winning – and who is losing – thanks to header bidding? Why do I believe it ultimately spells the demise of many exchanges?

THE QUICK HISTORY:

  1. Digital advertising was born.
  2. Digital advertising was sold much like TV, print and radio, mostly on a guaranteed basis via phone calls, faxes and emails.
  3. Digital inventory exploded and “remnant” (unsold) was born.  
  4. Remnant was sold via networks, used for house ads, or went unsold.  
  5. Networks figured out how to create vertical networks and other aggregation methods, providing reach and contextual targeting to buyers.
  6. Networks started deploying audience targeting tools and were the first to really leverage data (with the exception of Google, who always knew how to leverage data).
  7. DMPs and other data companies emerged, creating huge pools of profiled cookies.
  8. SSPs, Exchanges, and DSPs emerged to create the ability to “real time target and bid” on remnant inventory.
  9. DSPs leveraged the audience data to drive performance and the size of the RTB market exploded.
  10. Publishers were pushed to “be transparent” with their inventory and add higher quality inventory to what was sold programmatically, increasing the eCPMs of exchange inventory, but not necessarily the overall eCPMs of publishers.
  11. Exchanges and DSPs started making serious money (each taking about 15% of the spend), creating the so-called “AdTech Tax.”
  12. Meanwhile, companies like Criteo realized that being in the header gave them first look for all inventory (not just unsold), driving up eCPMs for publishers while cherry-picking inventory.
  13. Tech-savvy publishers began to experiment with “header bidding,” often leveraging DFP’s dynamic allocation tools.
  14. Open source header bidding wrappers and adapters became available and the market exploded.
  15. Header bidding began driving up CPMs for publishers, allowing them to capture top dollar for high-value inventory.

 

Header bidding is exploding because publishers reason (often correctly) that they are getting screwed by the second-price auction and the intermediaries who seem to be rigging the system. With header bidding, publishers are able to bypass the waterfall of exchanges, increase their overall eCPMs, gain more control of their inventory, and use their exchange(s) (mostly AdX) only to sell what is truly now “remnant.”
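
A toy comparison, with made-up numbers, shows what the waterfall costs publishers:

```typescript
// Hypothetical bids from three exchanges for the same impression.
const exchangeBids: Array<[string, number]> = [
  ["adx", 2.4],       // called first in the waterfall
  ["exchangeB", 1.9], // called second
  ["exchangeC", 6.5], // called last -- rarely ever reached
];
const floor = 1.5;

// Waterfall: exchanges are called in order against a floor; the first to
// clear wins, so exchangeC's $6.50 never even gets a look.
const waterfallWinner = exchangeBids.find(([, cpm]) => cpm >= floor); // ["adx", 2.4]

// Header bidding: all bids arrive in parallel and the highest CPM wins.
const headerWinner = [...exchangeBids].sort(([, a], [, b]) => b - a)[0]; // ["exchangeC", 6.5]
```

Same demand, same impression; the only change is that everyone competes at once.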

Why are the exchanges doomed? First, they’re not all screwed. Some were losing under the “exchange daisy chain” market. They were the third, fourth, or fifth call on the chain and saw lousy inventory that had been picked over by AdX, AppNexus, Rubicon… Certain companies, in particular OpenX and Index Exchange, made big early bets on the header bidding model. All of a sudden, they were seeing 100% of a given publisher’s inventory, and their volumes and eCPMs increased. (Yes, they also incurred a “listening fee” for 100% of the inventory, but top line exploded.) AppNexus jumped on the bandwagon a little later with its open-source solution, Prebid.js (prebid.org). However, Rubicon was late to the party, as was PubMatic.

One need look no further than Rubicon’s recent stock-price collapse. The company has named header bidding as the primary culprit on its earnings calls for the last few quarters, but its most recent report was overwhelmed with bad news, resulting in the hiring of Michael Barrett, a consistently successful CEO (successful, that is, at selling the companies he walks into).

Most importantly, publishers, particularly the comScore 150, have figured out how this all works and have begun taking control of the process. Now everyone is talking about server-to-server integration for header bidding. This seems to overcome the real technology problems of header bidding by creating a single header bidding call and having someone else manage the bidding process. Usually this “someone else” is the exchange, which has the technology infrastructure needed to manage the extraordinary volume created by this system.
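
In sketch form (the endpoint and field names are hypothetical), the server-to-server variant reduces the browser’s job to a single call:

```typescript
// The browser makes one lightweight request; the fan-out to dozens of demand
// partners happens in the auction host's data center, not on the page.
async function requestServerSideBids(
  adUnit: string
): Promise<{ cpm: number; adMarkup: string }> {
  const res = await fetch("https://s2s.example-exchange.com/auction", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ adUnit, timeoutMs: 300 }),
  });
  // Note who controls this response: the auction host decides which bids
  // the publisher gets to see.
  return res.json();
}
```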

Server-to-server also creates a new problem related to cookie matching, but let’s assume this will get resolved over time. The biggest part of the challenge is exchanges’ unwillingness to cookie sync.  

Shenanigans, deception, and more of the same-old-same-old.  

While server-to-server integration clearly solves most of the problems associated with the speed and load times of header bidding, it leaves publishers exposed to different problems. One traditional complaint of publishers regarding exchanges was that the actual fees charged by such intermediaries were less than transparent. Publishers also suspected that, in some cases, exchanges were front-running their inventory, creating additional spread within auctions, and adding or subtracting data for their own benefit. Publishers felt cheated, but were not certain whether it was true.

Over the last few years, as more bid stream data has become available, these suspicions have sometimes been confirmed. Certainly, some deceptive practices have been identified.  As with most things, transparency is the greatest disinfectant.  

The problem with header bidding managed by the exchange is that it opens the ecosystem back up to suspicions of self-dealing. In one instance recounted by multiple publishers, some exchanges seemed to win a disproportionate amount of inventory when they were the “wrapper” compared to when they were only an “adapter” inside someone else’s wrapper (i.e., when they manage the whole auction they win too much, compared to when they are just another bidder).
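
A purely hypothetical sketch of how such a “last look” advantage could work makes the suspicion concrete (this is not a description of any exchange’s actual code):

```typescript
// A fair adapter must bid blind. A wrapper owner with a last look sees every
// adapter's bid first and only needs to beat the best of them by a penny.
function lastLookBid(adapterBids: number[], myTrueValue: number): number | null {
  const bestRival = Math.max(...adapterBids);
  if (myTrueValue <= bestRival) return null; // pass: can't win profitably
  return bestRival + 0.01; // win at the lowest possible price
}
// The wrapper owner never overpays and never loses a winnable impression,
// which would show up as exactly the disproportionate win rates described above.
```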

Simply put, server-to-server, at least as most people are discussing it, is basically a publisher selecting one exchange and giving it all of their inventory. Now the exchange is not just seeing the “unsold,” but 100% of the inventory (as with all header bidding). However, publishers are opening themselves up to many of the same transparency problems of yesterday’s exchanges.

Why is this a threat to the exchanges? Simple – the biggest publishers with the highest quality inventory don’t need to use an exchange. Companies like Purch have already tackled this themselves, and we are seeing other companies stepping into the breach, either supporting publishers deploying their own server-to-server solutions or creating new pricing models so they have no incentive to play with the auction.

I suspect the biggest and best publishers will migrate to a fully transparent server-to-server model, which will force the exchanges (who are used to working off a healthy percentage of the media spend) to become more transparent, change their pricing, and provide different levels of service. Publishers will jointly create a cookie-syncing solution, and eventually attract the largest DSPs to bid directly, and not through an exchange at all.

Exchanges will be forced to provide their services to the mid- and long-tail. While some can survive, many will fail. The winners will likely be the exchanges already focused on the long tail (think Sovrn) versus those competing for the comScore 150.

There are many exchanges in the market, some focused on a more vertical approach (think mobile or video) and others providing higher-quality service and trying to differentiate in other ways. However, the biggest exchanges were and are “display first,” and their business models will come under increasing pressure as header bidding technologies become more ubiquitous, and as the expertise to deploy one’s own solution extends further into the market.

These trends don’t fix many of the other problems with the programmatic marketplace (fraud, walled gardens, viewability, effectiveness…) but this shift is going to return some of the pricing power back to independent publishers. That can only be good for the industry.

The Multiple Benefits of Research-Based Content

Our recent study found that research-based content is the best way to reach and influence senior marketing and advertising professionals. But the benefits don’t stop there. They also include:

  • Creating a virtuous cycle. Research can form the linchpin of a cycle of content discovery, thought leadership, influence, customer retention and lead generation all centered around the research findings.
  • More content germination. The research provides material to enhance and reinforce other content creation and distribution efforts. It can be the seed around which articles and blog posts are created.
  • Social media posts become more enticing when they use aspects of original research and promise more upon click-through.
  • Email newsletters are made stronger when offering proprietary original research-based content.
  • Videos and podcasts get new source material.
  • Trade conferences are made stronger for vendors who can hand out original research.
  • Panel discussions and presentations are enhanced by material that springs from original research findings.
  • It inspires sharing, further spreading awareness of the tech vendor’s brand. “Many times [brands and agencies] use it for the stats and data in their own business plans to justify and provide rationale around the budgets against a tech solution and to sell in that strategy,” says Sean Finnegan, Managing Partner, co/Star.
  • It extends reach more than a trade show booth.
  • It can generate press coverage. 
  • It leads to discovery of the tech vendor who provided it. Many marketing and advertising technology decision-makers will search for relevant content when they’re considering a purchase and then find the material.

“Research-based content establishes the vendor who produced it as an intelligent and useful potential partner who knows the spheres in which it operates, who understands how to help customers achieve their objectives,” Finnegan says.

To see details of the findings and learn more, please download our paper here.

Big Data – The New Monopoly

mo-nop-o-ly (noun) – the exclusive possession or control of the supply or trade in a commodity or service.

Generally speaking, monopolies are considered bad for the economy and the consumer. Two of the most easily remembered monopolies are the original AT&T and Standard Oil. AT&T was given its monopolistic status by the government, whereas Standard Oil achieved it through business practices. In both cases, the government eventually broke up these monopolies, and innovation, lower prices, and competition thrived.

Data is the new and most leverageable monopoly commodity! Big Data companies are successfully creating barriers to entry that stifle competition and create a new type of moat around their success. Unlike traditional analog monopolies, the marginal cost of a new customer is effectively zero. No new plants to build, no distribution costs, no new staff to hire (yes, there is a real cost to building and running these data centers, but the cost of that technology continues to drop rapidly).

Today, it appears that some of the largest consumer data aggregators — Google, Amazon, Apple, Facebook, etc. — have emerged as near-monopolies in their ability to collect data and insights about consumers. Facebook, as one example, built its business on an advertising model, but its real value is data targeting. It has more and better data about most people (at least in the U.S.) than almost anyone else. The more users Facebook engages, the lower Facebook’s data acquisition costs and the higher their value. Amazon has a similarly unique data set as the largest online retailer in the U.S.; Google as the dominant search engine, video platform (YouTube), mobile OS (Android), and ad platform (DoubleClick); Apple, through iOS. The other two companies that might be thrown into the mix are AT&T Wireless (thanks to Apple, by the way) and Verizon Wireless (along with its acquisitions of AOL and Yahoo), which have the two largest databases of mobile IDs in the country.

However, our governmental institutions today are ill-equipped to respond to the challenges of global companies growing at exponential rates. Traditionally, the value of a company was built on a combination of intellectual property and physical assets (plants, trucks, machines, etc.). The physical assets were often developed and acquired based upon the underlying intellectual property (think patents). Today, it still takes around three years to get a patent, but companies can now leverage intellectual property in a few years or even months. As an example, Uber burst on the scene with its founding as UberCab in 2009 and reached a valuation of $3.5 billion in 2013. Had it waited for patent approval, it would have missed the market opportunity.

Does It Matter?

Each of the big data competitors has emerged with a unique opportunity to collect more and better data and then sell that data to advertisers. Does it matter?

The short answer is yes, it matters. These giant data aggregators are already dominating the ability to leverage data to target ads more effectively. But the insidious part is their ability to disaggregate a supplier from a buyer. Companies like Amazon or Facebook know (or infer) not just who you are but what you are like. They know not only where you are but can guess where you are going. They don’t just know what you are doing right now — they have a pretty good idea why you are doing it. And they make excellent guesses about what you will do next, guesses that grow more accurate as you go about your daily life while being carefully observed by the data giants. Amazon is already adjusting its pricing algorithm in real time and can charge one individual a different amount than another. Since the acquisition costs of many products are pre-negotiated, when it chooses to increase a price, the incremental margin remains with Amazon. Additionally, Amazon knows more about the value of a product than the manufacturer. Therefore, Amazon can negotiate the price of every item and drive down manufacturers’ margins.

They have users’ shopping data, now married to zip code (which tends to indicate income level) and to family members (if you set up “sub-accounts” on Prime). Amazon acquires age and demographic information, and if it wants to, Amazon can purchase your credit score and other available data to build a fuller view of a consumer. It can successfully charge you more than the next buyer for something it’s confident you want, like that “soon-to-be-obsoleted” piece of technology.
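
To make the mechanics of the argument concrete, here is a purely hypothetical sketch of per-shopper pricing; the signals and formula are invented for illustration and reflect nothing about Amazon’s actual systems:

```typescript
// Purely hypothetical per-shopper pricing: raise the price for shoppers who
// look affluent, eager, and insensitive to price. The wholesale cost is
// fixed, so any uplift is pure incremental margin for the retailer.
interface ShopperSignals {
  zipIncomeIndex: number;   // 0..1, inferred income level from zip code
  purchaseIntent: number;   // 0..1, e.g. repeated views of the same item
  priceSensitivity: number; // 0..1, inferred from past bargain-hunting
}

function personalizedPrice(basePrice: number, s: ShopperSignals): number {
  const uplift = 0.15 * s.zipIncomeIndex * s.purchaseIntent * (1 - s.priceSensitivity);
  return +(basePrice * (1 + uplift)).toFixed(2);
}

// Example: the same gadget at a $99.99 base price...
console.log(personalizedPrice(99.99, { zipIncomeIndex: 1.0, purchaseIntent: 0.9, priceSensitivity: 0.1 })); // ~$112.14
console.log(personalizedPrice(99.99, { zipIncomeIndex: 0.4, purchaseIntent: 0.3, priceSensitivity: 0.8 })); // ~$100.35
```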

In the narrow confines of online advertising and commerce, combining some of these data clearly makes marketing more efficient by improving targeting, and by identifying and eliminating the famed half of the marketing budget that is wasted. As HBR noted:

“Marketers have trained their big-data telescopes at a single point: predicting each customer’s next transaction. In pursuit of this prize marketers strive to paint an ever more detailed portrait of each consumer, memorizing her media preferences, scrutinizing her shopping habits, and cataloging her interests, aspirations and desires. The result is a detailed, high-resolution close-up of each customer that reveals her next move.”

We have reached an inflection point. Data are ubiquitous, and the marriage of data from multiple sources is commonplace. We are witnessing the transition from data improving efficiency, to data becoming a strategy, to data becoming a barrier to entry (a monopoly)!

Today, data is a strategy, and we need to start thinking about it as one. While scale is always a source of leverage for a supplier, with data the marginal acquisition cost is near zero and the benefit to data aggregators grows exponentially with each incremental data element. Data should adhere to the same competitive standards as other business strategies. Data monopolists’ ability to block competitors from entering the market is not markedly different from that of the oil monopolist Standard Oil or the telecommunications monopolist AT&T.

The real problem is that our institutions are still moving at the speed of analog while our economy is literally moving at the speed of light. The actions and behaviors of these companies are rational and so far seemingly legal, but left unchecked they will become egregious. Data corrupts, and absolute data corrupts absolutely.