DMP · Media Measurement

Data Science is the New Measurement

It’s a hoary old chestnut, but “understanding the customer journey” in a world of fragmented consumer attention and multiple devices is not just an AdExchanger meme. Attribution is a big problem, and one that marketers pay dearly to solve. Getting away from last-touch models is hard to begin with. Add in the fact that many of the largest marketers have no actual relationship with the customer (such as CPG, where the customer is actually a wholesaler or retailer), and it gets even harder. Big companies are selling big-money solutions to marketers for multi-touch attribution (MTA) and media-mix modeling (MMM), but some marketers feel light years away from a true understanding of what actually moves the sales needle.

As marketers take more direct ownership of their own customer relationships via data management platforms, “consumer data platforms” and the like, they are starting to obtain the missing pieces of the measurement puzzle: highly granular, user-level data. Now marketers are pulling in not just media exposure data but also offline data such as beacon pings, point-of-sale data (where they can get it), modeled purchase data from vendors like Datalogix and IRI, weather data and more to build a true picture. When that data can be associated with a person through a cross-device graph, it’s like going from a blunt 8-pack of Crayolas to a full set of Faber-Castells.

Piercing the Retail Veil

Think about the company that makes single-serve coffee machines. Some make their money on the coffee they sell, rather than the machine—but they have absolutely no idea what their consumers like to drink. Again, they sell coffee but don’t really have a complete picture of who buys it or why. Same problem for the beer or soda company, where the sale (and customer data relationship) resides with the retailer. The default is to go to panel-based solutions that sample a tiny percentage of consumers for insights, or to wait for complicated and expensive media-mix models to reveal what drove sales lift. But what if a company could partner with a retailer and a beacon company to understand how in-store visitation and even things like an offline visit to a store shelf compared with online media exposure? The marketer could use geofencing to understand where else consumers shopped, offer a mobile coupon so the user could authenticate upon redemption, get access to POS data from the retailer to confirm purchase and understand basket contents—and ultimately tie that data back to media exposure. That sounds a lot like closed-loop attribution to me.

Overcoming Walled Gardens

Why do specialty health sites charge so much for media? Like any other walled garden, they are taking advantage of a unique set of data—and their own data science capabilities—to better understand user intent. (There’s nothing wrong with that, by the way.) If I’m a maker of allergy medicine, the most common trigger for purchase is probably the onset of an allergy attack, but how am I supposed to know when someone is about to sneeze? It’s an incredibly tough problem, but one that the large health site can solve, largely thanks to people who have searched for “hay fever” online. Combine that with a 7-day weather forecast, pollen indices, and past search intent behavior, and you have a pretty good model for finding allergy sufferers. However, almost all of that data—plus past purchase data—can be ingested and modeled inside a marketer DMP, enabling the allergy medicine manufacturer to segment those users in a similar way—and then use an overlap analysis to find them on sites with $5 CPMs, rather than $20. That’s the power of user modeling. Why don’t sites like Facebook give marketers user-level media exposure data? The question answers itself.
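To make the allergy example concrete, here is a minimal sketch of the kind of propensity scoring described above—combining search intent, a pollen forecast, and past purchases into one segment. The signal names, weights, and threshold are all invented for illustration; a real DMP model would be trained, not hand-weighted.

```python
# Illustrative sketch: scoring users for an "allergy sufferer" segment
# by combining the signals mentioned in the article (search intent,
# pollen index, past purchase). Weights are invented, not from any model.

def allergy_propensity(user):
    score = 0.0
    if user.get("searched_hay_fever"):
        score += 0.5
    # Normalize a 0-10 pollen index into a 0-0.3 contribution
    score += 0.3 * min(user.get("pollen_index", 0) / 10.0, 1.0)
    if user.get("bought_allergy_meds_last_year"):
        score += 0.2
    return score

def build_segment(users, threshold=0.6):
    """Return the IDs of users whose score clears the threshold."""
    return [u["id"] for u in users if allergy_propensity(u) >= threshold]

users = [
    {"id": "u1", "searched_hay_fever": True, "pollen_index": 9,
     "bought_allergy_meds_last_year": True},
    {"id": "u2", "searched_hay_fever": False, "pollen_index": 2,
     "bought_allergy_meds_last_year": False},
]
print(build_segment(users))  # ['u1'] — only u1 clears the threshold
```

The marketer could then run an overlap analysis between this segment and cheaper sites’ audiences, rather than paying the walled garden’s premium.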

Understanding the Full Journey

Building journeys always falls down due to one missing piece of the puzzle or another. Panel-based models continually overemphasize the power of print and linear television. CRM-based models always look at the journey from the e-mail perspective, and value declared user data above all else. Digital journeys can get pretty granular with media exposure data, but miss big pieces of data from social networks, website interactions, and things that are hard to measure (like location data from beacon exposure). What we are starting to see today is that, by ingesting highly differentiated signals, marketers can combine granular attribute data to complete the picture. Think about the data a marketer can ingest: all addressable media exposure (ad logs), all mobile app data (SDKs), location data (beacon or third party), modeled sales data (IRI or DLX), actual sale data (POS systems), website visitation data (JavaScript on the site), media performance data (through click and impression trackers), real people data through a CRM (that’s been hashed and anonymized), survey data that has been mapped to a user (pixel-enabled online survey), and even addressable TV exposure (think comScore’s Rentrak data set). Wow.

Why is data science “the new measurement”? Because, when a marketer has all of that data at their fingertips, something close to true attribution becomes possible. Now that marketers have the right tools to draw with, the winners are going to be the ones with the most artists (data scientists).

It’s a really interesting space to watch. More and more data is becoming available to marketers, who are increasingly owning the data and technology to manage it, and the models are growing more powerful and accurate with every byte of data that enters their systems.

It’s a great time to be a data-driven marketer!

[This post originally appeared in AdExchanger on 8/12/16]

Advertising Agencies · DMP

New Whitepaper: Agencies and DMP!

We’ve just published our latest best practice guide, entitled ‘The Role of the Agency in Data Management.’

The report looks at the challenges and opportunities for agencies that want to become trusted stewards of their clients’ data.

I sat down with the author, Chris O’Hara, to find out more.

Q. It seems like the industry press is continually heralding the decline of media agencies, but they seem to be very much alive. What’s your take on the current landscape?

For a very long time, agencies have been dependent upon using low-cost labor for media planning and other low-value operational tasks.

While there are many highly-skilled digital media practitioners – strategists and the like – agencies still work against “cost-plus” models that don’t necessarily map to the new realities in omnichannel marketing.

Over the last several years as marketers have come to license technology – data management platforms (DMP) in particular – agencies have lost some ground to the managed services arms of ad tech companies, systems integrators, and management consultancies.

Q. How do agencies compete?

Agencies aren’t giving up the fight to win more technical and strategic work.

Over the last several years, we have seen many smaller, data-led agencies pop up to support challenging work – and we have also seen holding companies up-level staff and build practice groups to accommodate marketers that are licensing DMP technology and starting to take programmatic buying “in-house.”

It’s a trend that is only accelerating as more and more marketer clients are hiring Chief Data Officers and fusing the media, analytics, and IT departments into “centers of excellence” and the like.

Not only are agencies starting to build consultative practices, but it looks like traditional consultancies are starting to build out agency-like services as well.

Not long ago you wouldn’t have thought of names like Accenture, McKinsey, Infinitive, and Boston Consulting Group in connection with digital media, but they are working closely with a lot of Fortune 500 marketers on things like DMP and DSP (demand-side platform) evaluations, programmatic strategy, and even creative work.

We are also seeing CRM-type agencies like Merkle and Epsilon acquire technologies and partner with big cloud companies as they start to work with more of a marketer’s first-party data.

As services businesses, they would love to take share away from traditional agencies.

Q. Who is winning?

I think it’s early days in the battle for supremacy in data-driven marketing, but I think agencies that are nimble and willing to take some risk upfront are well positioned to be successful.

They are the closest to the media budgets of marketers, and those with transparent business models are strongly trusted partners when it comes to bringing new products to market.

Also, as creative starts to touch data more, this gives them a huge advantage.

You can be as efficient as possible in terms of reaching audiences through technology, but at the end of the day, creative is what drives brand building and ultimately sales.

Q. Why should agencies embrace DMPs? What is in it for them? It seems like yet another platform to operate, and agencies are already managing DSPs, search, direct buys, and things like creative optimization platforms.

Ultimately, agencies must align with the marketer’s strategy, and DMPs are starting to become the single source of “people data” that touch all sorts of execution channels, from email to social.

That being said, DMP implementations can be really tough if an agency isn’t scoped (or paid) to do the additional work that the DMP requires.

Think about it: A marketer licenses a DMP and plops a pretty complicated piece of software on an agency team’s desk and says, “get started!”

That can be a recipe for disaster. Agencies need to be involved in scoping the personnel and work they will be required to do to support new technologies, and marketers are better off involving agencies early on in the process.

Q. So, what do agencies do with DMP technology? How can they succeed?

As you’ll read in the new guide, there are a variety of amazing use cases that come out of the box that agencies can use to immediately make an impact.

Because the DMP can control for the delivery of messages against specific people across all channels, a really low-hanging fruit is frequency management.

Doing it well can eliminate anywhere from 10% to 40% of wasteful spending on media that reaches consumers too many times.
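The waste math behind that claim is simple once a DMP can count impressions per person rather than per cookie. Here is a rough sketch, with invented impression counts, of estimating the share of impressions served beyond a chosen frequency cap:

```python
# Rough sketch of the frequency-waste calculation: given per-person
# monthly impression counts (which cross-device identity makes possible)
# and an effective-frequency cap, estimate the wasted share of delivery.
# The counts below are hypothetical.

def wasted_share(impressions_per_person, cap):
    total = sum(impressions_per_person.values())
    # Every impression beyond the cap, per person, counts as waste
    wasted = sum(max(n - cap, 0) for n in impressions_per_person.values())
    return wasted / total if total else 0.0

counts = {"u1": 12, "u2": 8, "u3": 20, "u4": 10}  # monthly impressions
print(round(wasted_share(counts, cap=10), 2))  # 0.24 — about 24% wasted
```

Recovered budget can then be redeployed toward incremental reach instead of the 11th impression to the same person.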

Doing analytics around customer journeys is another use case – and one that attribution companies get paid handsomely for.

With this newly discovered data at their fingertips, agencies can start proving value quickly, and build entire practice groups around media efficiency, analytics, data science – even leverage DMP tech to build specialized trading desks. There’s a lot to take advantage of.

Q. You interviewed a lot of senior people in the agency and marketer space. Are they optimistic about the future?

Definitely. It’s sort of a biased sample, since I interviewed a lot of practitioners that do data management on a daily basis.

But I think ultimately everyone sees the need to get a lot better at digital marketing and views technology as the way out of what I consider to be the early and dark ages of addressable marketing.

The pace of change is very rapid, and I think we are seeing that people who really lean into the big problems of the moment like cross-device identity, location-based attribution, and advanced analytics are future-proofing themselves.

DMP

DMPs Go Way Beyond Segmentation

Any AdExchanger reader probably knows more about data management technology than the average Joe, but many probably associate data management platforms (DMPs) with creating audience segments for programmatic media.

While segmentation, audience analytics, lookalike modeling and attribution are currently the primary use cases for DMP tech, there is so much more that can be done with access to all that user data in one place. These platforms sitting at the center of a marketer’s operational stack can make an impact far beyond paid media.

As data platforms mature, both publishers and marketers are starting to think beyond devices and browsers, and putting people in the center of what they do. Increasingly, this means focusing on giving the people what they want. In some cases that means no ads at all, while in others it’s the option to value certain audiences over others and serve them an ad first or deliver the right content – not just ads – based on their preferences.

Beyond personalization, there are DMP plays to be made in the areas of ad blocking and header bidding.

Ad Blocking

DMPs see a lot of browsers and devices on a monthly basis and strive to aggregate those disparate identities into a single user or universal consumer ID. They are also intimately involved in the serving of ads by either ingesting ad logs, deploying pixels or having a server-to-server connection with popular ad servers. This is great for influencing the serving of online ads across channels, but maybe it can help with one of the web’s most perplexing problems: the nonserving of ads.

With reports of consumers using applications to block as many as 10% of ads, wouldn’t it be great to know exactly who is blocking those ads? For publishers, that might mean identifying those users and suppressing them from targeting lists so they can help marketers get a better understanding of how much reach they have in certain audience segments. Once the “blockers” are segmented, publishers can get a fine-grained understanding of their composition, giving them insights about what audiences are more receptive to having ad-free or paid content experiences.
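The suppression step described above is mechanically simple once blockers are identified. A toy sketch, with hypothetical user IDs, of removing known blockers from a targeting segment so reach estimates reflect only reachable users:

```python
# Sketch: once "blocker" IDs have been segmented in the DMP, suppress
# them from a targeting list and report effective reach. IDs are invented.

blockers = {"u2", "u5"}                    # users observed blocking ads
segment = ["u1", "u2", "u3", "u4", "u5"]   # marketer's target segment

reachable = [u for u in segment if u not in blockers]
effective_reach = len(reachable) / len(segment)

print(reachable)         # ['u1', 'u3', 'u4']
print(effective_reach)   # 0.6 — 60% of the segment can actually see ads
```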

A lot of these issues are being solved today with specialized scripts that either aren’t very well coded, leading to page latency, or are scripted in-house, adding to complexity. Scripts trigger the typical “see ads or pay” notifications, which publishers have seen become more effective over time. The DMP, already installed and residing in the header across the enterprise, can provision this small feature alongside the larger application.

Header Bidding

Speaking of DMP architecture being in the header, I often wonder why publishers who have a DMP installed insist on deploying a different header-bidding solution to manage direct deals. Data management tech essentially operates by placing a control tag within the header of a publisher website, enabling a framework that gives direct and primary access to users entering the page. Through an integration with the ad server, the DMP can easily and quickly decide whether or not to deliver high-value “first looks” at inventory.
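A first-look decision of this kind reduces to a routing rule: if the incoming user belongs to a segment with a direct deal attached, serve that deal before the impression reaches the open exchange. The segment names, deal IDs, and floors below are invented for illustration:

```python
# Toy sketch of header "first look" decisioning. A real implementation
# would live in the page header and talk to the ad server; here we just
# show the routing logic. All deal data is hypothetical.

DIRECT_DEALS = {
    "auto_intenders": {"deal_id": "D-123", "floor_cpm": 12.0},
    "luxury_travel": {"deal_id": "D-456", "floor_cpm": 9.0},
}

def route_impression(user_segments):
    """Give direct deals first look; otherwise fall through to the exchange."""
    for seg in user_segments:
        if seg in DIRECT_DEALS:
            return ("direct", DIRECT_DEALS[seg]["deal_id"])
    return ("open_exchange", None)

print(route_impression(["sports_fans", "auto_intenders"]))  # direct deal wins
print(route_impression(["sports_fans"]))                    # falls to exchange
```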

Today, the typical large publisher has a number of supply-side platforms (SSPs) set up to handle yield management, along with possibly several pieces of infrastructure to manage that critical programmatic direct sale. Publishers can reduce complexity and latency by simply using the pipes that have already been deployed for the very reason header bidding exists: understanding and managing the serving of premium ads to the right audiences.

Maybe publishers should be thinking about header bidding in a new way. Header-bidding tags are just another tag on the page. Those with tag management-enabled DMPs could have their existing architecture handle that – a salient point made recently by STAQ’s James Curran.

Curran also noted that the DMP has access, through ad log ingestion, to how much dough publishers get from every drop in the waterfall, including from private marketplace, programmatic direct header and the open exchanges. Many global publishers are looking at the DMP inside their stack as a hub that can see the pricing landscape at an audience level and power ad servers and SSPs with the type of intelligent decisioning that supercharges yield management.

Personalization

In ad technology, we talk a lot about the various partners enabling “paid, owned and earned” exposures to consumers, but we usually think of DMPs as essential only for the paid part.

But the composition of a web page, for example, is filled with dozens of little boxes, each capable of serving a display ad, video ad, social widget or content. Just as the DMP can influence the serving of ads into those little boxes, it can also influence the type of content that appears to each user. The big automaker might want to show a muscle car to that NASCAR Dad when he hits the page or a shiny new SUV to the Suburban Mom who shuttles the kids around all day.

Or, a marketer with a lot of its own content (“brands are publishers,” right?) may want to recommend its own articles or videos based on the browsing behavior of an anonymous user. The big global publisher may want to show a subscriber of one magazine a series of interesting articles from its other publications, possibly outperforming the CPA deals it has with third parties for subscription marketing.

This one-to-one personalization is possible because DMPs can capture not only the obvious cookie data but also the other 60% of user interactions and data, including mobile apps, mobile web, beacon data and even modeled propensity data from a marketer or publisher’s data warehouse.

Wouldn’t it be cool to serve an ad for a red car when the user has a statistically significant overlap with 10,000 others who have purchased red cars in the past year? That’s how to apply data science to drive real content personalization, rather than relying on typical retargeting.
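One simple way to test whether such an overlap is meaningful is a lift calculation: compare the observed overlap between a user’s segment and past red-car buyers to the overlap you would expect by chance. The population figures below are hypothetical:

```python
# Sketch of a simple overlap "lift" test: is the segment over-represented
# among past red-car buyers, relative to independence? A production
# system would add a significance test; numbers here are invented.

def overlap_lift(seg_size, buyers_size, overlap, population):
    # Expected overlap if segment membership and buying were independent
    expected = seg_size * buyers_size / population
    return overlap / expected

lift = overlap_lift(seg_size=50_000, buyers_size=10_000,
                    overlap=2_000, population=1_000_000)
print(lift)  # 4.0 — segment members are 4x as likely to be red-car buyers
```

A lift well above 1 is the signal that the red-car creative is worth serving to that segment.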

These are just some of the possibilities available when you start to think of the DMP as not just a central part of the ad technology “stack” but the brains behind everything that can be done with audiences. This critical infrastructure is where audience data gets ingested in real time, deployed to the right channels at speed and turned into insights about people. In a short period of time, the term “DMP” will likely be shorthand for just the simple audience targeting use case inside of the data-driven marketing hub.

It’s a great time to be a data-driven marketer.

Follow Chris O’Hara (@chrisohara) and AdExchanger (@adexchanger) on Twitter. 

Big Data · Data Management Platform · DMP · Marketing

Big Data (for Marketing) is Real!

We’ve been hearing about big data driving marketing for a long time, and to be honest, most of it is purely aspirational.

Using third-party data to target an ad in real time does deploy some back-end big-data architecture for sure. But the real promise of data-driven marketing has always been that computers, which can crunch more data than people and do it in real time, could find the golden needle of insight in the proverbial haystack of information.

This long-heralded capability is finally moving beyond the early adopters and starting to “cross the chasm” into early majority use among major global marketers and publishers. 

Leveraging Machine Learning For Segmentation 

Now that huge global marketers are embracing data management technology, they are finally able to start activating their carefully built offline audience personas in today’s multichannel world.

Big marketers were always good at segmentation. All kinds of consumer-facing companies already segment their customers along behavioral and psychographic dimensions. Big Beer Company knows how different a loyal, light-beer-drinking “fun lover” is from a trendsetting “craft lover” who likes new music and tries new foods frequently. The difference is that now they can find those people online, across all of their devices.

The magic of data management, however, is not just onboarding offline identities to the addressable media space. Think about how those segments were created. Basically, an army of consultants and marketers took loads of panel-based market data and gut instincts and divided their audience into a few dozen broad segments.

There’s nothing wrong with that. Marketers were working with the most, and best, data available. Those concepts around segmentation were taken to market, where loads of media dollars were applied to find those audiences. Performance data was collected and segments refined over time, based on the results.

In the linear world, those segments are applied to demographics, where loose approximations are made based on television and radio audiences. It’s crude, but the awesome reach power of broadcast media and friendly CPMs somewhat obviate the need for precision.

In digital, those segments find closer approximation with third-party data, similar to Nielsen Prizm segments and the like. These approximations are sharper, but in the online world, precision means more data expense and less reach, so the habit has been to translate offline segments into broader demographic buckets, such as “men who like sports.”

What if, instead of guessing which online attributes approximated the ideal audience and creating segments from a little bit of data and lot of gut instinct, marketers could look at all of the data at once to see what the important attributes were?

No human being can take the entirety of a website’s audience, which probably shares more than 100,000 granular data attributes, and decide what really matters. Does gender matter for the “Mom site”? Obviously. Having kids? Certainly. Those attributes are evident, and they’re probably shared widely across a great portion of the audience of Popular Mom Site.

But what really defines the special “momness” of the site that only an algorithm can see? Maybe there are key clusters of attributes among the most loyal readers that are the things really driving the engagement. Until you deploy a machine to analyze the entirety of the data and find out which specific attributes cluster together, you really can’t claim a full understanding of your audience.

It’s all about correlations. Of course, it’s pretty easy to find a correlation between only two distinct attributes, such as age and income. But think about having to do a multivariable correlation on hundreds of different attributes. Humans can’t do it. It takes a machine-learning algorithm to parse the data and find the unique clusters that form among a huge audience.

Welcome to machine-discovered segmentation.

Machines can quickly look across the entirety of a specific audience and figure out how many people share the same attributes. Any time folks cluster together around more than five or six specific data attributes, you arguably have struck gold.

Say I’m a carmaker that learned that some of my sedan buyers were men who love NASCAR. But I also discovered that those NASCAR dads loved fitness and gaming, and I found a cluster of single guys who just graduated college and work in finance. Now, instead of guessing who is buying my car, I can let an algorithm create segments from the top 20 clusters, and I can start finding people predisposed to buy right away.
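The carmaker example can be sketched in a few lines. Here, instead of a full clustering algorithm (k-means over thousands of attributes, as the article implies), we simply count which attribute combinations recur most often among known buyers—the attribute sets and data are invented for illustration:

```python
from collections import Counter

# Minimal illustration of "machine-discovered segmentation": surface the
# attribute combinations that recur most often among known buyers.
# A production system would run real clustering over far more attributes;
# the profiles below are invented.

buyers = [
    {"nascar", "fitness", "gaming"},
    {"nascar", "fitness", "gaming"},
    {"finance", "single", "recent_grad"},
    {"nascar", "fitness", "gaming"},
    {"finance", "single", "recent_grad"},
]

def top_clusters(profiles, k=2):
    """Return the k most frequent attribute combinations as candidate segments."""
    counts = Counter(frozenset(p) for p in profiles)
    return [set(attrs) for attrs, _ in counts.most_common(k)]

for cluster in top_clusters(buyers):
    print(sorted(cluster))
```

Each discovered combination ("NASCAR dads who game," "single finance grads") becomes a seed segment that lookalike modeling can then expand.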

This trend is just starting to happen in both publishing and marketing, and it has been made available thanks to the wider adoption of real big-data technologies, such as Hadoop, MapReduce and Spark.

This also opens up a larger conversation about data. If I can look at all of my data for segmentation, is there really anything off the table?

Using New Kinds Of Data To Drive Addressable Marketing 

That’s an interesting question. Take the company that’s manufacturing coffee machines for home use. Its loyal customer base buys a machine every five years or so and brews many pods every day.

The problem is that the manufacturer has no clue what the consumer is doing with the machine unless that machine is data-enabled. If a small chip enabled it to connect to the Internet and share data about what was brewed and when, the manufacturer would know everything its customers do with the machine.

Would it be helpful to know that a customer drank Folgers in the morning, Starbucks in the afternoon and Twinings Tea at night? I might want to send the family that brews 200 pods of coffee every month a brand-new machine after a few years for free and offer the lighter-category customers a discount on a new machine.

Moreover, now I can tell Folgers exactly who is brewing their coffee, who drinks how much and how often. I’m no longer blind to customers who buy pods at the supermarket – I actually have hugely valuable insights to share with manufacturers whose products create an ecosystem around my company. That’s possible with real big-data technology that collects and stores highly granular device data.

Marketers are embracing big-data technology, both for segmentation and to go beyond the cookie by using real-world data from the Internet of Things to build audiences.

It’s creating somewhat of a “cluster” for companies that are stuck in 2015.

DMP · Platforms · Real Time Bidding (RTB)

Five controversial predictions for programmatic advertising in 2016

 

This picture is as bad as my prediction that beacons would provide marketers with scaled, closed-loop attribution in 2016. Didn’t happen. Not even close.  

Programmatic advertising continued to creep into the generalist marketer’s consciousness in 2015.

 

If you’re interested, we recently wrote up a handy digest of some of 2015’s programmatic trends.

But enough of looking backwards, let’s look in the crystal ball and ask ‘What’s in store for 2016?’

Yet again, I’ve recruited two experts to help. Chris O’Hara, VP Strategic Accounts at Krux Digital (and Econsultancy’s programmatic guru – see his articles and research here), and James Bourner, Head of Display at Jellyfish.

Here’s what they had to say…

Real-world attribution may become… well, a reality

Chris O’Hara, VP Strategic Accounts at Krux Digital

One of the biggest gaps with digital media, especially programmatic, is attribution. We still seem to have the Wanamaker problem, where “50% of my marketing works, I just don’t know which 50%.”

Attitudinal “brand lift” studies and latent post-campaign sales attribution modeling have been the de facto standard for the last 15 years, but marketers are increasingly insisting on real “closed loop” proof, e.g. “Did my Facebook ad move any items off the shelf?”

We are living in a world where technology is starting to shed some light on actual in-store purchases, such that we are going to be able to get ecommerce-like attribution for Corn Flakes soon.

In one real-world example, a CPG company has partnered with 7-Eleven and placed beacon technology in the store.

Consumers can receive a “get 20% off” offer on their mobile device, via notification, when they approach the store; the beacon can verify whether or not they arrive at the relevant shelf or display, and an integration with the point-of-sale (POS) system can tell (immediately) whether the purchase was made.

These marketing fantasies are becoming more real every day.


Cross-device targeting is important, but so is improving mobile inventory

James Bourner, Head of Display at Jellyfish

2016 will be the year of mobile: Just kidding!

Although on a more serious note, 2016 will be the year of more measured and more integrated mobile activity – we have only really just started to get to grips with cross-device targeting and tracking on a macro level.

While the big companies who are making a play for control of the ad tech industry will put a lot of emphasis on cross-device targeting and tracking in their battle plans, I think there will be a lot of improvements to the quality of inventory, especially in apps.

A lot of mobile supply is from developers, not traditional publishers, which has led to quality issues.

However, as we are now becoming very discerning in what we buy in mobile, hopefully developers will respond to the data and not be tempted to place banners quite so close to where accidental clicks may occur!

Google has been trying to prevent accidental clicks since 2012.


Frequency management will reduce waste and improve UX

Chris O’Hara, VP Strategic Accounts at Krux Digital

Before marketers could effectively map users to all of their various devices (cross-device identity management) and also match users across various execution platforms (hosting a “match table” that assures user #123 in my DMP is the same guy as user #456 in DataXu, as an example), they were helpless to control frequency to an individual.

Recent studies have revealed that, when marketers are only frequency capping at the device or cookie level, they are serving as many as 100+ ads to individual users every month, and sometimes much, much more.

What if the user’s ideal point of effective frequency is only 10 impressions on a monthly basis? As you can see, there are tremendous opportunities to reduce waste and gain efficiency in communication.

This means big money for marketers, who can finally start to control their messaging – putting recovered dollars back into finding more reach, and starting to influence their bidding strategies to get users into their “sweet spot” of frequency, where conversions happen.

It’s bad news for publishers, who have benefitted from this “frequency blindness” inadvertently. Now, marketers understand when to shut off the spigot.


Increased creativity will harness new forms of inventory

James Bourner, Head of Display at Jellyfish

From the buy side we will be looking forward to more video-on-demand inventory and new areas of supply opening up, especially for the UK market, which is hugely exciting.

Closely linked to this will be far more involvement from the creative guys.

There have been rumblings in the programmatic community for some time that we do not exploit creativity enough and we need to encourage our creative counterparts to the possibilities of programmatic, whether that be simply more permutations of ads to complement targeting or a more subtle but fundamental shift in some of the tools used to build creative.

Additionally, 2016 will see more large and impactful formats, skins and take-over placements served programmatically. This is obviously excellent for media planners and buyers as well as creative teams.

On the subject of placements there will be a proliferation of in-feed display (or native-type placements) becoming available programmatically. 2016 will also see more connected TV and digital radio exchanges being added into the programmatic supply line.

Programmatic out of home has been on the horizon for a while but I would predict connected TV will be the faster growing element of these.

An in-stream Guardian ad format. James expects more in-feed display to be available programmatically.


We should probably let the machines decide

Chris O’Hara, VP Strategic Accounts at Krux Digital

The adoption of advanced data technology is starting to change the way media is actually planned and bought. In the past, planners would use their online segmentation to make guesses about what online audience segments to target, and test-and-learn their way to gain more precision.

Marketers basically had to guess the data attributes that comprised the ideal converter. Soon, algorithms will start doing the heavy lifting.

What if, instead of guessing at the type of person who buys something, you could start with the exact composition of that buyer? Today’s machine learning algorithms are starting at the end point in order to give marketers a huge edge in execution.

In other words, now we can look at a small group of 1,000 people who have purchased something, and understand the commonalities or clusters of data attributes they all have in common.

Maybe all buyers of a certain car share 20 distinct data attributes. Marketers can have segments automatically generated from that data, and expand it from there.
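Starting from the buyers and working backward, as described above, can be as simple as counting which attributes most purchasers share and using those as seed criteria for an auto-generated segment. The attributes and threshold below are invented for illustration:

```python
from collections import Counter

# Sketch of "start from the converters": given profiles of known buyers,
# surface the attributes shared by most of them as the seed definition
# of an automatically generated segment. All data here is hypothetical.

def common_attributes(buyer_profiles, min_share=0.7):
    """Return attributes held by at least min_share of buyer profiles."""
    counts = Counter(a for profile in buyer_profiles for a in set(profile))
    n = len(buyer_profiles)
    return sorted(a for a, c in counts.items() if c / n >= min_share)

buyers = [
    {"suv_owner", "nascar", "fitness"},
    {"suv_owner", "nascar", "gaming"},
    {"suv_owner", "nascar", "fitness"},
    {"suv_owner", "camping"},
]
print(common_attributes(buyers))  # ['nascar', 'suv_owner']
```

Lookalike modeling would then expand from these shared attributes to find prospects who match the converter profile but haven’t bought yet.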

This brand new approach to segmentation is a small harbinger of things to come, as algorithms start to take over the processes and assumptions of the past 15 years and truly transform marketing.


Econsultancy runs Creative Programmatic, a one-day conference in London, as well as providing training on programmatic advertising.

DMP · Media Buying · Media Measurement · Media Planning

Trends in Programmatic Buying

 

The digital marketing future we were promised years ago looks pretty lame in retrospect. This is an image of a trading desk supervisor at Razorfish, circa 2013.

2015 has been one of the most exciting years in digitally driven marketing to date. Although publishers have been leading the way in terms of building their programmatic “stacks” to enable more efficient selling of digital media, marketers are now catching up. Wide adoption of data management platforms has given rise to a shift in buying behaviors, where data-driven tactics for achieving effectiveness and efficiency rule. Here are some interesting trends that have arisen.

 

Purchase-Based Targeting

Remember when finding the “household CEO” was as easy as picking a demographic target? Marketers are still using demographic targeting (women, aged 25-44) to some extent, but we have seen them shift rapidly to behavioral and contextually based segments (“Active Moms”), and now to Purchase-Based Targeting (PBT). This trend has existed in categories like Automotive and Travel, but is now being seen in CPG. Today, marketers are using small segments of people who have actually purchased the product they are marketing (“Special K Moms”) and using lookalike modeling to drive scale and find more of them. These purchase-defined segments are a more precise starting point in digital segmentation—and can be augmented by behavioral and contextual data attributes to achieve scale. The big winners here are the folks who actually have the in-store purchase information, such as Oracle’s Datalogix, 84.51, Nielsen Catalina Solutions, INMAR, and News Corp’s News America Marketing.

Programmatic Direct

For years we have been talking about the disintermediation in the space between advertisers and publishers (essentially, the entire Lumascape map of technology vendors), and how we can find scalable, direct connections between them. It doesn’t make sense that a marketer has to go through an agency, a trading desk, a DSP, an exchange, an SSP, and other assorted technologies to get to space on a publisher website. Marketers have seen $10 CPMs turn into just $2 of working media. Early efforts with “private marketplaces” inside of exchanges created more automation, but ultimately kept much of the cost structure. A nascent, but quickly emerging, movement of “automated guaranteed” procurement is finally starting to take hold. Advertisers can create audiences inside their DMP and push them directly to a publisher’s ad server where they have user matching. This is especially effective where marketers seek an “always on” insertion order with a favored, premium publisher. This trend will grow in line with marketers’ adoption of people-based data technology.

Global Frequency Management

The rise in DMPs has also led to another fast-growing trend: global frequency management. Before marketers could effectively map users to all of their various devices (cross-device identity management, or CDIM) and also match users across various execution platforms (hosting a “match table” that assures user #123 in my DMP is the same guy as user #456 in DataXu, as an example), they were helpless to control frequency to an individual. Recent studies have revealed that, when marketers cannot frequency cap at the individual level, they serve as many as 100+ ads to individual users every month, and sometimes much, much more. What if the user’s ideal point of effective frequency is only 10 impressions on a monthly basis? As you can see, there are tremendous opportunities to reduce waste and gain efficiency in communication. This means big money for marketers, who can finally start to control their messaging—putting recovered dollars back into finding more reach, and starting to influence their bidding strategies to get users into their “sweet spot” of frequency, where conversions happen. It’s bad news for publishers, who have inadvertently benefitted from this “frequency blindness.” Now, marketers understand when to shut off the spigot.
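The core of person-level capping is collapsing device IDs into people before counting impressions. Here is a minimal sketch, assuming a hypothetical device-to-person match table and an invented impression log; real DMPs do this at scale and in real time, but the logic reduces to this:

```python
from collections import Counter

# Hypothetical cross-device graph: device ID -> person ID.
DEVICE_TO_PERSON = {"cookie_123": "p1", "idfa_456": "p1", "cookie_789": "p2"}

def person_frequency(impression_log):
    """Collapse a device-level impression log into person-level counts."""
    freq = Counter()
    for device_id in impression_log:
        person = DEVICE_TO_PERSON.get(device_id)
        if person:
            freq[person] += 1
    return freq

def should_serve(person, freq, cap=10):
    """Suppress the ad once a person hits the monthly cap,
    regardless of which device the bid request comes from."""
    return freq[person] < cap

# Without the graph, each device looks under-exposed; with it,
# person p1 has already seen 11 impressions across two devices.
log = ["cookie_123"] * 6 + ["idfa_456"] * 5 + ["cookie_789"] * 2
freq = person_frequency(log)
```

Without the device-to-person mapping, each cookie in the example sits comfortably under a 10-impression cap; with it, the marketer can see that person "p1" is already over-exposed.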

Taking it in-House

More and more, we are seeing big marketers decide to “take programmatic in house.” That means hiring former agency and vendor traders, licensing their own technologies, and (most importantly) owning their own data. This trend isn’t as explosive as one might think, based on the industry trades—but it is real and happening steadily. What brought along this shift in sentiment? Certainly concerns about transparency; there is still a great deal of inventory arbitrage going on with popular trading desks. Also, the notion of control. Marketers want and deserve more of a direct connection to one of their biggest marketing costs, and now the technology is readily available. Even the oldest-school marketer can license their way into a technology stack any agency would be proud of. The only thing really holding back the trend is the difficulty of staffing such an effort. Programmatic experts are expensive, and that’s just the traders! When the inevitable call for data-science-driven analytics comes in, things can really start to get pricey. But this trend will continue for the next several years nonetheless.

Closing the Loop with Data

One of the biggest gaps with digital media, especially programmatic, is attribution. We still seem to have the Wanamaker problem, where “50% of my marketing works, I just don’t know which 50%.” Attitudinal “brand lift” studies and latent post-campaign sales attribution modeling have been the de facto standard for the last 15 years, but marketers are increasingly insisting on real “closed loop” proof. “Did my Facebook ad move any items off the shelf?” We are living in a world where technology is starting to shed some light on actual in-store purchases, such that we are going to be able to get eCommerce-like attribution for corn flakes soon. In one real-world example, a CPG company has partnered with 7-Eleven and placed beacon technology in the store. Consumers can receive a “get 20% off” offer on their mobile device, via notification, when they approach the store; the beacon can verify whether or not they arrive at the relevant shelf or display; and an integration with the point-of-sale (POS) system can tell (immediately) whether the purchase was made. These marketing fantasies are becoming more real every day.

Letting the Machines Decide

What’s next? The adoption of advanced data technology is starting to change the way media is actually planned and bought. In the past, planners would use their online segmentation to make guesses about what online audience segments to target, and test-and-learn their way to gain more precision. Marketers basically had to guess the data attributes that comprised the ideal converter. Soon, algorithms will start doing the heavy lifting. What if, instead of guessing at the type of person who buys something, you could start with the exact composition of that buyer? Today’s machine learning algorithms are starting at the end point in order to give marketers a huge edge in execution. In other words, now we can look at a small group of 1,000 people who have purchased something, and understand the commonalities or clusters of data attributes they all have in common. Maybe all buyers of a certain car share 20 distinct data attributes. Marketers can have segments automatically generated from that data, and expand it from there. This brand new approach to segmentation is a small harbinger of things to come, as algorithms start to take over the processes and assumptions of the past 15 years and truly transform marketing.

It’s a great time to be a data-driven marketer!

 

DMP

Match Game 2015

 

Ask me what my match rates are. I have no clue, and neither do you.

 

If you work in digital marketing for a brand or an agency, and you are in the market for a data management platform, you have probably asked a vendor about match rates. But, unless you are really ahead of the curve, there is a good chance you don’t really understand what you are asking for. This is nothing to be ashamed of – some of the smartest folks in the industry struggle here. With a few exceptions, like this recent post, there is simply not a lot of plainspoken dialogue in the market about the topic.

Match rates are a key factor in deciding how well your vendor can provide cross-device identity mapping in a world where your consumer has many, many devices. Marketers are starting to request “match rate” numbers as a method of validation and comparison among ad tech platforms in the same way they wanted “click-through rates” from ad networks a few years ago. Why?

As a consumer, I probably carry about twelve different user IDs: a few Chrome cookies, a few Mozilla cookies, several IDFAs for my Apple phone and tablets, a Roku ID, an Experian ID, and also a few hashed e-mail IDs. Marketers looking to achieve true 1:1 marketing have to reconcile all of those child identities to a single universal consumer ID (UID) to make sure I am the “one” they want to market to. It seems pretty obvious when you think about it, but the first problem to solve before any “matching” takes place whatsoever is a vendor’s ability to match people to the devices and browsers attached to them. That’s the first, most important match!

So, let’s move on and pretend the vendor nailed the cross-device problem—a fairly tricky proposition for even the most scaled platforms that aren’t Facebook and Google. They now have to match that UID against the places where the consumer can be found. The ability to do that is generally understood as a vendor’s “match rate.”

So, what’s the number? Herein lies the problem. Match rates are really, really hard to determine, and they change all the time. Plus, lots of vendors find it easier to say, “Our match rate with TubeMogul is 92%” and just leave it at that—even though it’s highly unlikely to be the truth. So, how do you separate the real story from the hype and discover what a vendor’s real ability to match user identity is? Here are two great questions you should ask:

What am I matching?

This is the first and most obvious question: Just what are you asking a vendor to match? There are actually two types of matches to consider: A vendor’s ability to match a bunch of offline data to cookies (called “onboarding”), and a vendor’s ability to match a set of cookie IDs to another set of cookie IDs.

First, let’s talk about the former. In onboarding—or matching offline personally identifiable information (PII) identities such as an e-mail with a cookie—it’s pretty widely accepted that you’ll manage to find about 40% of those users in the online space. That seems pretty low, but cookies are a highly volatile form of identity, prone to frequent deletion, and dependent upon a broad network of third parties to fire “match pixels” on behalf of the onboarder to constantly identify users. Over time, a strong correlation between the consumer’s offline ID and their website visitation habits—plus rigor around the collection and normalization of identity data—can yield much higher offline-to-online match results, but it takes effort. Beware the vendor who claims they can match more than 40% of your e-mails to an active cookie ID from the get-go. Matching your users is a process, and nobody has the magic solution.

As far as cookie-to-cookie user mapping, the ability to match users across platforms has more to do with how frequently your vendors fire match pixels. This happens when one platform (a DMP) calls the other platform (the DSP) and asks, “Hey, dude, do you know this user?” That action is a one-way match. It’s even better when the latter platform fires a match pixel back—“Yes, dude, but do you know this guy?”—creating a two-way identity match. Large data platforms will ask their partners to fire multiple match pixels to make sure they are keeping up with all of the IDs in their ecosystem. As an example, this would consist of a DMP with a big publisher client who sees most of the US population firing a match pixel for a bunch of DSPs like DataXu, TubeMogul, and The Trade Desk at the same time. Therefore, every user visiting a big publisher site would get that publisher’s DMP master ID matched with the three separate DSP IDs. That’s the way it works.
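The pixel-firing dance described above amounts to building a match table. A toy sketch follows; the platform names are reused from the text, and all of the IDs are invented for illustration:

```python
# Match table built up as pixels fire: DMP ID -> {partner: partner ID}.
match_table = {}

def fire_match_pixel(dmp_id, partner, partner_id):
    """One-way match: the DMP learns the partner's ID for this browser.
    A two-way match is simply the partner doing the same in reverse."""
    match_table.setdefault(dmp_id, {})[partner] = partner_id

# A single page view on a big publisher site can sync several DSPs at once.
fire_match_pixel("dmp_001", "DataXu", "dx_9")
fire_match_pixel("dmp_001", "TubeMogul", "tm_4")
fire_match_pixel("dmp_001", "TheTradeDesk", "ttd_7")

def cookie_match_rate(dmp_ids, partner):
    """Share of the DMP's users for whom a partner ID is known."""
    matched = sum(1 for uid in dmp_ids if partner in match_table.get(uid, {}))
    return matched / len(dmp_ids)
```

The match rate with any partner is then just the fraction of the DMP's IDs that have an entry for that partner, which is why the rate climbs with repeated pixel fires over time rather than appearing overnight.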

Given the scenario I just described, and even accounting for a high degree of frequency over time, match rates in the high-70-percent range are still considered excellent. So consider all of the work that needs to go into matching before you simply buy a vendor’s claim to have “90%” match rates in the cookie space. Again, this type of matching is also a process—and one involving many parties and counterparties—and not just something that happens overnight by flipping a switch, so beware of the “no problem” vendor answers.

What number are you asking to match?

Let’s say you are a marketer and you’ve gathered a mess of cookie IDs through your first-party web visitors. Now, you want to match those cookies against a bunch of cookie IDs in a popular DSP. Most vendors will come right out and tell you that they have a 90%+ match rate in such situations. That may be a huge sign of danger. Let’s think about the reality of the situation. First of all, many of those online IDs are not cookies at all, but Safari IDs that cannot be matched. So eliminate a good 20% of matches right off the bat. Next, we have to assume that a bunch of those cookies are expired, and no longer matchable, which adds another 20% to the equation. I could go on and on but, as you can see, I’ve just made a pretty realistic case for eliminating about 40% of possible matches right off the bat. That means a 60% match rate is pretty damn good.

Lots of vendors are actually talking about their matchable population of users, or the cookies you give them that they can actually map to their users. In the case of a DMP that is firing match pixels all day long, several times a day with a favored DSP, the match rate at any one time with that vendor may indeed be 90-100%–but only of the matchable population. So always ask what the numerator and denominator represent in a match question.
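The numerator/denominator point is easy to see with toy numbers, using the rough 20% Safari and 20% expired-cookie haircuts described above (all figures illustrative):

```python
def match_rate(matched, denominator):
    """Match rate is meaningless until you know what the denominator is."""
    return matched / denominator

total_cookies = 1_000_000          # everything the marketer hands over
safari_unmatchable = 200_000       # ~20% Safari IDs that can't be synced
expired = 200_000                  # ~20% stale or deleted cookies
matchable = total_cookies - safari_unmatchable - expired   # 600,000
matched = 570_000                  # what the vendor actually maps

rate_vs_matchable = match_rate(matched, matchable)      # the sales-deck number
rate_vs_total = match_rate(matched, total_cookies)      # the rate on what you sent
```

Against the matchable population the vendor can honestly claim 95%; against everything the marketer handed over, the same performance is 57%. Both numbers are "true," which is exactly why the denominator question matters.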

You might ask whether or not this means the popular DMP/DSP “combo” platforms come with higher match rates, or so-called “lossless integration,” since both the DMP and DSP share a single architecture and, therefore, a unified identity. The answer is yes, but that offers little differentiation when two separate DMP and DSP platforms are closely synched and actively user matching.

In conclusion

Marketers are obsessing over match rates right now, and they should be. There is an awful lot of “FUD” (fear, uncertainty, and doubt) being thrown around by vendors around match rates—and also a lot of BS being tossed around in terms of numbers. The best advice when doing an evaluation?

  • Ask what kind of cross-device graph your vendor supports. Without the fundamental ability to match people to devices, the “match rate” number you get is largely irrelevant.
  • Ask what numbers your vendor is matching. Are we talking about onboarding (matching offline IDs to cookies) or are we talking about cookie matching (mapping different cookie IDs in a match table)?
  • Ask how they are matching (what is the numerator and what is the denominator?)
  • Never trust a number without an explanation. If your vendor tells you “94.5%” be paranoid!
  • And, ask for a match test. The proof is in the pudding!
DMP

Programmatic has a personal side

“It’s not a technology revolution, it’s a mind-set revolution,” said Jeremy Hlavacek, VP for Programmatic at the Weather Channel. It’s about building data around customers to target relevant ads: the right message in the right place at the right time. It’s called programmatic, and there’s more to it than you might think.

What is it?

It was one of the key buzzwords of 2014, and everyone involved in selling and buying ad inventory seems set to be talking about “programmatic”–and figuring out what it really means–for years to come. Still young, and increasingly disruptive, it’s both a set of technologies and a mind-set, and it could change marketing and advertising in ways hardly yet foreseen.

Perhaps the simplest way to think of it is by analogy with the modern stock market. Where traders once walked the floor (yes, some still do), shaking hands on deals, most stock transactions these days take place at lightning speed on automated markets. The programmatic market for digital ad inventory is similar, leveraging software to purchase inventory in a way which also automates pricing–and it’s extending its reach to traditional (TV and billboard) ads too. Essentially, it’s about machines buying ads, thus setting the market price, with humans removed from the process as much as possible.

But is that all it is? Just a way of doing what ad tech already does, but ever faster, and on an ever larger scale? From what I heard at the Incite Programmatic Summit in New York this week, it has the potential to be much more than that.

Putting it Together

The Incite Summit audience might not have been huge, but the concentration of major brands, as speakers or audience members, was impressive: Jaguar, Stolichnaya, ESPN, Sega, Fox News, Wells Fargo. Speaking with attendees between sessions or over lunch, I was surprised to hear no skepticism about programmatic at all. People I met were either completely new to programmatic, or had been using it in some form or other for no more than a couple of years–but everyone thought the potential for business transformation was huge.

Fertile ground for disruption.

Early days, then. As Chris O’Hara of Krux said, the programmatic market is so crowded–there are so many possible choices of vendor or approach–that it’s “fertile ground for disruption; for someone to just come in and change the model.” In other words, we may not even be looking at the true shape of programmatic yet.

Krux is one of the major players in the data management platform segment, sitting between publishers, agencies and brands to optimize the value of inventories and marketing budgets. O’Hara’s breakdown of the data universe helps show both the potential of programmatic, and the challenges facing it, when it comes to delivering personalized messages at blinding speed. There are three kinds of data:

  • First party data: a brand’s own data about their customers based on purchase behavior and other touch-points. Easy to access, in some cases (financial services, for example), very rich indeed, but not usually very extensive.
  • Second party data: the data readers choose to give to publishers and social media platforms. Also very rich, and large in scale, but–like Facebook data–generally in walled gardens, and can be expensive.
  • Third party data: available from data vendors in huge quantities, but the third party providers have incentives to sell as much of it as they can, and it’s regarded as highly unreliable.

If collecting good data is the first challenge, the second lies in identifying customers, especially across multiple channels and devices. As Hlavacek pointed out, with imperfect data sets one can’t expect perfect customer identification. But probabilistic identification can be enough. Even bad data is better than no data, and results which are only ten percent accurate can be very valuable.

If that’s what you want to do, you don’t need all of this.

Programmatic can be used, of course, just to firehose customers with content, but Hlavacek would say, “If that’s what you want to do, you don’t need all of this.” For those customers accurately identified, algorithms can be leveraged to dynamically model the messages they should be receiving. Isn’t that what marketers have always done, with or without the algorithms?  Yes, but programmatic means automating the process on a large scale, at very high speed, and integrating it directly with the purchase of ad inventory, and across multiple channels.

Speakers admitted there’s still a big gap between the concept of personalized programmatic and what the creative side–accustomed to developing one compelling message for a large market–is geared up to provide.

Even relevant messaging can be intrusive, of course. Jim Caruso, VP of product strategy at Varick Media Management, a programmatic vendor, had it about right: “Customers are everywhere, but don’t want to be reached everywhere.” But if customer identities can be established and centralized, automated frequency management should be able to cap repeat messaging just at the sweet-spot of providing enough reinforcement without becoming an annoyance.

A Programmatic Future

If you want to take a deeper dive into programmatic, you could do much worse than check out ProgrammaticAdvertising.Org. It’s sponsored by the B2B digital marketing company Multiview, but far from being a market-place for the sponsor, it carries wide-ranging and clear-eyed commentary on all things programmatic, from analytics to standards. I spoke with publisher Nicholas Henderson about where programmatic is now–and where it’s going.

“Right now it’s all very high-level and jargony,” he said. “For stakeholders that’s fine, but it can tend towards increasing confusion for marketers.” Henderson emphasizes the human side of programmatic. That’s almost counter-intuitive, given its promise of large-scale automation, but Henderson insists that people aren’t buying mechanization and algorithms, but human creative thinking.

Imagine how it would revolutionize a consumer’s experience.

“There’s a lot of buzz around dynamic creative,” he told me. “Imagine how it would revolutionize a consumer’s experience.” Mobile has all but made the website cookie extinct, but collecting contextual and behavioral data via logins or unique device IDs should make it even more possible to tailor unique and relevant experiences. “Done properly it can be very subtle.” Right now, the real-time analytics involved probably need to be outsourced to something like a data management platform vendor, but there are so many in the space that the skill-sets seem ripe for purchase and integration by brands or large agencies.

The bottom line? Caruso summed it up: “Programmatic is not about pricing and buying ads. It’s about building data around customers to target relevant ads.” We may not be seeing quite the right business model yet, or clean enough data–and creative may not yet realize what’s possible–but once those pieces fall into place, hold tight for a programmatic future.

DMP

DMP 4-5-6

NEXTLEVELAs I’ve previously discussed, there are several basic use cases of the modern data management platform (DMP) for marketers. They include getting “people data” from addressable devices into a single system, controlling how it’s matched with different execution platforms and managing the frequency of messaging across devices.

In a world of ultra-fragmented device identity and multiple addressable media channels, you should be able to tie them together and make sure consumers get the optimal amount of messages. Big marketers use these tactics to save tons of money by chopping off the “long tail” of impressions, such as when marketers deliver more than 30 impressions per user each month, and reinvesting to find more deduplicated reach.

There is so much more to the successful application of a DMP, though. The most cutting-edge marketers are taking DMPs to the next level, after investing the time in building consumer identity graphs and getting their match rates with execution platforms as high as possible.

There are several plays you can run when you start to dig in and put the data to work. 

Supercharge The Bidding Strategy

After identifying the long tail of impression frequency and diverting that investment into reach, where users are served up to three impressions per month, the key is driving users down into the sweet spot of frequency. This is where users are more likely to download more coupons, for example, or complete more video views.

If that sweet spot is between four and 20 impressions, marketers can adjust their strategy in biddable environments to ensure they are willing to pay more to “win” users who have only been exposed to three impressions so far. DMPs can match users with fidelity and deliver these types of targeting sets in near real time to multiple execution platforms, including those for display, video and search.
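As a sketch, a bidding rule keyed to the four-to-20 sweet spot described above might look like the following. The multipliers are arbitrary placeholders, not a recommendation; real bidders would learn these values rather than hard-code them:

```python
def bid_multiplier(exposures, sweet_spot=(4, 20)):
    """Scale the base bid by where a user sits relative to the
    frequency sweet spot (illustrative thresholds and multipliers)."""
    low, high = sweet_spot
    if exposures < low:
        return 1.5   # under-exposed: bid up to push them into the sweet spot
    if exposures <= high:
        return 1.0   # in the sweet spot: bid normally
    return 0.0       # over-exposed: stop bidding entirely
```

A user seen three times gets a premium bid, a user at twelve impressions gets the standard bid, and a user past twenty is suppressed, which is exactly the "shut off the spigot" behavior described earlier.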

Optimize Partner Investment Through Reach Analysis  

It’s a great start to manage addressable media delivery on a global basis, but what happens after you have identified all of those wasted impressions?

Naturally, the money marketers are spending reaching consumers for the 100th time can be better spent looking for net new consumers. But how do you get them?

For a diaper manufacturer that wants to reach the estimated 6 million new mothers in market every year, it’s critically important to get to 100% reach against that audience. Many marketers start with a single, broad reach partner, such as Yahoo, and see how close they can get to total reach.

It’s fantastic to leverage big spending power to drive down prices and get massive customer service attention to spread a message to as many unique users as possible. But no single partner can get a marketer to 100%. That’s where the DMP comes in.

Filling in the missing 25% of an audience isn’t the only thing that matters; the diaper manufacturer wants to hit those incremental moms across quality, well-lit sites. Determining where you can get a few more million deduplicated moms is the first step. The key is to then decide where to find them more effectively from an investment standpoint, which requires an overlap analysis.

Enhance Partner Selection Through Overlap Analysis 

Say our diaper manufacturer found 4 million new moms on Yahoo at a reasonable CPM. The DMP can then look across all addressable media investments and run a “Where are my people?” type of analysis.

Maybe this advertiser has another 20 partners on the plan after getting the bulk of unique reach from a single partner. How many more unique moms were found on Meredith? Moreover, how about finding moms on classic news and entertainment sites, such as NBC or Turner properties, or even non-endemic sites? Maybe there are an incremental 500,000 first-party “diaper moms” on a particular site, but now the advertiser can decide, based on performance KPIs and price, how valuable those particular moms are.

If those moms on a popular news site can be had for $5 CPM, maybe they are a more valuable reach vehicle than those found on the obvious “Moms.com” site. Without the DMP, they’ll never know.
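Once audiences are deduplicated to people, the overlap analysis reduces to set arithmetic. A toy sketch, with invented person IDs and CPMs standing in for the DMP's audience graph:

```python
def incremental_reach(plan, base_partner):
    """For each partner, count the unique users NOT already reached
    through the base partner, alongside that partner's CPM."""
    base = plan[base_partner]["ids"]
    results = {}
    for name, data in plan.items():
        if name == base_partner:
            continue
        incremental = data["ids"] - base   # set difference = deduplicated reach
        if incremental:
            results[name] = (len(incremental), data["cpm"])
    return results

# Hypothetical audiences (sets of person IDs) and CPMs per partner.
plan = {
    "Yahoo":    {"ids": {1, 2, 3, 4}, "cpm": 6.0},
    "Moms.com": {"ids": {3, 4, 5},    "cpm": 12.0},
    "NewsSite": {"ids": {4, 5, 6, 7}, "cpm": 5.0},
}
overlap = incremental_reach(plan, "Yahoo")
```

In this toy plan the non-endemic news site adds three unique moms at a $5 CPM, while the obvious endemic site adds only one at $12, which is precisely the comparison the overlap analysis makes visible.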

Plus, marketers are also starting to optimize the way they procure such audiences, by leapfrogging over the existing ad tech ecosystem and doing audience-based programmatic direct buying using their new DMP pipes.

Understand KPI Drivers Through Journey Building

Marketers that have deduplicated their audience and built an effective reach strategy can now go to the next level and start finding how those diaper moms moved from their first touch point in the customer journey to an actual action, such as downloading a retail coupon or requesting a sample package. When an audience is unified through a DMP, it’s possible to see the channels through which people move across their “customer journey” from awareness to action.

As an example, more large CPG companies are putting more investment into online video and, in fact, one of the world’s largest marketers has embraced a “ban the banner” approach and values engagement more than any other KPI – a metric more easily understood with video. With that in mind, a journey analysis can show marketers if seeing a few search impressions helped drive more completed views on (expensive) video and drive more brand engagement.

Did consumers download more coupons after viewing two equity (branding) impressions or before seeing the “buy now” (direct-response) message? The ability to understand how messages work together sequentially is ultimately what informs media investment strategy.
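A journey analysis of this kind starts by counting the channel sequences that precede a conversion. A minimal sketch over toy event logs (one list of touchpoints per person, with invented channel names):

```python
from collections import Counter

def top_paths(journeys, n=2):
    """Count the most common channel sequences that ended in a
    conversion; journeys that never convert are ignored."""
    paths = Counter(tuple(j) for j in journeys if j and j[-1] == "convert")
    return paths.most_common(n)

# Toy per-person touchpoint logs.
journeys = [
    ["search", "video", "convert"],
    ["search", "video", "convert"],
    ["display", "convert"],
    ["display", "search"],            # no conversion: excluded from the count
]
paths = top_paths(journeys)
```

Real journey analysis layers on time windows and fractional credit, but even this simple path count can answer questions like whether search exposure tends to precede completed (expensive) video views.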

These are just a few of the next-level media use cases that can be accomplished once DMP fundamentals are put in place. DMPs are starting to shine a light on the “people data” that will drive the next decade of smart media investment. I think we will look back on the last 15 years of addressable marketing and wonder how we ever made such decisions without a clear view of audience first.

DMPs are starting to shine a light on the effectiveness of marketing, and giving marketers lots of new knobs and levers to pull.

It’s a great time to be a data-driven marketer.

Follow Chris O’Hara (@chrisohara) and AdExchanger (@adexchanger) on Twitter