How AI will Change UX

In 1960, the US Navy coined a design principle: Keep it simple, stupid.

When it comes to advertising and marketing technology, we haven’t enjoyed a lot of “simple” over the last dozen years or so. In an increasingly data-driven world where delivering a relevant customer experience makes all the difference, we have embraced complexity over simplicity, dealing in acronyms, algorithms and now machine learning and artificial intelligence (AI).

When the numbers are reconciled and the demand side pays the supply side, what we have mostly been doing is pushing a lot of data into digital advertising channels and nibbling around the edges of performance, trying to optimize sub-1% click-through rates.

That minimal uptick in performance has come at the price of some astounding complexity: ad exchanges, third-party data, second-price auctions and even the befuddling technology known as header bidding. Smart, technical people struggle with these concepts, but we have embraced them as the secret handshake in a club that pays its dues by promising to manage that complexity away.

Marketers, however, are not stupid. They have steadily been taking ownership of their first-party data and starting to build marketing tech stacks that attempt to add transparency and efficiency to their outbound marketing, while eliminating many of the opaque ad tech taxes levied by confusing and ever-growing layers of licensed technology. Data management platforms, at the heart of this effort to take back control, have seen increased penetration among large marketers – and this trend will not stop.

This is a great thing, but we should remember that we are in the third inning of a game that will certainly go into extra innings. I remember what it was like to save a document in WordPerfect, send an email using Lotus Notes and program my VCR. Before point-and-click interfaces, such tasks were needlessly complex. Ever try to program the hotel’s alarm clock just in case your iPhone battery runs out? In a world of delightful user experience and clean, simple graphical interfaces, such a task becomes complex to the point of failure.

Why Have We Designed Such Complexity Into Marketing Technology?

We are, in effect, giving users who want big buttons and levers the graphical-user-interface equivalent of an Airbus A380 cockpit: tons of granular and specific controls that may take a minute to learn, but a lifetime to master.

How can we change this? The good news is that change has already arrived, in the form of machine learning and artificial intelligence. When you go on Amazon or Netflix, do you have to program any of your preferences before getting really amazing product and movie recommendations? Of course not. Such algorithmic work happens on the back end where historical purchases and search data are mapped against each other, yielding seemingly magical recommendations.

Yet, when airline marketers go into their ad tech platform, we somehow expect them to inform the system of myriad attributes which comprise someone with “vacation travel intent” and find those potential customers across multiple channels. Companies like Expedia tell us just what to pay for a hotel room with minimal input, but we expect marketers to have internal data science teams to build propensity models so that user scores can be matched to a real-time bidding strategy.

One of the biggest trends we will see over the next several years is what could be thought of as the democratization of data science. As data-driven marketing becomes the norm, the winners and losers will be sorted out by their ability to build robust first-party data assets and leverage data science to sift the proverbial wheat from the chaff.

This capability will go hand-in-hand with an ability to map all kinds of distinct signals – mobile phones, tablets, browsers, connected devices and beacons – to an actual person. This is important for marketers because browsers and devices never buy anything, but customers do. Leading-edge companies will depend on data science to learn more about increasingly hard-to-find customers, understand their habits, gain unique insights about what prompts them to buy and leverage those insights to find them in the very moment they are going to buy.

In today’s world, that starts with data management and ends with finding people on connected devices. The problem is that execution is quite difficult to automate and scale. Systems still require experts who understand data strategy, specific use cases and the value of an organization’s siloed data when stitched together. Plus, you need great internal resources and a smart agency capable of execution once that strategy is actually in place.

However, the basic data problems we face today are not actually that complicated. Thomas Bayes worked them out some 250 years ago with probabilistic equations we still depend on today. The real trick involves packaging that Bayesian magic in such a way that the everyday marketer can go into a system containing “Hawaiian vacation travel intenders” for a winter travel campaign and push a button that says, “Find me more of these – now!”
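As a minimal, hypothetical sketch of that Bayesian machinery (every rate below is invented for illustration), Bayes’ rule is enough to score how likely a user is to be a travel intender given one observed signal:

```python
# Hypothetical sketch of Bayes' rule applied to audience scoring.
# All rates are invented illustration values, not real campaign data.

def posterior(prior, likelihood, false_positive_rate):
    """P(intender | signal observed), via Bayes' theorem."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Say 2% of users are "Hawaiian vacation travel intenders" (prior),
# 60% of intenders search flight prices (likelihood), and 5% of
# everyone else does too (false positive rate).
score = posterior(prior=0.02, likelihood=0.60, false_positive_rate=0.05)
print(f"P(intender | searched flights) = {score:.1%}")  # about 19.7%
```

One observed signal lifts a 2% base rate to roughly 20%; stacking further independent signals the same way is, in essence, what sits behind the “Find me more of these” button.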

Today’s problem is that we depend on either a small number of “power users” – or the companies themselves – to put all of this amazing technology to work, rather than simply serving up the answers and offering a big red button to push.

A Simpler Future For Marketers?

Instead of building high-propensity segments and waiting for users to target them, tomorrow’s platforms will offer preselected lists of segments to target. Instead of having an agency’s media guru perform a marketing-mix model to determine channel mix, martech stacks will simply allocate expenditures across channels automatically based on the people data available. Instead of setting complex bid parameters by segment, artificial intelligence layers will automatically control pricing based on bid density, frequency of exposure and propensity to buy – while automatically suppressing users who have converted from receiving that damn shoe ad again.

This is all happening today, and it is happening right on time. In a world with only tens of thousands of data scientists and enough jobs for millions of them, history will be written by the companies clever enough to hide the math on the server side and give users the elegance of a simple interface where higher-level business decisions will be made.

We are entering into a unique epoch in our industry, one in which the math still rules, but the ability of designers to make it accessible to the English majors who run media will rule supreme.

It’s a great time to be a data-driven marketer! Happy New Year.

Follow Chris O’Hara (@chrisohara) and AdExchanger (@adexchanger) on Twitter. 

(Interview) On Beacons and DMPs

How Beacons Might Alter The Data Balance Between Manufacturers And Retailers

As Salesforce integrates DMP Krux, Chris O’Hara considers how proximity-based personalization will complement access to first-party data. For one thing, imagine how coffeemakers could form the basis of the greatest OOH ad network.

How CRM and a DMP can combine to give a 360-degree view of the customer

For years, marketers have been talking about building a bridge between their existing customers, and the potential or yet-to-be-known customer.

Until recently, the two have rarely been connected. Agencies have separate marketing technology, data and analytics groups. Marketers themselves are often separated organizationally between “CRM” and “media” teams – sometimes even by a separate P&L.

Of course, there is a clearer dividing line between marketing tech and ad tech: personally identifiable information, or PII. Marketers today have two different types of data, from different places, with different rules dictating how it can be used.

In some ways, it has been natural for these two marketing disciplines to be separated, and some vendors have made a solid business from the work necessary to bridge PII data with web identifiers so people can be “onboarded” into cookies.

After all, marketers are interested in people, from the very top of the funnel, when they visit a website as anonymous visitors, all the way down to the bottom of the funnel, after they are registered as customers and we want to make them brand advocates.

It would be great — magic even — if we could accurately understand our customers all the way through their various journeys (the fabled “360-degree view” of the customer) and give them the right message, at the right place and time. The combination of a strong CRM system and an enterprise data management platform (DMP) brings these two worlds together.

Much of this work is happening today, but it’s challenging with lots of ID matching, onboarding, and trying to connect systems that don’t ordinarily talk to one another. However, when CRM and DMP truly come together, it works.

What are some use cases?

Targeting people who haven’t opened an email

You might be one of those people who doesn’t open or engage with every promotional email in your inbox, or who uses a smart filter to capture all of the marketing messages you receive every month.

To an email marketer, these people represent a big chunk of their database. Email is without a doubt one of the most effective digital marketing channels, even though as few as 5% of people who engage are active buyers. It’s also a relatively straightforward way to predict return on advertising spend, based on historical open and conversion rates.

The connection between CRM and DMP enables the marketer to reach the 95% of their database everywhere else on the web, by connecting that (anonymized) email ID to the larger digital ecosystem: places like Facebook, Google, Twitter, advertising exchanges, and even premium publishers.

Understanding where the non-engaged email users are spending their time on the web, what they like, and their behavior, income and buying habits is all now possible. The marketer has the “known” view of this customer from their CRM, but can also utilize vast sets of data to enrich their profile and better engage them across the web.

Combining commerce and service data for journeys and sequencing

When we think of the customer journey, it gets complicated quickly. A typical ad campaign may feature thousands of websites, multiple creatives, different channels, a variety of different ad sizes and placements, delivery at different times of day and more.

When you map these variables against a few dozen audience segments, the combinatorial values get into numbers with a lot of zeros on the end. In other words, the typical campaign may have hundreds of millions of activities — and tens of millions of different ways a customer goes from an initial brand exposure all the way through to a purchase and then becoming a brand advocate.
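The arithmetic behind those zeros is easy to check. The dimension counts below are hypothetical, but typical for a large campaign:

```python
# Back-of-the-envelope count of distinct delivery combinations for a
# campaign. The dimension sizes are hypothetical illustration values.
sites = 1000       # websites in the media plan
creatives = 4      # distinct creative executions
channels = 5       # display, video, mobile, social, search
ad_sizes = 6       # placements and sizes
dayparts = 8       # delivery windows per day
segments = 36      # audience segments

combinations = sites * creatives * channels * ad_sizes * dayparts * segments
print(f"{combinations:,} distinct delivery combinations")  # 34,560,000
```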

How can you automatically discover the top 10 performing journeys?

Understanding which channels go together, and which sequences work best, can add up to tremendous lift for marketers.

For example, a media and entertainment company promoting a new show recently discovered that running display advertising all week and then targeting the same people with a mobile “watch it tonight” message on the night it aired produced a 20% lift in tune-in compared to display alone. Channel mix and sequencing work.

And that’s just the tip of the iceberg — we are only talking about web data.

What if you could look at a customer journey and find out that the call-to-action message resonated 20% higher one week after a purchase?

A pizza chain that tracks orders in its CRM system can start to understand the cadence of delivery (e.g. Thursday night is “pizza night” for the Johnson family) and map its display efforts to the right delivery frequency, ensuring the Johnsons receive targeted ads during the week, and a mobile coupon offer on Thursday afternoon, when it’s time to order.
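The cadence detection itself can be surprisingly simple. A hedged sketch, with an invented order history for one household, is just a weekday count:

```python
# Hypothetical sketch: infer a household's order cadence from CRM order
# dates so ad frequency and coupon timing can be mapped to it.
from collections import Counter
from datetime import datetime

orders = [  # invented order history for one household
    "2015-01-08", "2015-01-15", "2015-01-22", "2015-02-05", "2015-02-12",
]

weekday_counts = Counter(
    datetime.strptime(d, "%Y-%m-%d").strftime("%A") for d in orders
)
pizza_night, count = weekday_counts.most_common(1)[0]
print(f"Likely pizza night: {pizza_night} ({count} of {len(orders)} orders)")
# Likely pizza night: Thursday (5 of 5 orders)
```

A real system would obviously weigh recency and handle noisy histories, but the core signal is no more exotic than this.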

How about a customer that has called and complained about a missed delivery, or a bad product experience? It’s probably a terrible idea to try and deliver a new product message when they have an outstanding customer ticket open. Those people can be suppressed from active campaigns, freeing up funds for attracting net new customers.

There are a lot of obvious use cases that come to mind when CRM data and web behavioral data is aligned at the people level. It’s simple stuff, but it works.

As marketers, we find ourselves seeking more and more precise targeting but, half the time, knowing when not to send a message is the more effective action.

As we start to see more seamless connections between CRM (existing customers) and DMPs (potential new customers), we imagine a world in which artificial intelligence can manage the cadence and sequence of messages based on all of the data — not just a subset of cookies, or email open rate.

As the organizational and technological barriers between CRM and DMP break down, we are seeing the next phase of what Gartner says is the “marketing hub” of interconnected systems or “stacks” where all of the different signals from current and potential customers come together to provide that 360-degree customer view.

It’s a great time to be a data-driven marketer!

Chris O’Hara is the head of global marketing for Krux, the Salesforce data management platform.

(Coverage) Salesforce Bolsters Einstein AI With Heavy-Duty Data Management

Through its acquisition of Krux, Salesforce is combining its artificial intelligence (AI) layer with deeper data management in Salesforce Marketing Cloud.

Customer Relationship Management and Data Management come together in a delicious way.

Today at its Salesforce World Tour stop in New York, the company began to pull back the curtain on how its AI and data layers will work together. Salesforce announced new AI, audience segmentation, and targeting features for Marketing Cloud based on its recent acquisition of data management platform Krux. The company’s new Marketing Cloud features, available today, add more data-driven advertising tools and an Einstein Journey Insights dashboard for monitoring end-to-end customer engagement in everything from e-commerce to email marketing.

Salesforce unveiled its Einstein AI platform this year, baking predictive algorithms, machine and deep learning, as well as other data analysis features throughout its Software-as-a-Service (SaaS) cloud. Einstein is essentially an AI layer between the data infrastructure underneath and the Salesforce apps and services on top. The CRM giant is no stranger to big money acquisitions, most recently scooping up Demandware for $2.8 billion and making a play for LinkedIn before Microsoft acquired it. The Krux acquisition gives Salesforce a new, data-driven customer engagement vector.

“We’re working to apply AI to all our applications,” said Eric Stahl, Senior Vice President of Marketing Cloud. “In Marketing Cloud, Krux now gives us the ability to do things like predictive journeys to help the marketer figure out which products to recommend. We can do complex segmentation, inject audiences into various ad networks, and do large-scale advertising informed by Sales Cloud and Service Cloud data.”

As Salesforce and Krux representatives demonstrated Krux and how it fits into the Marketing Cloud, the data management platform acted more like a business intelligence (BI) or data visualization tool than a CRM or marketing platform. Chris O’Hara, head of Global Data Strategy at Krux, talked about the massive quantities of data the platform manages, including an on-demand analytics environment of 20 petabytes (PB)—the entire internet archive is only 15 PB.

“This is our idea of democratizing data for business users who don’t have a PhD in data science,” said O’Hara. “You can use Krux machine-learned segments to find out something you don’t know about your audience, or do a pattern analysis to understand the attributes of those users that correlate greatly. We’re hoping to use those kinds of signals to power Einstein and do things like user scoring and propensity modeling.”

The Einstein Journey Insights feature is designed to analyze “hundreds of millions of data points” to identify an optimal customer conversion path. In addition to its Krux-powered Marketing Cloud features, Salesforce also announced a new conversational messaging service called LiveMessage this week for its Salesforce Service Cloud. LiveMessage integrates SMS text and Facebook Messenger with the Service Cloud console for interactions between customers and a company’s helpdesk bots.

The more intriguing implications here are what Salesforce might do with massively scaled data infrastructure like Krux beyond the initial integration. According to O’Hara, in addition to its analytics environment, Krux also processes more than 5 billion CRM records monthly and 4.5 million data capture events every minute, and maintains a native device graph of more than 3.5 billion active devices and browsers per month. Without getting into specifics, Salesforce’s Stahl said there will be far more crossover between Krux data management and Einstein AI to come. In the data-plus-AI equation, the potential here is exponential scale.


Match Game 2015

If you work in digital marketing for a brand or an agency, and you are in the market for a data management platform, you have probably asked a vendor about match rates. But, unless you are really ahead of the curve, there is a good chance you don’t really understand what you are asking for. This is nothing to be ashamed of – some of the smartest folks in the industry struggle here. With a few exceptions, like this recent post, there is simply not a lot of plainspoken dialogue in the market about the topic.

Match rates are a key factor in deciding how well your vendor can provide cross-device identity mapping in a world where your consumer has many, many devices. Marketers are starting to request “match rate” numbers as a method of validation and comparison among ad tech platforms in the same way they wanted “click-through rates” from ad networks a few years ago. Why?

As a consumer, I probably carry about twelve different user IDs: a few Chrome cookies, a few Mozilla cookies, several IDFAs for my Apple phone and tablets, a Roku ID, an Experian ID, and a few hashed e-mail IDs. Marketers looking to achieve true 1:1 marketing have to reconcile all of those child identities to a single universal consumer ID (UID) to make sure I am the “one” they want to market to. It seems pretty obvious when you think about it, but the first problem to solve before any “matching” takes place whatsoever is a vendor’s ability to match people to the devices and browsers attached to them. That’s the first, most important match!
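A toy version of that first match might look like the snippet below. The identifiers and the graph are invented for illustration; real platforms build this mapping deterministically (logins) or probabilistically, at enormous scale:

```python
# Hypothetical sketch of identity resolution: many "child" IDs
# (cookies, IDFAs, hashed emails) roll up to one universal ID (UID).
identity_graph = {
    "uid-001": {"chrome-cookie-abc", "mozilla-cookie-def",
                "idfa-9f2", "roku-44", "hashed-email-7d1"},
}

def resolve(child_id, graph):
    """Return the universal consumer ID a child identifier maps to, if known."""
    for uid, children in graph.items():
        if child_id in children:
            return uid
    return None

print(resolve("idfa-9f2", identity_graph))  # uid-001
```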

So, let’s move on and pretend the vendor nailed the cross-device problem—a fairly tricky proposition for even the most scaled platforms that aren’t Facebook and Google. They now have to match that UID against the places where the consumer can be found. The ability to do that is generally understood as a vendor’s “match rate.”

So, what’s the number? Herein lies the problem. Match rates are really, really hard to determine, and they change all the time. Plus, lots of vendors find it easier to say, “Our match rate with TubeMogul is 92%” and just leave it at that—even though it’s highly unlikely to be the truth. So, how do you separate the real story from the hype and discover what a vendor’s real ability to match user identity is? Here are two great questions you should ask:

What am I matching?

This is the first and most obvious question: Just what are you asking a vendor to match? There are actually two types of matches to consider: A vendor’s ability to match a bunch of offline data to cookies (called “onboarding”), and a vendor’s ability to match a set of cookie IDs to another set of cookie IDs.

First, let’s talk about the former. In onboarding—or matching offline personally identifiable information (PII) identities such as an e-mail with a cookie—it’s pretty widely accepted that you’ll manage to find about 40% of those users in the online space. That seems pretty low, but cookies are a highly volatile form of identity, prone to frequent deletion, and dependent upon a broad network of third parties to fire “match pixels” on behalf of the onboarder to constantly identify users. Over time, a strong correlation between the consumer’s offline ID and their website visitation habits—plus rigor around the collection and normalization of identity data—can yield much higher offline-to-online match results, but it takes effort. Beware the vendor who claims they can match more than 40% of your e-mails to an active cookie ID from the get-go. Matching your users is a process, and nobody has the magic solution.

As far as cookie-to-cookie user mapping goes, the ability to match users across platforms has more to do with how frequently your vendors fire match pixels. This happens when one platform (a DMP) calls the other platform (the DSP) and asks, “Hey, dude, do you know this user?” That action is a one-way match. It’s even better when the latter platform fires a match pixel back—“Yes, dude, but do you know this guy?”—creating a two-way identity match. Large data platforms will ask their partners to fire multiple match pixels to make sure they are keeping up with all of the IDs in their ecosystem. As an example, this might consist of a DMP with a big publisher client who sees most of the US population firing a match pixel for a bunch of DSPs like DataXu, TubeMogul, and The Trade Desk at the same time. Every user visiting that big publisher’s site would then get the publisher’s DMP master ID matched with the three separate DSP IDs. That’s the way it works.

Given the scenario I just described, and even accounting for a high degree of frequency over time, match rates in the high 70s are still considered excellent. So consider all of the work that needs to go into matching before you simply buy a vendor’s claim of “90%” match rates in the cookie space. Again, this type of matching is also a process—one involving many parties and counterparties—and not something that happens overnight by flipping a switch, so beware of the “no problem” vendor answers.

What number are you asking to match?

Let’s say you are a marketer and you’ve gathered a mess of cookie IDs through your first-party web visitors. Now you want to match those cookies against a bunch of cookie IDs in a popular DSP. Most vendors will come right out and tell you that they have a 90%+ match rate in such situations. That may be a huge warning sign. Let’s think about the reality of the situation. First of all, many of those online IDs are not cookies at all, but Safari IDs that cannot be matched. So eliminate a good 20% of matches right off the bat. Next, we have to assume that a bunch of those cookies are expired and no longer matchable, which takes another 20% out of the equation. I could go on and on but, as you can see, I’ve just made a pretty realistic case for eliminating about 40% of possible matches. That means a 60% match rate is pretty damn good.

Lots of vendors are actually talking about their matchable population of users, or the cookies you give them that they can actually map to their users. In the case of a DMP that is firing match pixels all day long, several times a day with a favored DSP, the match rate at any one time with that vendor may indeed be 90-100%, but only of the matchable population. So always ask what the numerator and denominator represent in a match question.
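The numerator/denominator question can be made concrete with a quick sketch; all counts below are hypothetical:

```python
# Why the denominator matters: match rate against everything you sent
# vs. against the vendor's "matchable population." Counts are invented.

total_cookies = 1_000_000                 # cookie IDs the marketer hands over
safari_ids = int(total_cookies * 0.20)    # Safari IDs that cannot be matched
expired = int(total_cookies * 0.20)       # stale cookies, no longer matchable
matchable = total_cookies - safari_ids - expired

matched = 570_000                         # users the vendor actually maps

print(f"Rate vs. everything sent:      {matched / total_cookies:.0%}")  # 57%
print(f"Rate vs. matchable population: {matched / matchable:.0%}")      # 95%
```

Both numbers are “true”; only the denominator changed, which is exactly why you ask what it represents.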

You might ask whether this means the popular DMP/DSP “combo” platforms come with higher match rates, or so-called “lossless integration,” since both the DMP and DSP share a single architecture and, therefore, a unified identity. The answer is yes, but that offers little differentiation when two separate DMP and DSP platforms are closely synced and user matching is frequent.

In conclusion

Marketers are obsessing over match rates right now, and they should be. There is an awful lot of “FUD” (fear, uncertainty, and doubt) being spread by vendors on the subject of match rates—and a lot of BS being tossed around in terms of numbers. The best advice when doing an evaluation?

  • Ask what kind of cross-device graph your vendor supports. Without the fundamental ability to match people to devices, the “match rate” number you get is largely irrelevant.
  • Ask what numbers your vendor is matching. Are we talking about onboarding (matching offline IDs to cookies) or are we talking about cookie matching (mapping different cookie IDs in a match table)?
  • Ask how they are matching (what is the numerator and what is the denominator?)
  • Never trust a number without an explanation. If your vendor tells you “94.5%” be paranoid!
  • And, ask for a match test. The proof is in the pudding!

DMP 1-2-3

Almost every marketer is starting to lean into data management technology. Whether they are trying to build an in-house programmatic practice, use data for site personalization, or trying to obtain the fabled “360 degree user view,” the goal is to get a handle on their data and weaponize it to beat their competition.

In the right hands, a data management platform (DMP) can do some truly wonderful things. With so many use cases, different ways to leverage data technology, and fast-moving buzzwords, it’s easy for early conversations to get way too “deep in the weeds” and devolve into discussions of “match rates” and how cross-device identity management works. The truth is that data management technology can be much simpler than you think.

At its most basic level, DMP comes down to “data in” and “data out.” While there are many nuances around the collection, normalization, and activation of the data itself, let’s look at the “data in” story, the “data out” story, and an example of how those two things come together to create an amazing use case for marketers.

The “Data In” Story

To most marketers, the voodoo that happens inside the machine isn’t the interesting part of the DMP, but it’s really where the action happens. Understanding the “truth” of user identity (who are all these anonymous people I see on my site and apps?) is what makes the DMP useful in the first place, making one-to-one marketing and understanding customer journeys something that goes beyond AdExchanger article concepts, and starts to really make a difference!

  • Not Just Cookies: Early DMPs focused on mapping cookie IDs to a defined taxonomy and matching those cookies with execution platforms. Most DMPs—from lightweight “media DMPs” inside of DSPs to full-blown “first-party” platforms—handle this type of data collection with ease. Most first-generation DMPs were architected as cookie collection and distribution platforms, meant to associate a cookie with an audience segment and pass it along to a DSP for targeting. The problem is that people are spending more time in cookie-less environments and more time on mobile (and other devices). That means today’s DMPs must do more than organize cookies; they must also capture a wide variety of disparate identity data, which can include hashed CRM data, data from a point-of-sale (POS) system, and maybe even data from a beacon signal.
  • Ability to Capture Device Data: To a marketer, I look like eight different “Chris O’Haras”: three Apple IDFAs, several Safari unique browser signatures, a Roku device ID, and a hashed e-mail identity or two. These “child identities” must be reconciled to a “universal ID” that is persistent and collects attributes over time. Most DMPs were architected to store and manage cookies for display advertising, not cross-device applications, so platforms’ ability to ingest highly differentiated structured and unstructured data is all over the map. Yet, with more and more time dedicated to devices instead of desktop, cookies only cover 40% of today’s pie.
  • Embedded Device Graph: Cross-device identification is notoriously difficult, requiring either the ability to identify people through deterministic data (users authenticate across mobile and desktop devices) or the skill to apply smart algorithms across massive datasets to make probabilistic guesses that match users and their devices. Over the next several years, the term “device graph” will figure prominently in our industry, as more companies try to innovate a path to cross-device user identity—without data from “walled garden” platforms like Google and Facebook. Since most algorithms operate in the same manner, look for scale of data; the bigger the user set, the more “truth” the algorithms can identify and model to make accurate guesses of user identity.

The “data in” story is the fundamental part of the DMP—without the ability to ingest all kinds of identifiers and understand the truth of user identity, one-to-one marketing, sequential messaging, and true attribution are impossible.

The “Data Out” Story

While the “data in” story gets pretty technical, the “data out” story starts to really resonate with marketers because it ties three key aspects of data-driven marketing together. Here’s what a DMP should be able to do:

  • Reconcile Platform Identity: Just like I look like eight different “Chris O’Haras” based on my devices, I also look like eight different people across media channels. I am a cookie in DataXu, another cookie in Google’s DoubleClick, and yet another cookie on a site like the New York Times. The role of the DMP is to user match with all of these platforms, so that the DMP’s universal identifier (UID) maps to lots of different platform IDs (child identities). That means the DMP must have the ability to connect directly with each platform (a server-to-server integration being preferable), and also the chops to trade data quickly and frequently.
  • Unify the Data Across Channels: To a marketer, every click, open, like, tweet, download, and view is another speck of gold to mine from a river of data. When aggregated at scale, these data turn into highly valuable nuggets of information we call “insights.” The problem for most marketers that operate across channels (display, video, mobile, site-direct, social, and search, just to name a few) is that the fantastic data points they receive all live separately. You can log into a DSP and get plenty of campaign information, but how do you relate a click in a DSP with a video view, an e-mail “open,” or someone who has watched a YouTube video on an owned and operated channel? The answer is that even the most talented Excel jockey running twelve macros can’t aggregate enough ad reports to get decent insights. You need a “people layer” of data that spans across channels. To a certain extent, who cares what channel performed best, unless you can reconcile the data at the segment level? Maybe Minivan Moms convert at a higher percentage after seeing multiple video ads, but Suburban Dads are more easily converted on display? Without unifying the data across all addressable channels, you are shooting in the dark.
  • Global Delivery Management: The other thing that becomes possible when you tie both cross-device user identity and channel IDs together with a central platform is the ability to manage delivery globally. More on this below!

Global Delivery Management

If I am a different user on each channel, and each channel’s platform or site applies its own frequency cap, it is likely that I am being over-served ads. If I run ads in five channels and frequency cap each one at 10 impressions a month per user, I am virtually guaranteed to receive 50 impressions over the course of a month—and probably more, depending on my device graph. But what if the ideal frequency to drive conversion is only 10 impressions? I just spent five times too much to make an impact. Controlling frequency at the global level means being able to reallocate ineffective long-tail impressions to the sweet spot of frequency where users are most likely to convert, and plug that money back into the short tail, where marketers get deduplicated reach.

Consider an illustrative breakdown: 40% of a marketer’s budget is spent delivering between one and three impressions per user every month. Another 20% is spent delivering between four and seven impressions, which conversion data reveal to be where the majority of conversions occur. The remaining 40% is spent on impressions with little to no conversion impact.

In this scenario, there are two basic plays to run. First, the marketer wants to eliminate the long tail of impressions entirely and reinvest that budget in more reach. Second, the marketer wants to push more people from the short tail down into the “sweet spot” where conversions happen. Cutting off long-tail impressions is relatively easy: send suppression sets of users to the execution platforms.
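
Both plays reduce to picking user sets out of a global frequency table. A minimal sketch, assuming a hypothetical per-user frequency count and the example’s sweet spot of four to seven impressions:

```python
# Hypothetical per-user monthly impression counts from the unified "people layer".
frequencies = {"u1": 2, "u2": 5, "u3": 12, "u4": 6, "u5": 15, "u6": 1}

SWEET_SPOT_LO, SWEET_SPOT_HI = 4, 7  # where the conversions happen

def suppression_set(freqs, cutoff=SWEET_SPOT_HI):
    """Play 1: users past the cutoff are sent to execution platforms as a
    suppression list, so no further budget is wasted on long-tail impressions."""
    return {u for u, f in freqs.items() if f > cutoff}

def boost_set(freqs, lo=SWEET_SPOT_LO):
    """Play 2: users still below the sweet spot are candidates for the
    reclaimed budget, to push them down into the converting range."""
    return {u for u, f in freqs.items() if f < lo}

print(suppression_set(frequencies))  # {'u3', 'u5'} -- stop serving these users
print(boost_set(frequencies))        # {'u1', 'u6'} -- spend more on these users
```

The key point is that neither play is possible when each channel only knows its own counter; the sets have to be computed against the global frequency.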

“Sweet spot targeting” involves understanding when a user has seen her third impression, and knowing that the fourth, fifth, and sixth impressions have a higher likelihood of producing an action. That means sending signals to biddable platforms (search and display) to bid higher to win a potentially more valuable user.
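
That bid signal can be sketched as a simple multiplier keyed off how many impressions a user has already seen. The sweet-spot range, base bid, and boost factor below are all illustrative assumptions, not anyone’s production values:

```python
def bid_multiplier(impressions_seen, sweet_spot=(4, 7), base_bid=1.0, boost=1.5):
    """Sweet-spot bid signal: if the *next* impression for this user falls in
    the converting frequency range, tell the biddable platform to pay up."""
    lo, hi = sweet_spot
    next_impression = impressions_seen + 1
    if lo <= next_impression <= hi:
        return base_bid * boost   # bid higher to win the more valuable user
    return base_bid

print(bid_multiplier(3))  # 1.5 -- the 4th impression lands in the sweet spot
print(bid_multiplier(1))  # 1.0 -- too early to boost
print(bid_multiplier(8))  # 1.0 -- past the sweet spot; don't chase
```

In practice the signal would be shipped to the DSP or search platform as a bid modifier on a user segment, but the decision logic is no more complicated than this.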

It’s Rocket Science, But It’s Not

If you really want to go deep, the nuts and bolts of data management are very complicated, involving real big data science and velocity at Internet speed. That said, applying DMP science to the common problems of addressable marketing is not only accessible; it is what makes DMPs the must-have technology for the next ten years, and global delivery management is only one use case among many. Marketers are starting to understand the importance of capturing the right data (data in), applying it to addressable channels (data out), and using the insights they collect to optimize their approach to people (not devices).

It’s a great time to be a data-driven marketer!

Choosing a Data Management Platform

“Big Data” is all the rage right now, and for good reason. Storing tons and tons of data has gotten very inexpensive, while the accessibility of that data has increased substantially in parallel. For the modern marketer, that means having access to literally dozens of disparate data sources, each of which cranks out large volumes of data every day. Collecting, understanding, and taking action against those data sets is going to make or break companies from now on. Luckily, an almost endless variety of companies have sprung up to assist agencies and advertisers with the challenge. When it comes to the largest volumes of data, however, there are some highly specific attributes you should consider when selecting a data management platform (DMP).

Collection and Storage: Scale, Cost, and Ownership
First of all, before you can do anything with large amounts of data, you need a place to keep it. That place is increasingly becoming “the cloud” (i.e., someone else’s servers), but it can also be your own servers. If you think you have a large amount of data now, you will be surprised at how much it will grow. As devices like the iPad proliferate, changing the way we find content, even more data will be generated. Companies that have data solutions with the proven ability to scale at low costs will be best able to extract real value out of this data. Make sure to understand how your DMP scales and what kinds of hardware they use for storage and retrieval.

Speaking of hardware, be on the lookout for companies that formerly sold hardware (servers) getting into the data business so they can sell you more machines. When the data is the “razor,” the servers necessarily become the “blades.” You want a data solution whose architecture enables the easy ingestion of large, new data sets and takes advantage of dynamic cloud provisioning to keep ongoing costs low, not necessarily a hardware partner.

Additionally, your platform should be able to manage extremely high volumes of data quickly, have an architecture that enables other systems to plug in seamlessly, and offer core functionality for multi-dimensional analysis of the stored data at a highly granular level. Your data are going to grow exponentially, so the first rule of data management is making sure that, as your data grow, your ability to query them scales as well. Look for a partner that can deliver on those core attributes, and be wary of partners whose experience is limited to narrow data sets.
There are a lot of former ad networks out there with a great deal of experience managing common third party data sets from vendors like Nielsen, IXI, and Datalogix. When it comes to basic audience segmentation, there is a need to manage access to those streams. But, if you are planning on capturing and analyzing data that includes CRM and transactional data, social signals, and other large data sets, you should look for a DMP that has experience working with first party data as well as third party datasets.

The concept of ownership is also becoming increasingly important in the world of audience data. While the source of data will continue to be distributed, make sure that whether you choose a hosted or a self-hosted model, your data ultimately belongs to you. This allows you to control the policies around historical storage and enables you to use the data across multiple channels.

Consolidation and Insights: Welcome to the (Second and Third) Party
Third party data (in this context, the audience segments available for online targeting and measurement) is the stuff the famous Kawaja “logo vomit” map was born from. Look at the map and you are looking at over 250 companies dedicated to using third party data to define and target audiences. A growing number of platforms help marketers analyze, purchase, and deploy that data for targeting (BlueKai, Exelate, and Legolas being great examples). Other networks (Lotame, Collective, Turn) have leveraged their proprietary data along with their clients’ data to offer audience management tools that combine their data and third party data to optimize campaigns. Still others (PulsePoint’s Aperture tool being a great example) leverage all kinds of third party data to measure online audiences so they can be modeled and targeted against.

The key is not having the most third party data, however. Your DMP should marry highly validated first party data with third party data so that users can be identified and matched anonymously. DMPs must be able to consolidate as complete a view of your audience as possible, and your DMP solution must be able to enrich that audience information using second and third party data. Second party data is data about your audience generated outside your own properties (for example, an ad viewed on a publisher site or search engine). While you must choose the set of third party providers with the best data about your audience, your DMP must also be able to increase reach, both by collecting information about as many relevant users as possible and through lookalike modeling.

First Party Data

  • CRM data, such as user registrations
  • Site-side data, including visitor history
  • Self-declared user data (income, interest in a product)

Second Party Data

  • Ad serving data (clicks, views)
  • Social signals from a hosted solution
  • Keyword search data through an analytics platform or campaign

Third Party Data

  • Audience segments acquired through a data provider

For example, if you are selling cars and you discover that your on-site users who register for a test drive most closely match PRIZM’s “Country Squires” audience, it is not enough to buy that Nielsen segment. A good DMP enables you to create your own lookalike segment by leveraging that insight, plus the tons of data you already have. In other words, the right DMP partner can help you leverage third party data to activate your own (first party) data.
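
Lookalike modeling in production involves serious statistics, but the core idea is small: profile the seed segment, then score everyone else against that profile. A toy sketch with made-up users and attributes:

```python
def seed_profile(seed_users):
    """Attribute frequencies across the seed segment (e.g., users who
    registered for a test drive). Values are the share of seed users
    carrying each attribute."""
    counts = {}
    for attrs in seed_users.values():
        for a in attrs:
            counts[a] = counts.get(a, 0) + 1
    n = len(seed_users)
    return {a: c / n for a, c in counts.items()}

def lookalike_scores(profile, candidates):
    """Score each candidate by how strongly their attributes overlap the
    seed profile; high scorers go into the lookalike segment."""
    return {u: sum(profile.get(a, 0.0) for a in attrs)
            for u, attrs in candidates.items()}

seed = {"s1": {"suburban", "minivan", "golf"},
        "s2": {"suburban", "minivan"}}
candidates = {"c1": {"suburban", "minivan"},  # strong match
              "c2": {"urban", "cycling"}}     # no overlap
print(lookalike_scores(seed_profile(seed), candidates))
# c1 scores 2.0, c2 scores 0.0 -- c1 belongs in the lookalike segment
```

Real systems replace the overlap sum with proper models (regression, gradient boosting) over thousands of attributes, but the workflow — seed, profile, score, threshold — is the same.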

Make sure your provider leads with management of first party data, has experience mining both types of data to produce the insights you need for your campaigns, and can deliver that data quickly. Data management platforms aren’t just about managing gigantic spreadsheets. They are about finding out who your customers are, and building an audience DNA that you can replicate.

Making it Work
At the end of the day, it’s not just about getting all kinds of nifty insights from the data. It’s valuable to know that your visitors who were exposed to search and display ads converted at a 16% higher rate, or that your customers have an average of two females in the household. But it’s making those insights actionable that really matters.
So, what to look for in a data management platform in terms of actionability? For the large agency or advertiser, the basic functionality has to be creating an audience segment. In other words, when the blend of data in the platform reveals that showing five display ads and two SEM ads to a household with two women in it creates sales, the platform should be able to seamlessly produce that segment and prepare it for ingestion into a DSP or advertising platform. That means having an extensible architecture that enables the platform to integrate easily with other systems.
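
Turning an insight into a segment is, mechanically, just applying a rule to unified records and exporting the matching IDs. A minimal sketch, with hypothetical household records and the example rule from the text:

```python
# Hypothetical household records unified by the DMP's people layer.
households = [
    {"id": "h1", "females": 2, "display_ads": 5, "sem_ads": 2},
    {"id": "h2", "females": 2, "display_ads": 1, "sem_ads": 0},
    {"id": "h3", "females": 1, "display_ads": 6, "sem_ads": 3},
]

def build_segment(records, rule):
    """Materialize an audience segment from a targeting rule: the output is
    a plain list of IDs, ready for ingestion by a DSP or ad platform."""
    return [r["id"] for r in records if rule(r)]

# The insight from the text: households with two women, five display ads,
# and two SEM ads convert.
rule = lambda r: (r["females"] >= 2 and r["display_ads"] >= 5
                  and r["sem_ads"] >= 2)
print(build_segment(households, rule))  # ['h1']
```

The hard part is not the filter; it is the extensible architecture that lets the resulting ID list flow into downstream platforms without manual export and re-import.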

Moreover, your DMP should enable a wide range of experimentation with your insights. Marketers often wonder which levers to pull to create specific results (i.e., if I change my display creative and increase the frequency cap to X for a given audience segment, how much will conversions increase?). Great DMPs can help build those attribution scenarios and help marketers visualize the results. Deploying specific optimizations in a test environment first means less waste and more performance. Optimizing in the cloud first is going to become the new standard in marketing.

Final Thoughts
There are a lot of great data management companies out there, some better suited than others when it comes to specific needs. If you are in the market for one, and you have a lot of first party data to manage, following these three rules will lead to success:

  • Go beyond third party data by choosing a platform that enables you to develop deep audience profiles that leverage first and third party data insights. With ubiquitous access to third party data, using your proprietary data stream for differentiation is key.
  • Choose a platform that makes acting on the data easy and effective. “Shiny, sexy” reports are great, but the right DMP should help you take the beautifully presented insights in your UI and make them work for you.
  • Make sure your platform has an applications layer. DMPs must not only provide the ability to profile your segments, but also assist you with experimentation and attribution, and give you the ability to easily perform complicated analyses (churn and closed-loop analysis being two great examples). If your platform can’t make the data dance, find another partner.

Available DMPs, by Type
There are a wide variety of DMPs out there to choose from, depending on your need. Since the space is relatively new, it helps to think about them in terms of their legacy business model:

  • Third Party Data Exchanges / Platforms: Among the most popular DMPs are data aggregators like BlueKai and Exelate, which have made third party data accessible from a single user interface. BlueKai’s exchange approach enables data buyers to bid for cookies (or “stamps”) in a real-time environment, and offers a wide variety of providers to choose from. Exelate also enables access to multiple third party sources, albeit not in a bidded model. Lotame offers a platform called “Crowd Control,” which evolved from social data but now enables management of a broader range of third party data sets.
  • Legacy Networks: Certain networks with experience in audience segmentation have evolved to provide data management capabilities, including Collective, Audience Science, and Turn. Collective is actively acquiring assets (such as the creative optimization provider Tumri) to broaden its “technology stack” and offer a complete digital media solution for demand-side customers. Turn is, in fact, a fully featured demand-side platform with advanced data management capabilities, albeit lacking the back-end chops to aggregate and handle “Big Data” solutions (although that may rapidly change, considering its deep engagement with Experian). Audience Science boasts the most advanced native categorical audience segmentation capabilities, having created dozens of specific, readily accessible audience segments, and continues to migrate its core capabilities from media sales to data management.
  • Pure Play DMPs: Demdex (Adobe), Red Aril, Krux, and nPario are all pure-play data management platforms, created from the ground up to ingest, aggregate, and analyze large data sets. Unlike legacy networks, or DMPs that specialize in aggregating third party data sets, these DMPs provide three core offerings: a core platform for storage and retrieval of data; analytics technology for getting insights from the data with a reporting interface; and applications that enable marketers to take action against that data, such as audience segment creation or lookalike modeling functionality. Marketers with extremely large sets of structured and unstructured data that go beyond ad serving and audience data (which may include CRM and transactional data, for example) will want to work with a pure-play DMP.

This post is an excerpt of Best Practices in Digital Display Advertising: How to Make a Complex Ecosystem Work Efficiently for Your Organization. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without prior permission in writing from the publisher.

Copyright © Ltd 2012