How Granular Data Collection and a Robust Second-Party Data Strategy Change the Game
The world’s largest marketers and media companies have strongly embraced data management technology to provide personalization for customers who demand Amazon-like experiences. As a single, smart hub for all of their owned data (CRM, email, etc.) and acquired data, such as third-party demographic data, DMPs go a long way toward building a sustainable, modern marketing strategy that accounts for massively fragmented digital audiences.
The good news is that most enterprises have taken a technological leap of faith and embraced a data strategy to help them navigate our digital future. The bad news is that the systems they are using today are deeply flawed and do not produce optimal audience segmentation.
A Little DMP History
Marketers were slower than publishers to embrace DMP technology, but they quickly grasped the opportunity too. Instead of depending on ad networks to aggregate reach for them, they started to assemble their own first-party data asset—overlapping their known users with publishers’ segments, and buying access to those more relevant audiences. The more cookies, mobile IDs, and other addressable keys they could collect, the bigger their potential reach. Since most marketers had relatively small amounts of their own data, they supplemented with third-party data—segments of “intenders” from providers like Datalogix, Nielsen, and Acxiom.
The two primary use cases for DMPs have not changed all that much over the years: both sides want to leverage technology to understand their users (analytics) and grow their base of addressable IDs (reach). Put simply, “who are these people interacting with my brand, and how can I find more of them?” DMPs seem really efficient at tackling those basic use cases, until you discover that many have been doing it the wrong way the whole time.
What’s the Problem?
To dig a bit deeper: the way first-generation DMPs analyze and expand audiences is by mapping cookies to a predetermined taxonomy, based on user behavior and context. For example, if my 17-year-old son is browsing an article online about the cool new Ferrari, he would be identified as an “auto intender” and placed in a bucket of other auto intenders. The system would not store any of the data associated with that browsing session, or any additional context. It is enough that the online behavior met a predetermined set of rules for “auto intender” to place that cookie among several hundred thousand other auto intenders.
The problem with a fixed, taxonomy-based collection methodology is just that—it is fixed, and based on a rigid set of rules for data collection. Taxonomy results are stored (“cookie 123 equals auto-intender”)—not the underlying data itself. That is called “schema-on-write,” an approach that writes taxonomy results to an existing table when the data is collected. That was fine for the days when data collection was desktop-based and the costs of data storage were sky-high, but it fails in a mobile world where artificial intelligence systems crave truly granular, attribute-level data collected from all consumer interactions to power machine learning.
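To make the schema-on-write idea concrete, here is a minimal Python sketch. The field names, rule, and session values are all hypothetical, invented for illustration; the point is that only the taxonomy label is written to storage while the raw signals are thrown away.

```python
def classify_session(session: dict) -> list:
    """Map a raw browsing session to fixed taxonomy labels (schema-on-write)."""
    labels = []
    # A rigid, predetermined rule: any auto-category pageview = auto intender.
    if session.get("page_category") == "auto":
        labels.append("auto-intender")
    return labels

storage = {}  # cookie_id -> taxonomy labels; this is ALL we keep

session = {
    "cookie_id": "cookie_123",
    "page_category": "auto",
    "dwell_time_s": 42,   # discarded after classification
    "pages_viewed": 6,    # discarded after classification
}

# Only the label survives; dwell time and pageviews are gone forever.
storage[session["cookie_id"]] = classify_session(session)
print(storage)  # {'cookie_123': ['auto-intender']}
```

Once the session is classified, no later analysis can ask how long the user dwelled or how many pages they viewed; that information was never stored.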
There is another way to do this. It’s called “schema-on-read,” the opposite of schema-on-write. In these systems, all of the underlying data is collected, and the taxonomy result is created upon reading the raw data. For instance, suppose I collected everything that happened on a popular auto site like Cars.com. I would collect how many pages were viewed, dwell times on ads, all of the clickstream from the “build your own” car module, and the data from event pixels that captured how many pictures a user viewed of a particular car model. I would store all of this data so I could look it up later.
Then, if my really smart data science team told me that users who viewed 15 of the 20 car pictures in the photo carousel in one viewing session were 50% more likely to buy a car in the next 30 days than the average user, I would build a segment of such users by “reading” the attribute data I had stored. This notion—total data storage at the attribute (or “trait”) level, independent of a fixed taxonomy—is called completeness of data. Most DMPs don’t have it.
Why Completeness Matters
Isn’t one auto-intender as good as another, regardless of how the data were collected? No. Think about the other main uses of DMPs: overlap reporting and indexing. Overlap reporting seeks to overlay an enterprise’s first-party data asset with another. This is like taking all the visitors to Ford’s website and comparing that audience to every user on a non-endemic site, like the Wall Street Journal. Every auto marketer would love to understand which high-income WSJ readers were interested in their latest model. But how can they understand the real intent of users who are merely tagged as “auto intenders”? How did the publisher come to that conclusion? What signals qualified those users as “intenders” in the first place? How long ago did they engage with an auto article? Was it a story about a horrific traffic crash, or an article on the hottest new model? Without completeness, these “auto intenders” become very vague. Without all of the attributes stored, Ford cannot put its data science team to work to better understand users’ true intent.
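Mechanically, an overlap report is just a set intersection over two pools of addressable IDs. A minimal sketch, with invented ID values standing in for the Ford/WSJ example:

```python
# Hypothetical first-party ID pools; real DMPs would hold millions of
# cookies or mobile IDs, matched through an ID-syncing process.
advertiser_visitors = {"id-001", "id-002", "id-003", "id-004"}
publisher_readers = {"id-003", "id-004", "id-005", "id-006"}

# Overlap report: which advertiser IDs also appear in the publisher pool?
overlap = advertiser_visitors & publisher_readers
overlap_pct = len(overlap) / len(advertiser_visitors) * 100

print(sorted(overlap))        # ['id-003', 'id-004']
print(f"{overlap_pct:.0f}%")  # 50%
```

The intersection itself is trivial; the article’s point is that the report is only as meaningful as the data behind each ID’s “intender” label.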
Indexing, the other prominent use case, scores user IDs based on their similarity to a baseline population. For example, a popular women’s publisher like Meredith might have an index score of 150 against a segment of “active moms.” Another way of saying this is that indexing helps measure the “momness” of those women, based on similarity to the overall population. Index scoring is the way marketers have been buying audience data for the last 20 years. If I can get good reach with an index score above 100 at a good price, then I’m buying those segments all day long. Most of this index-based buying happens with third-party data providers who have been collecting the data in the same flawed way for years. What’s the ultimate source of truth for such indexing? What data underlies the scoring in the first place? The fact is, it is impossible to validate these relevancy scores without the granular, attribute-level data available to analyze.
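The arithmetic behind an index score is simple: the rate of a trait in an audience divided by its rate in the baseline population, multiplied by 100, so that 100 means “exactly average.” A sketch with hypothetical counts that reproduce the 150 figure above:

```python
def index_score(audience_with_trait: int, audience_size: int,
                baseline_with_trait: int, baseline_size: int) -> int:
    """Index = (trait rate in audience / trait rate in baseline) * 100.
    100 = average concentration; 150 = 50% more concentrated."""
    audience_rate = audience_with_trait / audience_size
    baseline_rate = baseline_with_trait / baseline_size
    return round(audience_rate / baseline_rate * 100)

# Hypothetical: 30% of a publisher's audience are "active moms"
# versus 20% of the baseline population -> index of 150.
print(index_score(300, 1_000, 2_000, 10_000))  # 150
```

Note that the score is entirely derived from the counts fed into it; if those counts come from loosely defined “intender” buckets, the 150 cannot be validated against anything.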
Therefore, it is entirely fair to say that most DMPs have excellent intentions but lack the infrastructure to deliver fully on the most important things DMPs are meant to do: understanding IDs, and growing them through overlap analysis and indexing. If the underlying data has been improperly collected (or not collected at all), then any audience profiling built on top of it is fundamentally flawed.
What to do?
To be fair, most DMPs were architected at a time when collecting data through a schema-on-read methodology was both unnecessary and extremely costly. Today’s unrelenting shift to AI-driven marketing necessitates this approach to data collection and storage, and older systems are tooling up to compete. If you want to create a customer data platform (“CDP”), the hottest new buzzword in marketing, you need to collect data in this way. So the industry is moving there quickly. That said, many marketers are still stuck in the 1990s. Older DMPs are somewhat like the technology mullet of marketing: businesslike in the front, with something awkward and hideous hidden behind.
Beyond licensing a modern, schema-on-read system for data management so marketers can collect their own data in a granular way, there is another way to do things like indexing and overlap analysis well: license data from other data owners who have collected their data in the same granular way. This means going well beyond commoditized third-party data and looking to the world of second-party data. Done correctly, real audience planning starts with collecting your own data effectively and extends to leveraging similarly collected data from others: second-party data that is transparent, exclusive, and unique.