The Technology Layer Cake

I saw a great presentation at this year’s Industry Preview where Brian Anderson of LUMA Partners presented on the future of marketing clouds. His unifying marketechture drawings looked like an amalgamation of various whiteboarding sessions I have had recently with big enterprise marketers, many of whom are building the components of their marketing “stacks.” Marketers are feverishly licensing offerings from all kinds of big software companies and smaller adtech and martech players to build a vision that can be summed up like this:

The Data Management Layer

Today’s “stack” really consists of three individual layers when you break it down. The first layer, Data Management (DM), contains all of the “pipes” used to stitch people’s identities together. Every cloud needs to take in data from all kinds of sources, such as internet cookies, mobile IDs, hashed e-mail identity keys, purchase data, and the like. Every signal we can collect results in a richer understanding of the customer, and the DM layer needs access to rich sets of first-, second-, and third-party data to paint the clearest picture.

The DM layer also needs to tie every single ID and attribute collected to an individual, so all the signals collected can be leveraged to understand that person’s wants and desires. This identity infrastructure is critical for the enterprise; knowing that you are the same guy who saw the display ad for the family minivan and visited the “March Madness Deals” page in the mobile app goes a long way toward attribution. But the DM layer cannot be constrained to anonymous data. Today’s marketing stacks must leverage DMPs to understand pseudonymous identity, but must also find trusted ways to mix in PII-based data from e-mail and CRM systems. This latter notion has created a new category, the “Customer Data Platform” (CDP), and has also resulted in the rush to build data lakes as a method of collecting a variety of differentiated data for analytics purposes.
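To make the idea concrete, here is a minimal sketch (in Python, with invented IDs) of the kind of identity stitching a DM layer performs: every pair of IDs observed together is merged into one person-level cluster using a union-find structure. Real DMPs do this probabilistically and at enormous scale; this only shows the core bookkeeping.

```python
# Toy identity graph: union-find over observed ID pairs. IDs (cookie,
# mobile ad ID, hashed e-mail) seen together get merged into one
# person cluster. All names here are illustrative, not a vendor API.

class IdentityGraph:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, id_a, id_b):
        """Record that two IDs were observed for the same person."""
        self.parent[self._find(id_a)] = self._find(id_b)

    def same_person(self, id_a, id_b):
        return self._find(id_a) == self._find(id_b)

graph = IdentityGraph()
graph.link("cookie:abc123", "email_hash:9f2d")
graph.link("email_hash:9f2d", "mobile:idfa-777")
assert graph.same_person("cookie:abc123", "mobile:idfa-777")
```

Once the clusters exist, frequency capping and attribution can be done per person instead of per device.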

Finally, the DM layer must be able to seamlessly connect the data out to all kinds of activation channels, whether they are e-mail, programmatic, social, mobile, OTT, or IoT-based. Just as people have many different ID keys, people have different IDs inside of Google, Facebook, Pinterest, and the Wall Street Journal. Connecting those partner IDs to an enterprise’s universal ID solves problems with frequency management and attribution, and offers the ability to sequence messages across various addressable channels.

You can’t have a marketing cloud without data management. This layer is the “who” of the marketing cloud—who are these people and what are they like?

The Orchestration Layer

The next thing marketers need to have (and they often build it first, in pieces) is an orchestration layer. This is the “When, Where, and How” of the stack. E-mail systems can determine when to send that critical e-mail; marketing automation software can decide whether to put someone in a “nurture” campaign, or have a salesperson call them right away; DSPs decide when to bid on a likely internet surfer, and social management platforms can tell us when to Tweet or Snap. Content management systems and site-side personalization vendors orchestrate the perfect content experience on a web page, and dynamic creative optimization systems have gotten pretty good at guessing which ad will perform better for certain segments (show the women the high-heeled shoe ad, please).

The “when” layer is critical for building smart customer journeys. If you get enough systems connected, you start to realize the potential for executing on the “right person, right message, right time” dynamic that has been promised for many years, but never quite delivered at scale. Adtech has been busy nailing the orchestration of display and mobile messages, and the big social platforms have been leveraging their rich people data to deliver relevant messages. However, with lots of marketing money and attention still focused on e-mail and broadcast, there is plenty of work to be done before marketers can build journeys that feature every touchpoint their customers are exposed to.

Marketers today are busy building connectors to their various systems and getting them to talk to each other to figure out the “when, where, and how” of marketing.

The Artificial Intelligence Layer

When every single marketer and big media company owns a DMP, and has figured out how to string their various orchestration platforms together, it is clear that the key point of differentiation will reside in the AI layer. Artificial intelligence represents the “why” problem in marketing—why am I e-mailing this person instead of calling her? Should I be targeting this segment at all? Why does this guy score highly for a new car purchase, and this other guy who looks similar doesn’t? What is the lifetime value of this new business traveler I just acquired?

While the stacks have tons of identity data, advertising data, and sales data, they need a brain to analyze all of that data and decide how to use it most effectively. As marketing systems become more real-time and more connected to on-the-go customers than ever before, artificial intelligence must drive millions of decisions quickly, gleaned from billions of individual data points. How does the soda company know when to deliver an ad for water instead of diet soda? It requires understanding location, the weather, the person, and what they are doing in the moment. AI systems are rapidly building their machine learning capabilities and connecting into orchestration systems to help with decisioning.

All Together Now

The layer cake is a convenient way to look at what is happening today. The vision for tomorrow is to squish the layer cake together in such a way that enterprises get all of that functionality in a single cake. In four or five years, every marketing orchestration system will have some kind of built-in DMP—or seamless connections to any number of them. We see this today with large DSPs; they all need an internal data management system for segmentation. Tomorrow’s orchestration systems will all have built-in artificial intelligence as a means for differentiation. Look at e-mail orchestration today. It is not sold on its ability to deliver messages to inboxes, but rather on its ability to provide that service in a smarter package to increase open rates and provide richer analytics.

It will be fun to watch as these individual components come together to form the marketing clouds of the future. It’s a great time to be a data-driven marketer!

[This post was originally published April 4, 2017 on the Econsultancy blog.]

Deepening The Data Lake: How Second-Party Data Increases AI For Enterprises

I have been hearing a lot about data lakes lately. Progressive marketers and some large enterprise publishers have been breaking out of traditional data warehouses, mostly used to store structured data, and investing in infrastructure so they can store tons of their first-party data and query it for analytics purposes.

“A data lake is a storage repository that holds a vast amount of raw data in its native format until it is needed,” according to Amazon Web Services. “While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data.”

A few years ago, data lakes were thought to be limited to Hadoop applications, but the term is now more broadly applied to an environment in which an enterprise can store both structured and unstructured data and have it organized for fast query processing. In the ad tech and mar tech world, this is almost universally about first-party data. For example, a big airline might want to store transactional data from ecommerce alongside beacon pings to understand how often online ticket buyers in its loyalty program use a certain airport lounge.

However, as we discussed earlier this year, there are many marketers with surprisingly sparse data, like the food marketer who does not get many website visitors or authenticated customers downloading coupons. Today, those marketers face a situation where they want to use data science to do user scoring and modeling but, because they only have enough of their own data to fill a shallow lake, they have trouble justifying the costs of scaling the approach in a way that moves the sales needle.

Figure 1: Marketers with sparse data often do not have enough raw data to create measurable outcomes in audience targeting through modeling. Source: Chris O’Hara.

In the example above, we can think of the marketer’s first-party data – media exposure data, email marketing data, website analytics data, etc. – being the water that fills a data lake. That data is pumped into a data management platform (pictured here as a hydroelectric dam), pumped like electricity through ad tech pipes (demand-side platforms, supply-side platforms and ad servers) and finally delivered to places where it is activated (in the town, where people live).

As quickly becomes apparent, this infrastructure can exist with even a tiny bit of water, but at the end of the cycle not enough electricity will be generated to create decent outcomes and sustain a data-driven approach to marketing. This is a long way of saying that the data itself, in both quality and quantity, is needed in ever-larger amounts to create the potential for better targeting and analytics.

Most marketers today – even those with lots of data – find themselves overly reliant on third-party data to fill in these gaps. However, even if they have the rights to model it in their own environment, there are loads of restrictions on using it for targeting. It is also highly commoditized and can be of questionable provenance. (Is my Ferrari-browsing son really an “auto intender”?) While third-party data can be highly valuable, it would be akin to adding sediment to a data lake, creating murky visibility when trying to peer into the bottom for deep insights.

So, how can marketers fill data lakes with large amounts of high-quality data that can be used for modeling? I am starting to see the emergence of peer-to-peer data-sharing agreements that help marketers fill their lakes, deepen their ability to leverage data science and add layers of artificial intelligence through machine learning to their stacks.

Figure 2: Second-party data is simply someone else’s first-party data. When relevant data is added to a data lake, the result is a more robust environment for deeper data-led insights for both targeting and analytics. Source: Chris O’Hara.

In the above example (Figure 2), second-party data deepens the marketer’s data lake, powering the DMP with more rich data that can be used for modeling, activation and analytics. Imagine a huge beer company that was launching a country music promotion for its flagship brand. As a CPG company with relatively sparse amounts of first-party data, the traditional approach would be to seek out music fans of a certain location and demographic through third-party sources and apply those third-party segments to a programmatic campaign.

But what if the beer manufacturer teamed up with a big online ticket seller and arranged a data subscription for “all viewers or buyers of a Garth Brooks ticket in the last 180 days”? Those are exactly the people I would want to target, and they are unavailable anywhere in the third-party data ecosystem.

The data is also of extremely high provenance, and I would also be able to use that data in my own environment, where I could model it against my first-party data, such as site visitors or mobile IDs I gathered when I sponsored free Wi-Fi at the last Country Music Awards. The ability to gather and license those specific data sets and use them for modeling in a data lake is going to create massive outcomes in my addressable campaigns and give me an edge I cannot get using traditional ad network approaches with third-party segments.

Moreover, the flexibility around data capture enables marketers to use highly disparate data sets, combine and normalize them with metadata – and not have to worry about mapping them to a predefined schema. The associative work happens after the query takes place. That means I don’t need a predefined schema in place for that data to become valuable – a way of saying that the inherent observational bias in traditional approaches (“country music fans love mainstream beer, so I’d better capture that”) never hinders the ability to activate against unforeseen insights.
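A toy illustration of that schema-on-read idea, with made-up event records: the raw data sits in the lake in its native JSON form, and the "schema" is simply whatever the query asks for at read time.

```python
# Schema-on-read sketch: raw events keep their native shape; structure
# is applied only when a question is asked. Field names and records
# are invented for illustration.
import json

raw_lake = [
    '{"type": "ticket_view", "artist": "Garth Brooks", "user": "u1"}',
    '{"type": "wifi_signup", "venue": "CMA Awards", "user": "u2"}',
    '{"type": "ticket_purchase", "artist": "Garth Brooks", "user": "u2", "qty": 2}',
]

def query(records, predicate):
    """Parse each raw record at read time; keep the ones that match."""
    return [r for r in (json.loads(line) for line in records) if predicate(r)]

# Ask an unforeseen question -- no upfront schema was ever defined.
garth_fans = query(raw_lake, lambda r: r.get("artist") == "Garth Brooks")
print([r["user"] for r in garth_fans])  # -> ['u1', 'u2']
```

Note that the wifi-signup event, which has a completely different shape, sits happily alongside the ticket events and is simply skipped by this query, which is the point: no predefined schema ever had to anticipate the question.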

Large, sophisticated marketers and publishers are just starting to get their lakes built and begin gathering the data assets to deepen them, so we will likely see a great many examples of this approach over the coming months.

It’s a great time to be a data-driven marketer.

Follow Chris O’Hara (@chrisohara) and AdExchanger (@adexchanger) on Twitter.

How AI will Change UX

In 1960, the US Navy coined a design principle: Keep it simple, stupid.

When it comes to advertising and marketing technology, we haven’t enjoyed a lot of “simple” over the last dozen years or so. In an increasingly data-driven world where delivering a relevant customer experience makes all the difference, we have embraced complexity over simplicity, dealing in acronyms, algorithms and now machine learning and artificial intelligence (AI).

When the numbers are reconciled and the demand side pays the supply side, what we have been mostly doing is pushing a lot of data into digital advertising channels and munching around the edges of performance, trying to optimize sub-1% click-through rates.

That minimal uptick in performance has come at the price of some astounding complexity: ad exchanges, third-party data, second-price auctions and even the befuddling technology known as header bidding. Smart, technical people struggle with these concepts, but we have embraced them as the secret handshake in a club that pays its dues by promising to manage that complexity away.

Marketers, however, are not stupid. They have steadily been taking ownership of their first-party data and starting to build marketing tech stacks that attempt to add transparency and efficiency to their outbound marketing, while eliminating many of the opaque ad tech taxes levied by confusing and ever-growing layers of licensed technology. Data management platforms, at the heart of this effort to take back control, have seen increased penetration among large marketers – and this trend will not stop.

This is a great thing, but we should remember that we are in the third inning of a game that will certainly go into extra innings. I remember what it was like to save a document in WordPerfect, send an email using Lotus Notes and program my VCR. Before point-and-click interfaces, such tasks were needlessly complex. Ever try to program the hotel’s alarm clock just in case your iPhone battery runs out? In a world of delightful user experience and clean, simple graphical interfaces, such a task becomes complex to the point of failure.

Why Have We Designed Such Complexity Into Marketing Technology?

We are, in effect, giving users who want big buttons and levers the equivalent graphical user interface of an Airbus A380: tons of granular and specific controls that may take a minute to learn, but a lifetime to master.

How can we change this? The good news is that change has already arrived, in the form of machine learning and artificial intelligence. When you go on Amazon or Netflix, do you have to program any of your preferences before getting really amazing product and movie recommendations? Of course not. Such algorithmic work happens on the back end where historical purchases and search data are mapped against each other, yielding seemingly magical recommendations.

Yet, when airline marketers go into their ad tech platform, we somehow expect them to inform the system of myriad attributes which comprise someone with “vacation travel intent” and find those potential customers across multiple channels. Companies like Expedia tell us just what to pay for a hotel room with minimal input, but we expect marketers to have internal data science teams to build propensity models so that user scores can be matched to a real-time bidding strategy.

One of the biggest trends we will see over the next several years is what could be thought of as the democratization of data science. As data-driven marketing becomes the norm, the winners and losers will be sorted out by their ability to build robust first-party data assets and leverage data science to sift the proverbial wheat from the chaff.

This capability will go hand-in-hand with an ability to map all kinds of distinct signals – mobile phones, tablets, browsers, connected devices and beacons – to an actual person. This is important for marketers because browsers and devices never buy anything, but customers do. Leading-edge companies will depend on data science to learn more about increasingly hard-to-find customers, understand their habits, gain unique insights about what prompts them to buy and leverage those insights to find them in the very moment they are going to buy.

In today’s world, that starts with data management and ends with finding people on connected devices. The problem is that executing is quite difficult to automate and scale. Systems still require experts that understand data strategy, specific use cases and the value of an organization’s siloed data when stitched together. Plus, you need great internal resources and a smart agency capable of execution once that strategy is actually in place.

However, the basic data problems we face today are not actually that complicated. Thomas Bayes worked them out more than 250 years ago with a series of probabilistic equations we still depend on today. The real trick involves packaging that Bayesian magic in such a way that the everyday marketer can go into a system containing “Hawaiian vacation travel intenders” for a winter travel campaign and push a button that says, “Find me more of these – now!”
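For the curious, here is roughly what that Bayesian magic looks like under the hood: a toy naive-Bayes propensity score, with invented priors and likelihoods, nothing like a production model.

```python
# Toy Bayesian propensity score: P(intender | signals), assuming the
# signals are independent (naive Bayes). All numbers are made up for
# illustration only.
p_intender = 0.02                        # prior: 2% of users are travel intenders
likelihood = {                           # (P(signal | intender), P(signal | not intender))
    "searched_flights":    (0.60, 0.05),
    "visited_hawaii_page": (0.40, 0.01),
}

def propensity(signals):
    """Posterior probability the user is an intender, given observed signals."""
    num = p_intender
    den = 1.0 - p_intender
    for s in signals:
        p_given_yes, p_given_no = likelihood[s]
        num *= p_given_yes
        den *= p_given_no
    return num / (num + den)

score = propensity(["searched_flights", "visited_hawaii_page"])
print(round(score, 3))  # -> 0.907
```

Two weak signals turn a 2% prior into a 90%+ posterior; the marketer should never have to see any of this, just the ranked list and the big red button.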

Today’s problem is that we depend on either a small number of “power users” – or the companies themselves – to put all of this amazing technology to work, rather than simply serving up the answers and offering a big red button to push.

A Simpler Future For Marketers?

Instead of building high-propensity segments and waiting for users to target them, tomorrow’s platforms will offer preselected lists of segments to target. Instead of having an agency’s media guru perform a marketing-mix model to determine channel mix, mar tech stacks will simply automatically allocate expenditures across channels based on the people data available. Instead of setting complex bid parameters by segment, artificial intelligence layers will automatically control pricing based on bid density, frequency of exposure and propensity to buy – while automatically suppressing users who have converted from receiving that damn shoe ad again.

This is all happening today, and it is happening right on time. In a world with only tens of thousands of data scientists and enough jobs for millions of them, history will be written by the companies clever enough to hide the math on the server side and give users the elegance of a simple interface where higher-level business decisions will be made.

We are entering into a unique epoch in our industry, one in which the math still rules, but the ability of designers to make it accessible to the English majors who run media will rule supreme.

It’s a great time to be a data-driven marketer! Happy New Year.

(Interview) On Beacons and DMPs

How Beacons Might Alter The Data Balance Between Manufacturers And Retailers

As Salesforce integrates DMP Krux, Chris O’Hara considers how proximity-based personalization will complement access to first-party data. For one thing, imagine how coffeemakers could form the basis of the greatest OOH ad network.

How CRM and a DMP can combine to give a 360-degree view of the customer

For years, marketers have been talking about building a bridge between their existing customers, and the potential or yet-to-be-known customer.

Until recently, the two have rarely been connected. Agencies have separate marketing technology, data and analytics groups. Marketers themselves are often separated organizationally between “CRM” and “media” teams – sometimes even by a separate P&L.

Of course, there is a clearer dividing line between marketing tech and ad tech: personally identifiable information, or PII. Marketers today have two different types of data, from different places, with different rules dictating how it can be used.

In some ways, it has been natural for these two marketing disciplines to be separated, and some vendors have made a solid business from the work necessary to bridge PII data with web identifiers so people can be “onboarded” into cookies.

After all, marketers are interested in people, from the very top of the funnel when they visit a website as anonymous visitors, all the way down to the bottom of the funnel, after they are registered as customers and we want to make them brand advocates.

It would be great — magic even — if we could accurately understand our customers all the way through their various journeys (the fabled “360-degree view” of the customer) and give them the right message, at the right place and time. The combination of a strong CRM system and an enterprise data management platform (DMP) brings these two worlds together.

Much of this work is happening today, but it’s challenging with lots of ID matching, onboarding, and trying to connect systems that don’t ordinarily talk to one another. However, when CRM and DMP truly come together, it works.

What are some use cases?

Targeting people who haven’t opened an email

You might be one of those people who don’t open or engage with every promotional email in your inbox, or who use a smart filter to capture all of the marketing messages you receive every month.

To an email marketer, these people represent a big chunk of their database. Email is without a doubt one of the most effective digital marketing channels, even though as few as 5% of people who engage are active buyers. It’s also a relatively straightforward way to predict return on advertising spend, based on historical open and conversion rates.

The connection between CRM and DMP enables the marketer to reach the 95% of their database everywhere else on the web, by connecting that (anonymized) email ID to the larger digital ecosystem: places like Facebook, Google, Twitter, advertising exchanges, and even premium publishers.
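The anonymization piece is typically just a one-way hash: both sides hash the normalized address and compare digests, so raw emails never leave either system. A sketch, assuming the common convention of SHA-256 over a lowercased, trimmed address (exact normalization rules vary by partner):

```python
# Hashed-email matching sketch. SHA-256 of a normalized address is a
# common convention for PII-safe ID matching; the normalization step
# shown here is illustrative and differs between partners.
import hashlib

def email_key(address):
    """One-way key for an email address: normalize, then hash."""
    normalized = address.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

crm_side = email_key("  Jane.Doe@Example.com ")
partner_side = email_key("jane.doe@example.com")
assert crm_side == partner_side  # same person matched, no raw PII exchanged
```

Because the hash is one-way, the match works without either party being able to recover the other's address list.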

It is now possible to understand where the non-engaged email users spend their time on the web, what they like, and their behavior, income and buying habits. The marketer has the “known” view of this customer from their CRM, but can also utilize vast sets of data to enrich their profile and better engage them across the web.

Combining commerce and service data for journeys and sequencing

When we think of the customer journey, it gets complicated quickly. A typical ad campaign may feature thousands of websites, multiple creatives, different channels, a variety of different ad sizes and placements, delivery at different times of day and more.

When you map these variables against a few dozen audience segments, the combinatorial values get into numbers with a lot of zeros on the end. In other words, the typical campaign may have hundreds of millions of activities — and tens of millions of different ways a customer goes from an initial brand exposure all the way through to a purchase and becoming a brand advocate.
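The back-of-envelope arithmetic is easy to check; with purely illustrative inputs:

```python
# Rough count of distinct campaign combinations; every input number
# here is invented for illustration, not from a real campaign.
sites, creatives, channels, sizes, dayparts, segments = 1000, 10, 4, 8, 6, 36
combinations = sites * creatives * channels * sizes * dayparts * segments
print(f"{combinations:,}")  # -> 69,120,000
```

Even modest per-dimension counts multiply out to tens of millions of cells, which is why journey discovery has to be automated rather than eyeballed.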

How can you automatically discover the top 10 performing journeys?

Understanding which channels go together, and which sequences work best, can add up to tremendous lift for marketers.

For example, a media and entertainment company promoting a new show recently discovered that doing display advertising all week and then targeting the same people with a mobile “watch it tonight” message on the night it aired produced a 20% lift in tune-in compared to display alone. Channel mix and sequencing work.

And that’s just the tip of the iceberg — we are only talking about web data.

What if you could look at a customer journey and find out that the call-to-action message resonated 20% higher one week after a purchase?

A pizza chain that tracks orders in its CRM system can start to understand the cadence of delivery (e.g. Thursday night is “pizza night” for the Johnson family) and map its display efforts to the right delivery frequency, ensuring the Johnsons receive targeted ads during the week, and a mobile coupon offer on Thursday afternoon, when it’s time to order.

How about a customer that has called and complained about a missed delivery, or a bad product experience? It’s probably a terrible idea to try and deliver a new product message when they have an outstanding customer ticket open. Those people can be suppressed from active campaigns, freeing up funds for attracting net new customers.
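In code, that suppression step is about as simple as marketing logic gets. A sketch with invented customer records:

```python
# Suppression sketch: exclude any targetable ID whose customer has an
# open service ticket before the campaign goes out. All data invented.
open_tickets = {"cust42", "cust99"}
campaign_audience = [
    {"customer": "cust17", "cookie": "ck-a"},
    {"customer": "cust42", "cookie": "ck-b"},  # complained last week
    {"customer": "cust73", "cookie": "ck-c"},
]

eligible = [u for u in campaign_audience if u["customer"] not in open_tickets]
print([u["customer"] for u in eligible])  # -> ['cust17', 'cust73']
```

One set lookup per impression, and the budget that would have annoyed an unhappy customer goes toward net new ones instead.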

There are a lot of obvious use cases that come to mind when CRM data and web behavioral data are aligned at the people level. It’s simple stuff, but it works.

As marketers, we find ourselves seeking more and more precise targeting but, half the time, knowing when not to send a message is the more effective action.

As we start to see more seamless connections between CRM (existing customers) and DMPs (potential new customers), we imagine a world in which artificial intelligence can manage the cadence and sequence of messages based on all of the data — not just a subset of cookies, or email open rate.

As the organizational and technological barriers between CRM and DMP break down, we are seeing the next phase of what Gartner says is the “marketing hub” of interconnected systems or “stacks” where all of the different signals from current and potential customers come together to provide that 360-degree customer view.

It’s a great time to be a data-driven marketer!

Chris O’Hara is the head of global marketing for Krux, the Salesforce data management platform.