OpenWeb’s Tiffany Xingyu Wang on making publishers sustainable with first-party data

Episode 23 January 20, 2023 00:31:15
The Georgian Impact Podcast | AI, ML & More

Hosted By

Jon Prial

Show Notes


In this episode of the Georgian Impact Podcast, we talk to Tiffany Xingyu Wang, OpenWeb’s first-ever Chief Marketing Officer. 


With a mission to “save online conversations,” OpenWeb wants to improve the quality of conversations online while enabling conversation-based advertising, which allows brands to connect with their most active audiences. Through connecting with audiences, publishers can garner first-party data for ad targeting — a valuable tool as publishers prepare for the disappearance of third-party cookies. 




Episode Transcript

[00:00:00] Speaker A: The material and information presented in this podcast is for discussion and general informational purposes only and is not intended to be, and should not be construed as, legal, business, tax, investment advice or other professional advice. The material and information does not constitute a recommendation, offer, solicitation or invitation for the sale of any securities, financial instruments, investments or other services, including any securities of any investment fund or other entity managed or advised directly or indirectly by Georgian or any of its affiliates. The views and opinions expressed by any guest are their own views and do not reflect the opinions of Georgian. To be truthful, when it comes to consuming books, movies or the news, I think I'm a snob. You see, there's no crowdsourcing for me. No Rotten Tomatoes for movies, no Goodreads for reviews of books, no news aggregators. I have my set of trusted curators and news sources who are curators themselves. You see, I like to trust the source, and not just my habits feeding an AI. Part of this is also me not paying attention to ads, or avoiding comments, which sometimes I think are toxic. So why is all this relevant today? Because I am a user of content and brands, and there should be a better way for me to learn and... [00:01:17] Speaker B: ...interact across the board. [00:01:19] Speaker A: I like the idea of providers knowing me. First-party data, anyone? Welcome to Georgian's Impact Podcast. [00:01:25] Speaker B: I'm Jon Prial. Tiffany, welcome. [00:01:36] Speaker C: Thank you, Jon. [00:01:37] Speaker A: Before we dive in, Tiffany, can you give us some background on what OpenWeb does? [00:01:42] Speaker C: So, OpenWeb is a community engagement platform that helps publishers and brands host healthy conversations, build their own first-party data on their own properties, and therefore build more sustainable monetization strategies. Today we serve over 1,000 customers, including publishers and brands. And really, our vision for the future is that we can build what I call the community economy, where all the businesses can thrive on the communities they care about, curate, and keep safe. [00:02:24] Speaker B: Excellent. We're so glad you're here. So where I think we are today is, we live, for the most part, in an ad-supported world, although maybe that's getting overlaid with a world that has more and more monthly recurring revenue, maybe a little bit of both. What's your view of what's happening in terms of the publishers' view, them having an ad-supported internet versus, maybe, the micropayments we should have had a million years ago? What's your thought on where we are today? [00:02:53] Speaker C: Yes. I think if you look at the past 15 to 20 years, the so-called Web 2.0, or social web, was dominated by the social media platforms, and during that time, publishers and brands started to rely on the social media platforms to leverage their data and to monetize their content. But fast forward to where we are today: publishers and brands realized, hey, why not build our own social media layer and monetize directly to our loyal communities and readers? And that's honestly where OpenWeb is coming from, right?
We want to build social media as a service for all the publishers, because there's this urgent and unmet need from the publishers to say: why should we rely on social media, when we are the content generators and the creators, and monetize through social media, whereas we can do that ourselves? But there are different key components. To actually build that so-called social media as a service, you need the safety layer, you need the first-party data layer, and then you need to monetize through advertising. So I think publishers and brands are shifting from being partners and investors or stakeholders in social media into potentially competitors with social media, with their own data and more sustainable growth. [00:04:31] Speaker B: I take a couple of critical takeaways from that. I think you're happy that I might live in one particular news provider versus an aggregator, because if I then grow up and get comfortable and join the comment section, it's a self-selecting group. I'm in a channel already where we kind of know what the topic is. Whether, let's say, I'm a New York Times reader or I'm reading an article about climate, all of a sudden I'm now self-selecting into that. That's probably where the publishers want to go. [00:05:03] Speaker C: Absolutely. And I think it's not only a benefit for the publishers, it's also a benefit for the brands, who now start to advertise directly on publishers' real estate. And it's also beneficial for the readers or users. For publishers, it's a win-win situation for many reasons. One of many is that you actually have way better social and interest graphs. As you exactly mentioned, I know exactly which column, which article, and, through the comments, I know exactly the sentiment about that article. So publishers, if done well, not only curate their own communities, which can be more sustainable for their business, but also actually curate way better quality data for the brands who want to advertise. [00:05:52] Speaker B: On publishers, it's now their first-party data. They're not relying on getting it from elsewhere. And as Apple blocks everything anyway, they're going to begin to grow and amass the right first-party data that they can then offer to their advertisers. [00:06:08] Speaker C: Absolutely. [00:06:09] Speaker B: So I'm just thinking about now, you've got the content providers, whether it's right or wrong, I'll just use the New York Times for a second, and General Motors as a car brand. General Motors doesn't do a generic advertisement in the New York Times or a generic advertisement on Twitter, but they'll do one very specific to, maybe, an article about EVs. That adds some sophistication that's required on the part of the brands. Do they have that level of technology yet? Or is that an investment that's going to be required by all the brands? Or does OpenWeb help them with that? [00:06:43] Speaker C: There is a term in the industry called brand suitability. With what's going on with Twitter and Elon, and long before that, brand safety is a very commonly used term. But under that, there's really this concept of brand suitability, which is what you're talking about. Because if you understand your audience through the first-party data of whatever platform or publisher you advertise through, then you have a better idea of the intent. And many people have many controversies about what's going on with Twitter.
But one of the controversial points is whether the data is quality data. And the publishers know, because they're content creators; if they own that first-party data, they should have that data. And brands, through that data, can directly know whether my advertising is suitable for the environment where the advertising is placed. Now, to your point, it isn't easy. I think many of the listeners might be familiar with this immense LUMAscape, where there are hundreds of players in the advertising and marketing tech space, right? You see on one end you have the publishers, on the other end you have the brands. But in the middle, there are so many players in between. So brand suitability and brand safety were so hard because you have SSPs, DSPs, ad servers. When you disconnect the real estate and the person who wants to buy the real estate, suitability becomes way harder. But if you skip all those middle players, you actually have the right data to tell, hey, this is how my rooms look, how my swimming pool looks, and this is where my garden is. It's much easier for the buyer to actually understand the surroundings and decide where they want the furniture to be. So I think suitability is much easier in this era. To your initial question, at this inflection point where publishers and brands start to build their own communities and therefore their own data, so-called brand safety and suitability become a more feasible problem to solve, which, technically, from an infrastructure perspective, was so hard that it was never a priority. [00:09:05] Speaker B: Do you see, then, some of these middle layers getting cut out? Do you see a market shift, where you'll be able to provide information about a specific community, so that the brands can go directly to the publishers and bypass some of those middle layers where everything really is quite opaque? [00:09:23] Speaker C: So I started my career in the US actually in the space of DMPs, and I was with Salesforce, and I had two patents with the USPTO about how to understand the journey insights, the touchpoints of a customer's journey in this opaque world, to figure out what your next best action is. Now, that was the world many years ago. Then fast forward: my last company and OpenWeb are playing in the space of helping our customers build their first-party data, to build their own communities in a safer and more trusted way. And I think when I look back, my personal thesis of where the market is going is the following. The past 20 years were what we call digital transformation. In the next 20 years, trust is the new digital transformation. All the companies which succeeded in digital transformation, or helped their customers drive digital transformation, are the companies we hear about today. Those are the likes of Salesforce, B2B companies that help their customers drive digital transformation, or companies that are directly B2C, whether in e-commerce, gaming or social media, that really centralize the data and monetize. But the future is different. The future is: if you don't place trust at the center of your strategic growth, if you don't place people at the center of your strategic growth, you won't be among the brands and the platforms and the names we hear in 20 years.
So with this thesis that trust is the next digital transformation for the next 20 years, coming back to what you were asking: I don't think any business, including publishers or brands, has a choice but to choose this new growth model, right? And because of that elevated awareness of the importance of trust, and the awakening of the new generation caring about safety and privacy, you started to see this paradigm shift. Finally we can break this opaque LUMAscape and go into a world which is trust- and people-centric, where you actually can connect the two dots, I mean, brands and platforms directly, and brands and publishers directly together. [00:12:07] Speaker B: I think about the way things were for placing ads, and I'm thinking about a Facebook ad placement interface where you click a bunch of boxes, and they got in all kinds of trouble because there were boxes they should never have allowed people to click, or Google AdWords. This allows you, or your brands, to work with the content providers and maybe have a different set of facets they might click on that become more important. Do you see that evolving away from that level? I mean, maybe things like location still matter, age matters, but maybe there are things that don't matter. Do you see the brands and the content providers evolving in terms of what's going to be on that kind of set of checkmarks for where to place an advertisement? Do you see that evolving? [00:12:56] Speaker C: Yeah. I think the general principle is it has to go with the consumer behavior change. The checkmarks are a way to gather the data. Again, I think in the past 20 years, three things happened: people were productized, data was centralized, and growth was above everything else. So every single checkmark, every single layout, every single user experience was optimized for one single thing: how much time you spend on site. Not for your social interaction; it's about how much time you're going to spend and whether you purchase that next item through an advertiser. [00:13:37] Speaker B: Right. [00:13:38] Speaker C: So I think what happened is we start to see, at least from the trust perspective you mentioned, people start to ask how their data is actually consumed. How can I recall my data in this whole new GDPR, CCPA and many-other-regulations world? So consent management by design started to actually be part of the user experience, right? And from a safety perspective, people start to ask: why, as an underrepresented voice, do I need to be assaulted? Today, over 40% of US Internet users have reported being harassed or subjected to hate speech. And users are saying, why should I be in this global digital society and not feel I'm part of the society? And people start to request: well, you need to embed a safety experience. In Horizon's metaverse, with many controversies, how do you actually include that in your experience? But even in the two-dimensional world we have today: how can I appeal if there was a wrong call about my behavior? And how can I be involved in the conversation where you design your safety enforcement and policies? So many new aspects of trust that new generations of users demand are now being considered in user experience design, which was never the case before. [00:15:04] Speaker B: So you wrote a tremendous piece for the World Economic Forum.
We're going to put a link to that in the show notes. It was talking about building a community economy, and a couple of things that we've already mentioned: kind of shifting away, giving the brands an opportunity to do more, particularly becoming a community builder, because they'll own more information about the people, their consumers. We're moving to this different type of economy; you called it shifting away from creator-led content towards a community economy. And we talked about safe spaces. Talk to me a little more about where you see all this going. This is really interesting; it kind of wraps up the directional piece, and then I've got a couple more questions as we get to Web3 and the like. [00:15:44] Speaker C: So there are three legs that lift this community economy. The first is the safety layer: how you create a safety net for everyone to be involved. I mentioned the data, about 40% of US Internet users being harassed or subjected to hate speech. If you don't feel safe, you can't really be engaged. Especially, you briefly alluded to Web3. There are many components of Web3; one of them is the metaverse. The immersiveness will make the impact way more impactful. So that's the first layer, and that's why all the content moderation has to be in place. Then the second layer, if you get that covered, is really the respect for data dignity, and the flip side of that is privacy. The whole idea of the community economy is how you put people at the center. So that's why I mentioned you have to respect consumers' data dignity. And the way to do that is to have a privacy-preserving, first-party data approach. So that's the second layer. And for businesses, it's very important. Think about that: the more you actually respect privacy, the more you construct data in a very clean, structured way with managed, consent-first practices, the more you actually can sustain your community. And on top of that, when you have people who feel safe and engaged, and you have people who are willing to give you the data to allow you to understand their social behaviors and interest graphs, now you actually can monetize, once your foundational layers are covered. It can be advertising; that's just one way of it. It can be e-commerce, it can be subscription. So there are many; it's the summit of diversification of monetization. But at the end of the day, if you look at that, it's no different from a Maslow pyramid, right? People want to feel safe first, and people want to actually have that interaction, which translates to the data. And then you actually can build the summit of monetization, for businesses and for people to really consume the content and commerce on the platform. So this is the idea, and these are the pillars, of the community economy. The last thing I will say is it's really a pendulum swing from what we called the attention economy before, where you centralize all the data, centralize all the people, centralize everything. And this is an awakening, a pendulum swing back to decentralization, where you put people back at the center, put community back at the center. [00:18:22] Speaker B: As I think about this community, we can have a community-led discussion. Let's say a piece of news is out, or something is out, and 100 people get engaged with it. Do you want the hundred people to just kind of have at it, or do you need some moderation?
So if I'm General Motors and I'm running an ad, does a General Motors communications or marketing person stay engaged in those interactions? They could obviously do more selling and the like. Or do you kind of set it up and move away? How do you see it, really? What's the optimal way for it to work if I'm a brand? [00:19:01] Speaker C: Yeah. I think we are all learning in terms of the best practices for brands and publishers to create their communities, for two reasons. One is we already learned the lessons of somebody who has already done that, which is the social media platforms, right? So if you look at social media platforms, what has been done is to start with just driving growth, no moderation. And then suddenly you see this proliferation, and when that happened, you already have toxicity online. It's kind of a big snowball rolling very fast; it's very hard to brake at that point. And since the 2016 election, there was an uptick in attention to that. It was accelerated by the cultural movements, like Black Lives Matter, Stop Asian Hate, then the Capitol insurrection, and now with what's going on with Twitter. So it's accelerated and accelerated. So what we have learned, lesson one, is: if you can put moderation, policies and enforcement in at the get-go, do it at the get-go. Because it always starts with what you want the brand to be, what the identity is, how you want a community to believe and act in your community; then set policies and enforce them. It's much easier to do in the beginning, which is the term we call safety by design. So that's lesson one that brands and publishers can learn. The second thing we can learn from what's happening is kind of the evolution of social media, what people call Web 2.5, or the cusp moment between the social media era and the new generation of business models and revenue models: companies like Roblox and Discord. They're very community-driven platforms. And I think social media went from multi-dimensional to one-dimensional, became more broadcast, in many ways. And on those platforms, another thing we can learn is they always have facilitators and moderators on each server and in each community. And it was a necessity. If you ask anyone at Discord or Roblox, they will tell you you almost cannot do without it. Because, first of all, in many cases, the policy wasn't clear, enforcement was not there, so you have to facilitate. But even if you get policies right and you get enforcement right, you still need a person to keep an eye on things before things go wild, and to have people facilitate that conversation. So that is still happening. People are still learning the best practices of that. So I think for anybody who is starting to build their own communities today, the good thing is you're not starting from scratch. There were so many mistakes made; learn from them. I taught a class two years ago just to gather all the best practices in the space so that people can learn and take shortcuts. Because we made many mistakes; why repeat those? I think the good news is Web3 builders should get things right at the get-go, or at least avoid the mistakes that we've seen in Web 2.0. [00:22:17] Speaker B: Two things coming out of Web3. One, obviously, you made the point: it used to be centralized data controlled by some large behemoth back in the cloud. And not only is there a security risk, but there's a what-are-they-doing-with-this-data risk.
So Web3 pushes it back out into the hands of the users. So I've got a couple of questions for you. The first one is: if I've got my data, I can give you my name or I can give you my avatar. Doesn't anonymity change in this world? Is it still not a good thing to allow anonymous content? Clearly, if there's a moderator there, maybe it doesn't matter as much. I'm curious what your thought is on moderation versus people still having to have validated identities. And let's not go down the blue-checkmark path. [00:23:06] Speaker C: Yeah. Gosh, I feel in our space, once it gets to the point of identities, it gets murkier, I have to say. So the way I look at this is you really have three levels: you have message-level moderation, you have conversation-level moderation, and you have identity-level moderation. And actually, probably one layer I'd add to it is: do you moderate within your platform, or do you moderate across platforms? It gets even trickier, right? And very quickly it gets to the controversy about censorship if you do cross-platform. So the industry standard and practice right now is you definitely moderate at a conversation level and a message level. At a message level, people usually use what we call keyword-based technologies to do moderation, and usually that doesn't suffice; I will explain why. At a conversation level, we have what we call contextual AI: you use the context and metadata to determine if the message is safe or not, based on different use cases. So contextual AI is usually kind of the gold standard right now for moderation. At the identity level, it's less important to know exactly who that person is, except when it's about age verification, especially for the underage-users use case. But beyond that, more and more companies and platforms and brands are looking at a reputational score: how you behave, what the history of that identity on my platform has been. Is the person pretty toxic so far? And based on that reputational score and how that person just acted in that incident, the platform or brand or publisher would decide what action to take. Is that a warning? Is that a ban? Is that a permanent ban? Now, cross-platform moderation is much harder, but you can't avoid talking about it if you really talk about the metaverse, right? The whole idea of the metaverse is I can jump from a gaming platform to an e-commerce platform, to another gaming platform, to a dating platform. You kind of have an identity across different platforms. But the question is: do you carry the same reputational score from this game to the next game, and to the e-commerce and dating platforms? Maybe not, right? And do you carry this reputation from this publisher to that publisher and the other publisher? Maybe. One thing I do want to mention, because digital trust and safety is a pretty nascent industry, is that what I encourage a lot is cross-disciplinary collaboration. This is an area where you need not only technologists; you need people with social science and behavioral science backgrounds to actually collaborate on that, which is often what is lacking when you zoom in only on the Silicon Valley technology builders and forget the broader societal elements. But we are building technology to serve society, so you have to think about those aspects.
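To make the three moderation levels above concrete, here is a minimal sketch in Python of a tiered pipeline. This is an illustration only, not OpenWeb's actual system: the blocklist, the toy contextual score, the reputation decay and the decision thresholds are all invented, and a production system would replace the contextual stub with a trained classifier and policies tuned to the platform.

```python
from dataclasses import dataclass

# Hypothetical blocklist; a real system would use curated, evolving term lists.
BLOCKLIST = {"badword1", "badword2"}


@dataclass
class UserRecord:
    user_id: str
    reputation: float = 1.0  # 1.0 = clean history; drops with violations


def message_level_flag(text: str) -> bool:
    """Message level: keyword-based check against a blocklist."""
    return bool(set(text.lower().split()) & BLOCKLIST)


def conversation_level_score(text: str, context: list[str]) -> float:
    """Conversation level: stand-in for a contextual-AI model.

    Returns a toxicity score in [0, 1]. A real system would run a
    classifier over the message plus surrounding thread and metadata;
    here we combine the keyword flag with a toy 'heatedness' signal.
    """
    recent = context[-5:]
    heated = sum(msg.isupper() for msg in recent) / max(len(recent), 1)
    return min(1.0, 0.3 * heated + (0.9 if message_level_flag(text) else 0.0))


def moderate(user: UserRecord, text: str, context: list[str]) -> str:
    """Identity level: weigh this incident against the account's history."""
    score = conversation_level_score(text, context)
    if score < 0.5:
        return "allow"
    # Each violation erodes the stored reputation, so repeat offenders
    # cross the harsher thresholds sooner.
    user.reputation = max(0.0, user.reputation - 0.4 * score)
    if user.reputation > 0.5:
        return "warn"
    return "ban" if user.reputation > 0.1 else "permanent_ban"


if __name__ == "__main__":
    alice = UserRecord("alice")
    print(moderate(alice, "great game last night", ["nice shot!"]))  # allow
    print(moderate(alice, "badword1 you all", ["WHAT", "NO WAY"]))   # warn
```

The point of the sketch is that the verdict feeds back into the user's stored reputation, so the same message draws a harsher response from an account with a toxic history, which is the reputational-score idea described above.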
[00:26:33] Speaker B: I like the idea of, let's get it right with context-based moderation. There's going to be AI and a layer of moderation on top of that. But let's get it right. So if it happens to be an article about climate and EVs, or a section about the New York Knicks, or whatever it might be, you could begin to manage that. And it does make sense, staying with the Knicks thing for a second, that if there's a bunch of sports providers and I am a troll doing bad things on one particular site, it would be great if, like you said, the platforms began to recognize a list of bad people, which means identity has to play a bigger role in it. So you co-founded the Oasis Consortium, and it's really the first set of user safety standards for Web3. This is kind of what we've been talking about. I don't need you necessarily to describe it, unless you think you need to, or if we've kind of covered the issues you're trying to figure out in this consortium. [00:27:32] Speaker C: I founded it in 2020. It was when big brands like Unilever and Procter & Gamble boycotted platforms like YouTube and Twitter due to hate speech. And very quickly I realized the problem is not only technology, right? The problem is that the people on the brand side don't necessarily appreciate how the platforms and publishers do the moderation, what they can do and what they should do. So there is this gap between expectations and feasibility. So very quickly I was saying to myself, the best way to move the industry forward is to get the people on both sides together. The Oasis Consortium was founded on that basis: let's get brands, agencies, publishers and platforms all together and discuss what has been done, what is expected and what should be done next. And very quickly we came to the conclusion that there wasn't a definition of good. People know when bad things happen, but people really don't know what is enough: how much should I do so as not to end up in a New York Times headline for scandalous news because I didn't do my moderation properly, right? So very quickly we said, let's build the user safety standards for the upcoming Web3, learning from all the mistakes we made, and that serves as the best practices and the golden metrics you can start with. And the idea is we can evolve that every year. We are also bringing in more academia and other nonprofits to look at the standards. Because when you look at user safety, for example, user safety is the basis for brand safety, right? At OpenWeb, the first thing we think about is how we keep users safe, and then, as a result, you'll find the foundation is there to keep brands safe. I read an article recently, very interesting; it asked, are we really doing brand safety to keep people safe or to keep brands safe? Right? We need to take a pause to think about that. The whole hypothesis of why we need Web3 is to give the power back to the people, because people demand it. So you can't keep thinking just the old way. And for brands to grow, they want to partner with the platforms and technology vendors who actually understand how to make people feel safe. So coming back to what you were saying: user safety, for OpenWeb and for Oasis, is the foundation of brand safety. So we built the user safety standards, and then we bring in all the people who care about brand safety and say, hey, this is what you can actually do to keep users safe. And if you keep users safe, very naturally, you actually can keep brands safe, and not the other way around. [00:30:37] Speaker B: And I have this vision now of a bit of a Venn diagram with users and brands and publishers all together.
And it's really this safety radiating out, and it's terrific. Tiffany, this was a great discussion. Thank you so much for spending the time with us. We really appreciate it. [00:30:52] Speaker C: Thank you, Jon. I loved the conversation. Really great questions to spark this wonderful conversation.
