Adapting teams to GenAI with Marketing AI Institute's Paul Roetzer

Episode 28 October 23, 2023 00:26:32
The Georgian Impact Podcast | AI, ML & More



Hosted By

Jon Prial

Show Notes

On this episode of the Georgian Impact podcast, we're talking to a guest we last had on in 2020. Back then, we were talking about AI and marketing and how to use things like automation tools to make our jobs easier. Now in 2023, generative AI tools are basically the biggest topic of conversation. So we're here to break that down with Paul Roetzer.


Paul is the author of several books on marketing and AI, including Marketing Artificial Intelligence, and he's the creator of the Marketing AI Conference.




Episode Transcript

[00:00:00] Speaker A: The material and information presented in this podcast is for discussion and general informational purposes only and is not intended to be, and should not be construed as, legal, business, tax, investment advice, or other professional advice. The material and information does not constitute a recommendation, offer, solicitation, or invitation for the sale of any securities, financial instruments, investments or other services, including any securities of any investment fund or other entity managed or advised directly or indirectly by Georgian or any of its affiliates. The views and opinions expressed by any guest are their own and do not reflect the opinions of Georgian.

Hi everyone, and welcome to the Impact podcast. I'm your host and Georgian's content editor, Jessica Galang. When we last had today's guest on the podcast back in 2020, we were talking about AI and marketing and how to use things like automation tools to make our jobs easier. Now in 2023, generative AI tools are basically the biggest topic of conversation in marketing right now, and it feels like so much has changed. While I feel like there may be some anxiety around the ways generative AI will impact our work as marketers, I think a lot of marketers are also really excited about the new opportunities that gen AI will bring. So we're here to break that down with Paul Roetzer. Paul is the author of several books on marketing and AI, including Marketing Artificial Intelligence, and he's the creator of the Marketing AI Conference. So I'm really excited to have Paul on the show to talk about how generative AI will impact marketers and also how it'll impact organizations. So welcome, Paul.

[00:01:31] Speaker B: I can't believe it's been three years. I feel like we've lived a lifetime of AI progress in three years.

[00:01:36] Speaker A: So Paul, would love to hear about your journey working with generative AI and some of these tools.
Is there anything that you've discovered and anything that stands out to you, especially in the last few months as so much has changed?

[00:01:46] Speaker B: I think for a lot of people, ChatGPT was their first real hands-on experience with artificial intelligence, and so language generation is really where a lot of people's minds go with generative AI. Obviously, DALL-E 2 prior to that, with image generation, and now we have Midjourney and Stable Diffusion, and so image and language generation are the two main things. But I think with language, the assumption is that it's a writing tool. In a lot of cases, people think of it as a writing replacement tool. I actually don't think of it that way. I don't use AI to write anything. I'm a writer by trade, though, so I actually enjoy the process of writing. I find it therapeutic and helpful to think things through. It's how I think and analyze things. However, that being said, I use GPT-4 all the time for ideation and summarization. We use other tools for transcription. We develop outlines for things. So I'm always using it in many ways more as a strategy tool than a writing tool. And so a lot of the stuff I use GPT for and other AI writing tools for is stuff that no one would ever actually see publicly. The stuff I write and put on LinkedIn and other places, I don't use writing tools for. So strategy tool, ideation tool, that's the thing that to me is undervalued right now. And then a data analysis tool. In the very near future, a lot of marketers and business professionals are going to use it for data analysis. In the same way you would give a text prompt to get an output, it'll analyze your data for you.

[00:03:13] Speaker A: It's interesting you say that, because I think we're both former journalists, so I really understand that sort of creative process and that need to put in your 10,000 hours in order to actually get good at this thing.
And my perspective as well is, in order to prompt the gen AI tool properly or be able to create something good, you need to know what good actually is in the first place. I'd love for you to break that down a little bit. You mentioned that you use it as sort of a strategy and ideation tool. In what ways are you doing that specifically? Especially on the strategy side?

[00:03:39] Speaker B: Yeah, so, I mean, I'll do it with ideas around business models. I was thinking about launching a separate company. And so I go in and I say, okay, what are the fundamental elements I need to do to build a startup, as an example? And so I can go Google search that, and I can spend an hour pulling the latest list. I can sit there and think back in my own mind about the different companies I've launched over the last two decades. But in 5 seconds I can get a full-blown checklist from GPT-4 to launch a company, and say, make sure to include all the financial to-dos, all the legal to-dos, and it'll actually generate a tactical list of things I need to do to build a company. So really, anything where I have to think about a strategy, you can actually apply this. I did another one for a friend of mine who runs a dental practice. This was the prompt: costs are going up because of inflation. Pricing is controlled by the insurance companies. What are ways that dental practices can drive profitability or maintain profitability despite the increase in costs? And it spit out ten things immediately that dental practices could do to apply AI to drive efficiency. And not being the expert, I just sent them to my friend. It was like, here, you're the dentist. I don't know if these are any good. And he came back, he's like, man, three of these are absolutely worth exploring. It's like, cool. So to your point, he's the expert, so he can assess the output and know if it's any good.
But I know how to go in and prompt the system to get something out of it. So I think, for people, like you're saying, if you have domain expertise, you're way better at prompting the machine and then figuring out if what it gave you is any good. So you still need the expertise and the knowledge to assess the AI.

[00:05:17] Speaker A: I think there's a lot of questions about how gen AI is redefining creativity and what that means when you can prompt a thing and it can make something that closely resembles a human and is trained on human expertise as well. What do you think about gen AI's role in enabling creative work and how that's redefining creativity?

[00:05:34] Speaker B: If you allow it to be, it definitely is an assistant in creation. You can use it to enhance your own creativity, to stimulate ideas when you're not able to get the ideas going and you're staring at the blank page. So I do think that people have to really think about these language, image and video generation tools as ideation engines, as a true assistant there to help you along and take an idea and expand it out, or write a thing and then make it more empathetic. Really think about how it can help you improve and enhance the output and expand your own creativity. I think so many creative people immediately just assume it's going to replace them, and so they're hesitant to even dive in. They don't even want to experience it. And my feeling has been the opposite is true. My friends who are graphic designers can get way more value out of an image generation tool than I can, because I don't even know what to tell the thing. I'm not a visual person. I can't explain what I'm trying to get out of it.
If we just think about a logo design process, for example, if I was going to work with a graphic designer, I would try and put into words what is in my head of what I want that logo to look like, and I wouldn't be very good at it. Whereas I can go into, like, DALL-E 2 or Midjourney, and I can start just playing around with prompts to create imagery that starts to inspire me. That's like, yeah, that's kind of what's in my head. And now I take that and I give that to my designer and say, here you go. This is what I'm envisioning now. Go do your thing. But now they have a starting point that's more than my words. And that's the biggest challenge I've found working with designers, and my wife's a painting major, so working with artists, they just see things. I do not have that ability. So in many ways, AI can enhance my creative ability, because I can now actually get my words into something that gives me a starting point to go work with the actual artist to create something.

[00:07:24] Speaker A: When you were last on the podcast in 2020, you mentioned that adoption of AI tools was low at the time. Are you seeing a change now since it's a little bit more accessible?

[00:07:33] Speaker B: Oh, yeah. So we do an intro to AI for marketers class every few weeks. We've done 25 of them now, starting back in November of 2021, and we've done six so far this year in 2023. We ask, have you experimented with ChatGPT? The first time, in January, it was like 63% of people said yes. The last one we just did was 86%. And now when I ask that question when I'm giving talks, it's basically 100% of the room. Like, everyone has at least tried ChatGPT. Now, adoption in terms of infusing it into your daily workflows, if we want to kind of consider that adoption, my guess is it's still low overall, but certainly ChatGPT and generative AI rapidly advanced adoption rates and infusion into processes.
I think image generation and language generation became the very obvious gateway for people to start using AI on a daily basis that they just didn't have back in 2020.

[00:08:30] Speaker A: Are there ways that marketers or even yourself are thinking about using gen AI as part of the broader content strategy? Because I feel like right now we're in a place where it's really helpful for ideation and sort of just ad hoc, getting a vibe check if you're doing some work. But are there ways to tangibly, at this point, include it as part? Like, this is part of our research, for example. This has to be part of it. There are certain prompts that we use. Are we there yet when it comes to actually making gen AI part of the formal content or marketing strategy?

[00:08:59] Speaker B: Yeah, definitely. I mean, I'll just use our podcast as an example. So in our podcast, there are between 18 and 21 steps we go through, from the curation of topics each week to then selecting the main one. So our format is three main topics and then rapid fire. So me and Mike, my co-host, basically throughout the week just share links back and forth. He then takes those, synthesizes them and puts them into an outline. So there's not much AI involved there. But then from there we record it, and then we do a transcription. So AI does the transcription. Then we can take chunks of the transcription and drop it into GPT-4 and do a summarization of those transcriptions. Those summarizations become blog posts. So we turn each main topic into a blog post, and then the whole thing. So that podcast now just created four blog posts. We also use AI to help split each of those segments up into videos. So we create four YouTube videos as a result of it. Then we use AI on the transcript to create social media shares for LinkedIn, Facebook and Twitter. And we use image generation technology to create the images for each of the blog posts and the social shares.
So, like, we've infused AI into a process that was already happening and drove massive efficiency in the entire production and promotion of each podcast using those tools.

[00:10:13] Speaker A: So I guess it's just a matter of understanding the different points throughout your own process and integrating it where it fits for yourself. So actually, I wanted to ask as well, a lot of what we cover is actionable advice, of course, for founders on our show who are thinking about how to integrate it into their business. Do you have any advice for people leaders, or any leaders of teams, who are getting their teams used to using gen AI tools? Because even as you mentioned, there's some hesitation maybe to even adopt these tools, or people can sometimes give up easily when it comes to prompting and it doesn't turn out how you want it to. So what was sort of your journey in navigating getting teams used to and embracing gen AI in the team?

[00:10:49] Speaker B: It starts with education. You can't just assume the team's going to figure this stuff out. Yes, they're maybe using ChatGPT already, and maybe they don't even want to tell you because they're not sure if they're allowed to, things like that. So you have to start with education within the team, so everyone's on a level playing field of what exactly this technology is and how it works. We then advise people to build an AI council. So have a few people internally, or a cross-functional team if you're a bigger organization, of people who are thinking about this technology on a regular basis and how it impacts the business moving forward. We always recommend having responsible AI principles and generative AI policies. The responsible AI principles are bigger picture: how do we think about the application of AI in our organization? How do we put humans at the center of that application? Generative AI policies are the policies of how and when we use generative AI technologies.
We do or we do not create blog posts with it. We do or do not use image generation tools. So you're kind of setting those guidelines. From there, I would look at your team and do an AI exposure assessment. How likely over the next one to two years is AI to automate portions of these people's jobs? So you start to think about the impact it's going to have on your team. And then the last thing I tell people is build an AI roadmap. So set a three-year vision for becoming a smarter organization. We don't know what the tech's going to look like in three years, but you can put the policies and processes in place to stay at the forefront of this stuff and figure out how to infuse it into your own company.

[00:12:12] Speaker A: Okay, there's a lot of helpful advice there, and I want to touch on the responsible AI bit later in this podcast. But is it that the AI council helped inform some of the AI principles that you adopted? What were some of the things that you were thinking about when it came to creating that policy around generative AI?

[00:12:27] Speaker B: So for us, I mean, we're kind of inventing all this in flight, because there aren't really models to go look at. Our responsible AI principles came from years of thinking about the fact that the industry needed them, and that most people that had them were the SaaS companies building the tech, but the average marketer and brand didn't have them. And so I literally just wrote twelve principles one morning and then just published it, put it out into the world and said, here, it's Creative Commons. If you don't have one in your brand, which most likely you don't, use this as a starting point. Think of it as a template. For us, that's just kind of how it formed. The council was never really a formal thing we created, because we were doing all this every day, because it was what we did running the institute, so we had to stay at the forefront of all of it.
But we're starting to see some people within our community that are doing this. There's one big technology company in particular that has 15 people on their AI council, and it sort of formed with people raising their hands that they're interested in being a part of it. And then they got to a point where they're doing weekly meetings and they're developing processes for how to share information back and forth and then how to build action items based on that. So the AI council, to me, is a relatively new thing. I'm hearing of organizations that are starting to build them, but the simplest way to think about it is just formalizing information sharing. So you may have three or four people on your team who are obsessing over this AI stuff right now. You're listening to podcasts, you're reading articles, and maybe it's on Slack or email, you're just sending each other stuff. That's great, but that's not scalable. So the council helps start to put guardrails around this. Okay, here are the five people today who are really interested in this. Let's bring them together and create the council and have some weekly or monthly meetings. And the meeting agenda is going to be: what are the latest things? Is anyone in our company running pilot tests we need to know about? What are the core tech companies we're working with? What did they announce about their AI initiatives this month? So you can start to build your own framework of what that should look like.

[00:14:19] Speaker A: And diving into those responsible AI principles. So yeah, trust and data privacy, these topics are really at the forefront of the conversation. For example, do you know how companies are using the data that you input, and are you sharing sensitive information? Or even just, this technology is evolving so quickly, what does it mean in the future for jobs?
And even the fact that these models are trained on so much of other people's third-party work and information, what are the ethics around using it? So I'm curious, what are the responsible AI principles that you adopted? And I guess, what is top of mind for you when it comes to responsible AI?

[00:14:54] Speaker B: So we talk about believing in a human-centered approach that empowers and augments professionals, that technologies should be assistive, not autonomous, which, again, you just start to get a sense of the human-centered approach to this. That humans remain accountable for all decisions and actions, even when assisted by AI. The human has to remain in the loop in all AI applications. We believe in the critical role that human knowledge, experience, emotions, imagination and creativity play, and will promote those capabilities as part of future career paths. There are elements that get into the fact that law isn't going to catch up anytime soon. The regulatory bodies aren't going to fix this for everyone. So brands need to have a moral compass, and they need to make decisions that align with the values and principles of their organization and the culture they're building. We talk about the need to not dehumanize your customers by just turning them into data points, because data trains all this stuff, and to not make decisions that aren't in the best interest of the humans on the other side who are those data points. And then a big one is the commitment to upskilling and reskilling team members who maybe have larger portions of their jobs intelligently automated in future years. That your first instinct isn't, let's save costs. The first instinct is, how do we redistribute their time to other, more fulfilling activities? So those kinds of things, where it's very human-centered and always, how does this benefit the human?

[00:16:14] Speaker A: I think a big principle coming out of that is having a moral compass as well.
I feel like tech historically has not had the greatest reputation on this. So what does it mean for a business that's maybe looking at this and wants to adopt this technology ethically and responsibly? What does it mean to integrate a moral purpose or a moral compass into that work? Just for background, at Georgian we have a thesis called product-led purpose, where we believe that companies that do have a purpose can use that to drive growth opportunities and strategies as well, while making a positive impact in the world, which is why I'm particularly interested in this area. So, long question short, how does a moral compass tie into building an AI strategy, tangibly?

[00:16:53] Speaker B: I mean, my instinct is you either have one already or you don't. And if you're working at a company, you already know whether or not they have a strong moral compass. So the moral compass isn't being created because of AI. It's, you're good people doing good things in the world, and so you're going to apply AI in the most responsible way possible. If you're working for an organization that generally takes shortcuts on data privacy and policies and doesn't value their people or their customers, then you probably already know that they're likely going to take shortcuts with AI too. Some of this is just common sense, building on what great companies already have, great culture, great people, great missions, and saying we're going to follow the same patterns. Even though AI is going to enable a bunch of really interesting things, we're going to have to make some tough decisions not to use all of the capabilities of AI, because sometimes it's going to cross lines for us. Yeah, it's not against the law, but we're just not going to do it, ethically or morally.

[00:17:53] Speaker A: You also mentioned the need to not just look at your customers as data points and just use that data.
Have you thought about what kind of guardrails companies can use to ensure that they're protecting their customers' privacy, and considering the human on the other side of that?

[00:18:07] Speaker B: I think it really just comes down a lot, again, to how do you treat people now? It's not like we didn't have the ability to do personalization and capture a bunch of data and buy third-party data and blend that. We've had the ability to do a lot of these things for the last few years through just basic machine learning capabilities. Generative AI wasn't there yet, but the machine learning was, where we could dump all this data in and make a bunch of predictions about people, and you could go get all kinds of data about people. And so some companies probably had guardrails that say, we're not going to know that about that person. Yes, we could buy that data and know it, but we don't need to know that to do what we're doing. And so again, I think this comes down to the data practices and policies the organization probably already has thought through, and making sure that if they have been ethical about that to date, they don't go down the path of feeling tempted to maybe break some of that. Because what has happened in tech is some of the ethical AI teams at these big companies have been let go. Not the whole teams, but certainly some high-profile cases, and some others that probably weren't as high-profile, for sure.

[00:19:13] Speaker A: And do you think in the future there are going to be editorial policies or disclaimers for using AI in work? Because, for example, when you're reading a news article, if a journalist is connected to a subject, they have to disclose that. And I know that I've seen some organizations put out policies about how they're using AI, and as you've mentioned, you'll never use it to just straight write blog posts and things like that. Do you think that's going to become more popular among different organizations as well?
Is being transparent about that?

[00:19:37] Speaker B: Yeah, we definitely teach that all the time, what we call the generative AI policy part of it. Wired magazine has a really good one. I often reference that when I give talks. There are just, like, seven really quick things. We will not write articles with it. We will use it for ideation, but not otherwise. We will not use image generation technology. They're very clear. But I think that it's critical that every organization, not just media companies, but every company, develops these, because you do have employees right now who aren't sure if they're allowed to use ChatGPT. And even if they are, they might not know not to put proprietary information into the prompts. So you have people just copying and pasting notes from confidential meetings into ChatGPT to summarize them, not knowing that data goes back to OpenAI and trains the next models. So the generative AI policies, to me, are very important. And I think what will happen is brands will not have every article where it says, oh, an AI wrote 30% of this. You're not going to need that level of detail. I think what you'll do is have your generative AI policy link at the bottom, almost like your terms of use. And then anytime someone wants to go see how you're using AI, they could just click on that and it's right there for them.

[00:20:42] Speaker A: So, you know, you're the head of the Marketing AI Institute. What role do organizations like yours, or even just tech, have in coming together to keep this top of mind and create some of these policies or best practices?

[00:20:55] Speaker B: For me, putting out the responsible AI manifesto and just making it basically open source, Creative Commons, was an effort to try and make a bigger play into this space. Because what happened was, before ChatGPT, nobody cared about any of this. Well, the ethicists cared, but the general business world didn't care, because they didn't understand it.
They didn't realize why it even mattered, because they didn't understand what a language model was. They still don't. There's more awareness now about generative AI. And so as that awareness level has skyrocketed, smart, moral people are starting to be like, well, hold on a second, there's more to the story here. And it's like, yes, okay, thank you. We're now to the point where we can talk about the important stuff. But going back to 2019, at the first Marketing AI Conference we did, we had a panel on ethics, and what I told my team is, it is a general session panel. I am going to force the people coming to this conference to hear this conversation from day one. And so that's how I feel about it now. Anytime I go give a talk, I make sure to infuse the responsible AI principles into that talk. So if it's in front of 300 people or 3,000, doesn't matter, we're going to have this conversation. And that's my way of trying to advance it as much as possible: make sure it's always a part of the conversation, whether it's an event we're running, online courses, presentations, whatever it may be.

[00:22:13] Speaker A: Okay. And looking a little bit towards the future and how we're using some of these tools, are there any trends that you're really excited about? It can be about marketing, but if you want to, expand that broadly in terms of how it'll change businesses and how it'll make people's lives potentially easier.

[00:22:26] Speaker B: I mean, there's definitely things I'm excited about. There are things I worry about. I think the next major breakthrough is these action transformers, the ability for the machines to not only generate things, but take actions on your behalf. Like, think about it booking your flights for you, or planning a trip and then actually going through and scheduling reservations, doing everything for you, not just giving you a here's-what-to-do.
And we're seeing the early signs of that with these AutoGPTs. They're not good yet, but they're going to improve really quickly. So I don't know that I would say I'm excited about those. I worry about those probably more than I'm excited. But in terms of the things that I do look forward to and that give me hope, I believe we're going to enter a rapid phase of innovation and entrepreneurship. I think you're going to see amazing companies built that disrupt industries and create lots of jobs along the way. Not as many jobs as a traditional company would have, but you're starting from scratch, so any job is a job. I feel like we're going to see entrepreneurship go. I feel like we're going to see people emerging within companies at every level who raise their hand and want to help figure this stuff out and lead. And since most organizations don't have people internally who understand the business side of AI, there's lots of opportunity for people to advance their careers and do really interesting things very quickly. So I'm excited by that. I am excited about the idea of enhancing creativity. I think it's going to be a little messy. I think it's going to be disruptive to creators and artists. But I think there's going to be a lot of positive that comes out of that. And I just think that at a broader level for humanity, there are going to be massive scientific breakthroughs, incredible things that we just didn't think were going to be possible in our lifetimes. Probably in the next five to ten years, you're going to start hearing about these massive breakthroughs in biology and astronomy, just amazing, inspiring things for humanity. And that gives me hope. I think a lot of good will come in the end from this, but it won't be a straight line and it won't be all good along the way.

[00:24:19] Speaker A: Amazing. That's all from me. But was there anything that we didn't cover?
Whether it's related to your work or just with generative AI and marketing or the world? I feel like we got into some pretty deep topics here.

[00:24:29] Speaker B: It goes way deeper. I mean, I'll tell you, some of the conversations I end up having after these talks are at private dinners. People are scared. I think the biggest thing for me is, if this topic is abstract to you, if it feels overwhelming, if it scares you a little bit, you are not alone. The majority of the world feels that way. I've been thinking about this stuff and working on it for twelve years. I have had time to come to grips with our future and kind of what this stuff means. And I spend most of my time now trying to help other people just understand it and figure out what it means to them and how they can leverage it. But being real, it's going to hurt sometimes if you're an artist. My wife, like I mentioned, is an artist. The first time I had to show her DALL-E 2, it wasn't an easy thing for me to do. And I had to show my ten-year-old daughter, who wants to be an artist, that AI can create images. That was a weird experience. And then we held an AI for Writers Summit in March. We thought we were going to get 1,000 people. We had 4,200 register, because writers are scared. They think they're going to be replaced by these things. I would encourage you to keep learning. Don't be afraid to ask questions, and don't feel like you're alone in fearing that this is going to take your job. A lot of people are in the same spot. So I think the more people are talking openly about this, the better off we're going to be as a society and certainly as an industry.

[00:25:48] Speaker A: Awesome. I think even before gen AI started making headlines, this idea of how to adapt with AI and how to showcase your value as a marketer or as a creative has been top of mind for a lot of us, so it'll definitely be a learning curve.
But to your point, I think it helps people to know that they aren't alone in this journey. So Paul, thanks again for coming on the podcast and for diving deep into AI adoption and how to build teams and values around AI adoption. It's so appreciated, and thank you again.

[00:26:15] Speaker B: Happy to do it.
