How Georgian's AI team supports companies in adopting GenAI

Episode 1 | November 23, 2023 | 00:20:35
The Georgian Impact Podcast | AI, ML & More


Hosted By

Jon Prial

Show Notes

Generative AI is redefining businesses with its capacity to write text, generate code, execute tasks, create images, and more. Gen AI is fundamentally changing how companies have to build their products.


This is the first in a series of podcasts featuring our AI team, where they share their experiences on the generative AI work they've already done with more than 20 of our portfolio companies. In this episode, we are joined by two technical leaders of Georgian's R&D team, Parinaz Sobhani and David Tingle. Parinaz is the Head of AI at Georgian, and David is the team's Engagement Manager for our work with our customers.



Episode Transcript

[00:00:00] Speaker A: The material and information presented in this podcast is for discussion and general informational purposes only and is not intended to be, and should not be construed as, legal, business, tax, investment advice, or other professional advice. The material and information does not constitute a recommendation, offer, solicitation or invitation for the sale of any securities, financial instruments, investments or other services, including any securities of any investment fund or other entity managed or advised directly or indirectly by Georgian or any of its affiliates. The views and opinions expressed by any guest are their own views and do not reflect the opinions of Georgian. Welcome to the Impact Podcast. I'm Jon Prial. Generative AI is redefining businesses with its capacity to write text, generate code, execute tasks, create images and more. Gen AI is fundamentally changing how companies have to build their products, and many companies are working hard to keep up with generative AI. This is our first in a series of podcasts featuring our AI team, where they share their experiences on the generative AI work they've already done with more than 20 of our portfolio companies. With me today are two technical leaders from Georgian's R&D team: Parinaz Sobhani and David Tingle. Parinaz is Georgian's head of AI, and David is the team's engagement manager for our work with our customers. They'll talk about how companies are building with Gen AI, how we've worked with our customers, and how finding differentiation has changed with Gen AI. Pari, we often talk about how technology development needs to basically be done across an entire company to some degree, no longer just by an AI team or a data science team. Is this still something you agree with? Has anything changed?

[00:01:53] Speaker B: I do agree, and I believe even before the GenAI excitement we were actually recommending exactly the same principles: data science and AI teams shouldn't work in isolation; it should be cross-functional collaboration. We should treat it as a top-down problem, starting with the actual problem, the customer's pain points. We should have more collaboration between the engineering and data science teams, and the type of skill set that you need is not only people with a science background; you also need people with an engineering background. You also need to make sure that you have the right data foundation and the right data pipelines, so it's not garbage in, garbage out. To some extent I think all these principles are still valid. What maybe is different is that building proofs of concept is easier now, because these technologies are very accessible, and the level of performance that you can quickly get to using those pretrained models or foundational models as a base, I guess that's also a big difference, because you can easily get to a good level of performance even without bringing your own data in. And of course you can bring your data in and then build on top of that.

[00:03:06] Speaker A: Interesting. And of course, it's not just the teams, but the data and how companies build around all that. But I think we should go back. I'd like a little bit of history on how our AI team operates and how we've evolved over the past few years. Can you give me some background, please?

[00:03:21] Speaker C: I think data and machine learning have always been at the core of Georgian's investment thesis.
We've defined a thesis around data and machine learning very early on in the Georgian journey, and so a lot of the early hires were technical folks who had interesting perspectives on the market and on investment opportunities from a technical perspective, specifically in the area of data and machine learning, people like Matz, who's our head of R&D, or Parinaz; both of them have PhDs in machine learning. And over time, Georgian has kind of built around that core with engineers and research scientists to form a team that is focused on helping our customers build and deliver software and code, specifically in the area of machine learning. And that's kind of what we still do in the generative AI world, but a lot of the tools that we're using have changed to incorporate these new large language models and other innovations in the space.

[00:04:17] Speaker A: So does the need to have different skill sets look totally different, slightly different, for companies that are trying to adopt Gen AI? And of course, the question is even more important now.

[00:04:28] Speaker B: Yes, the number of use cases that these technologies can solve has expanded. What does it mean? It means that there might be more and more automation and less and less human in the loop. And it's very, very hard to actually evaluate and monitor the performance of these models. At the end of the day, that's a very, very hard problem. Still, many organizations are dealing with what the right framework for quality assurance would be. We know some of the challenges with these large language models, like hallucinations: they can make things up, and they are not very consistent. You might ask a similar question and get very different answers. These things have always been important, but now it's even more important to have diverse teams thinking about some of these challenges, because of the number of use cases, because of more adoption and because of more automation.

[00:05:24] Speaker A: Okay, interesting. But just in case any of our listeners are not familiar with hallucinations: it's a well-known technical challenge where large language models are involved. Because they generate output by predicting the next word, that output can be fabricated, factually incorrect or nonsensical; they're not tied to a corpus of facts, they're just tied to a corpus of lots and lots of text that makes sentences. But enough of that. I mean, it's too easy to just spend time on cool examples that we could talk about over dinner with friends and family. Yeah, I guess I really am a nerd. But let's talk about these foundational models. In many cases, companies are going to start with a model from one of the big tech companies: Amazon, Meta, OpenAI. But aren't the competitors of these same companies also working with these big tech models? So what's the starting point, and where do things evolve to from there?

[00:06:14] Speaker D: Like a lot of things, the starting point is where the company's currently at in terms of infrastructure, what they've already adopted, what partners they already work with, and most of these large platform providers that provide services or technology in other domains offer solutions in this space. And so the starting point, a lot of the time, is kind of quick adoption of what's closest at hand.
But I think what we're seeing is many of our partners that we work with are monitoring things like the leaderboards for model performance and new releases for both open source and closed source models, and they're really staying on top of the ecosystem. And I think there's a desire to be able to move between models, between foundational models, and also the technology that layers on top of them.

[00:07:00] Speaker A: It will be interesting to see how startups begin to move between open source models and closed source models to build their products. For our listeners, open source models are publicly available models that companies can build on top of and modify, while closed source models are more tightly controlled by the company that created them. There are valid business models on both sides, by the way. So you actually both mentioned models, and we know model selection is a focus area for companies, but I'd like to step up a level here and stay with differentiation. And I also want to bring in data. Of course, there's the data that built these foundational models, but there's also first-party data that companies have acquired through their application and working with their customers. So when it comes to where companies put their focus on finding differentiation, when and how do these factors enter into the picture?

[00:07:48] Speaker B: I would highlight David's point: it depends where their current status is. Have they deployed machine learning models before or not? Have they used non-deterministic systems using their own data before or not? Have they built that muscle of deploying these models and monitoring their performance over time or not? If they have, then yes, they can start with bringing their own data in and fine-tuning some of these models, or start with some sort of vector database and some sort of retrieval augmentation. But if they are just starting, most likely they shouldn't start with fine-tuning or starting from scratch, like building some of these large language models. Most likely they have to start with some sort of integration with one of these APIs or large language models. They can start with the Googles or OpenAIs or Coheres of the world, and over time bring their own data in. Then they can come up with the right design, getting more feedback from their customers and collecting the right data, and for the next version of the product they can actually fine-tune these large language models and make them more customized for a specific problem or specific domain. Our recommendation is normally crawl, walk, run.

[00:09:06] Speaker A: That's exactly what I was thinking about: that you're running with your own data, but you're not ready yet. You've got to have the strategy, you have to have the base knowledge. I love that you talked about where they are today. I think what's interesting is we talked about it affecting everybody. I also love that you talked about the rise of vector databases, and I know it's a hot topic in Gen AI, so I thought I'd explain it just a bit before we move on to David. Large language models like ChatGPT have a limit on how long prompts can be. So if you're bringing a lot of information in, feeding it into ChatGPT and asking it to do some work with it, context can be lost just because of the length. A vector database can store large amounts of information, which can be broken into smaller chunks, and then, based on the question or the prompt, the vector database actually retrieves the most relevant chunks.
Let's call them the paragraphs that contain the answer somewhere. It then feeds that information to ChatGPT along with your question and prompt. The result: you get higher quality answers, because both the question and the relevant context have been provided to ChatGPT.
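
To make the retrieval flow the host describes above a bit more concrete, here is a deliberately minimal sketch. The word-overlap scoring is a toy stand-in for a real embedding model and vector database, and the documents, question, and prompt wording are invented for illustration; nothing below is tied to any specific vendor's API.

```python
# Toy sketch of retrieval-augmented prompting: split documents into chunks,
# score each chunk against the question, and prepend the best matches to the
# prompt. A real system would use an embedding model and a vector database
# instead of this bag-of-words overlap score.
from collections import Counter

def chunk(text: str, size: int = 50) -> list[str]:
    """Split a document into roughly `size`-word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question: str, passage: str) -> int:
    """Crude relevance score: word overlap between question and passage."""
    q, p = Counter(question.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most relevant to the question."""
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the model."""
    joined = "\n\n".join(context)
    return f"Answer using only the context below.\n\nContext:\n{joined}\n\nQuestion: {question}"

if __name__ == "__main__":
    docs = ["..."]  # your first-party documents would go here
    all_chunks = [c for d in docs for c in chunk(d)]
    question = "What did the customer ask about in their last ticket?"
    prompt = build_prompt(question, retrieve(question, all_chunks))
    print(prompt)  # in practice, this prompt would be sent to an LLM API
```
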
It's always been the case that AI adoption is a process. Companies define the right teams and the right use cases. They choose the technologies. With Gen AI showing up everywhere, there has to be even more urgency to get this right. So, David, we recently ran an AI bootcamp to help our customers learn together, and we were able to share our perspectives and experiences. Can you talk about what you saw during the bootcamp and how it helped companies that were maybe at this early...

[00:10:37] Speaker C: Crawl stage, I think across the board.

[00:10:40] Speaker D: We saw really high levels of interest. So just to maybe give a little bit of context, Georgian organized and ran a bootcamp in June that was mostly for companies in our portfolio. And it was split between educational elements, where we went through some of the foundational technical expertise that the teams would need to build, and then we gave some time for each team to develop their own proof of concept or whatever they wanted to develop. So that was the boot camp. And I think that there was a really, really strong reception to the educational component, because the baseline understanding of how to work with these models, how to integrate them into kind of a semi-production-grade system from a data perspective or from a model input-output perspective, wasn't there across the board. And it goes to Pari's point about when you're starting off with that crawl stage: you don't necessarily need to be fine-tuning on your own data or to have this really robust kind of instrumentation or pipeline coming in, in a way that previously was really important for training these models or working with machine learning. Now, we saw a lot of teams in the boot camp, and in all of the work that we do with portfolio companies, get a lot of mileage out of good customer knowledge, like a good understanding of what the customer pain point is, and then very targeted ways of working with these models to help them learn the context and then provide results that made a difference for the end user.

[00:11:59] Speaker A: Now, what do you think in terms of the boot camp? Did you see one challenge more than another?

[00:12:04] Speaker D: We had 22 participating teams. I think at the end, 19 of those built technical projects or proofs of concept that they demoed at the end of the week. So an amazing amount of participation, and across that group, a lot of breadth in terms of what they worked on and what challenges they faced. I do think there are a couple of themes that probably applied to a lot of the different teams. One is, with these large language models, they can be really good at getting to kind of 80% delivery or 80% quality, where they're handling most of what you throw at them pretty well. But then really getting to an excellent user experience or excellent customer experience, where you're tackling that long tail of edge cases or problematic questions or contexts that they might not be ideally suited for, that can take a lot of work. Solving that long tail of edge cases, that's one thing.

[00:12:55] Speaker A: Since we recorded this, we recently completed our second boot camp, and we dived deeper into the gen AI development process and brought a different set of skills to this process. Stay tuned for more details on that in a future episode. Now, you've got to have your eye on the prize and think about what those differentiators are. So as you work with customers and you think about it, how do you help them find and think about what might be a differentiator for them?

[00:13:18] Speaker B: Thinking about differentiation starts with finding the golden use case. The golden use case means that, because there are so many opportunities now to use generative AI as part of your operations and as part of your product offerings, it's very important to find the one use case that can help your customers the most and that is very aligned with your core value proposition.

[00:13:42] Speaker A: It's funny that you say that, because if you go back to some of our early theses that we wrote, we talked about evaluating new technology in terms of impact to your customers and revenue to your business. There's nice-to-haves, there's needs, but when you want to get down to business, you want that upper right-hand quadrant where you could really help. It sounds like that's exactly where your head is at.

[00:14:01] Speaker B: That's why I said it's kind of similar. Still, most of the principles are valid.

[00:14:06] Speaker A: So, Pari, the thought of prioritization and finding the golden use case really resonates with me. So if you look at our bootcamp participants, what roles were particularly important to help keep this kind of focus, and how did it play out?

[00:14:20] Speaker B: We had product managers, and as David mentioned, one of the goals for that bootcamp was helping our companies to build the muscle of leveraging these technologies. So we didn't push so hard to get to that level of strategic thinking to really pick the best use case. Our goal was helping them to pick a valuable use case and helping them to experiment, to run some experiments using these technologies and build that muscle of how to leverage these technologies. We also thought that as they go back to their companies and showcase and demo what they have built, it's going to help the strategic thinkers to understand what might be that golden use case. Because if you don't know what is possible with these technologies, it might be harder for you to identify that golden use case.

[00:15:14] Speaker A: Before we started recording, we talked about children or grandchildren, like we always do. I want to stay with this crawling and walking metaphor. How do you define the different milestones of the crawl, walk, run stages for Gen AI adoption that we use at Georgian?

[00:15:28] Speaker D: I think there's a lot of learnings as we work through this new paradigm about what the different milestones are.

[00:15:34] Speaker C: So generative AI is obviously a pretty nascent field. There's been a lot of innovation over the last several months, and things are still changing rapidly. But at the same time, we're trying to form a perspective on some of the patterns that we're seeing in how companies pursue these initiatives and what types of things they're trying to build or activities they take on to support this work. And the maturity framework that we're starting to coalesce around has three stages. We're referring to them as the crawl stage, the walk stage, and the run stage of generative AI maturity.
At the crawl stage, we're primarily referring to the necessary work of thinking about the most valuable use case you might apply generative AI to, or thinking about the strategy that you need to develop to support incorporating this type of functionality or this type of technology into your existing product or environment. And really, the goal here is to identify the most impactful way of adding value for customers using some of this new technology, like large language models. So that's the crawl stage: use case identification. The walk stage is where you've started to build technical solutions, products, or features with this technology, and you're starting to put that into production. So maybe you start with a proof of concept that illustrates that you can solve the use case that you want to solve with this technology. And then we see companies at this stage, this walk stage, starting to put this into production. And often these are relatively simple solutions, right, where they're doing single- or few-shot prompting, and they're working to tune the prompts that they're supplying to these underlying foundation models so that they can iterate really fast. And that's the walk stage. And I think we're seeing a lot of examples of different use cases at this stage. The run stage is something we're still trying to wrap our heads around, because it's evolving quite quickly. But here we're talking about more cutting-edge applications or more sophisticated applications that still try to solve a core use case. But maybe now we're working on fine-tuning the foundation models ourselves, or thinking about the challenges of deploying these models at scale within an enterprise software environment. And so that run stage is really the highest level of maturity from a generative AI perspective that we're seeing at the moment.
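
As a rough illustration of the single- or few-shot prompting mentioned above for the walk stage, here is a minimal sketch: a couple of worked examples are placed directly in the prompt so the model can follow the pattern. The ticket-classification task, the example text, and the call_llm placeholder are all hypothetical and not tied to any particular provider's API.

```python
# Rough sketch of few-shot prompting: a few worked examples are placed in the
# prompt so the model can infer the desired format. call_llm is a placeholder
# for whichever foundation model API a team actually uses.

FEW_SHOT_EXAMPLES = [
    ("The delivery was two weeks late and nobody answered my emails.", "negative"),
    ("Setup took five minutes and support was fantastic.", "positive"),
]

def build_few_shot_prompt(ticket: str) -> str:
    """Assemble a prompt that shows labelled examples, then the new input."""
    lines = ["Classify the sentiment of each support ticket as positive or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Ticket: {text}", f"Sentiment: {label}", ""]
    lines += [f"Ticket: {ticket}", "Sentiment:"]
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    """Placeholder for a real foundation-model call (OpenAI, Google, Cohere, etc.)."""
    raise NotImplementedError("wire this up to your provider's API")

if __name__ == "__main__":
    prompt = build_few_shot_prompt("The new dashboard keeps crashing on login.")
    print(prompt)  # iterate on the prompt wording here, then send it via call_llm
```
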
[00:17:44] Speaker A: So what do you think about finding differentiators and how you help our companies find that differentiation? It sounds to me like, and I love your answer, Pari, when you're later and you're running, you can begin to really do fine-tuning and bring more data to it. So, David, Pari talked about the need to build muscle memory for leveraging these technologies. So how do we partner with our customers in a way that we can support them as they start to adopt gen AI tech, but then leave them in a place where they can move between the crawl, walk, run stages on their own?

[00:18:14] Speaker D: Each challenge is kind of new, and we work very closely with the team to scope things in a way that makes sense for them. We're doing development work, we're doing technical work with these teams, learning with them, helping them contribute code. And the ultimate goal is to make sure that they can do that in the future on their own.

[00:18:30] Speaker C: But the way to do that is...

[00:18:31] Speaker D: Like get your hands dirty with the materials, I would say, and the technical work. So very much a player-coach type of model.

[00:18:38] Speaker A: We clearly feel the sense of urgency, and clearly our customers do as well. But this technology is moving so fast. Is this like any other technology, where the work is good and sticks around for a while, or is there a different thought in how we must future-proof what we all do together?

[00:18:53] Speaker E: By taking the trust-first approach. What does it mean? It means thinking about the quality, the quality assurance, some of these kinds of trust challenges of these large language models and gen AI technologies, and taking a proactive approach. And again, we have our trust thesis, we have our principles: how you can be very proactive in communicating what kind of problems you are approaching, what kind of data you use, how you use data, how you protect your customers' data, how you mitigate bias, how you prevent hallucinations, and how we can bring more consistency and reliability to these kinds of technologies by building on top of that, by putting some guardrails around the output of these models, and by making sure that you earn your customers' trust. I think that's a way to build on top of that trust over time and introduce more of these technologies to...

[00:19:50] Speaker B: Your workflows and core offerings.

[00:19:52] Speaker A: I'm glad that you brought trust into this one. You really can't effectively run a business without understanding risks and rewards. And we're seeing companies that are building their products and driving their companies with purpose. But I really appreciate the work that you are doing to help companies build products for differentiation that leverage these new technologies. This is an exciting time. So much more to come. Thanks to you both. For Georgian's Impact Podcast, I'm Jon Prial.

Other Episodes

Episode 63

November 25, 2019 00:24:00

Episode 63: Cracking the Data Science Code

Everyone knows that data is essential for any modern business. It’s critical for targeting your customers effectively, extracting important insights, and automating business processes....


Episode 12

July 24, 2020 00:21:35

Scaling with AI

As an early or growth stage company, scaling is always top of mind. Skills are scarce and expensive, so machine learning and AI have...


Episode 35

November 25, 2019 00:19:30

Episode 35: Let's Chat About Conversational AI

At Georgian Partners’ recent annual portfolio conference, Jason Brenier hosted a panel discussion with a variety of experts representing the full range of the...
