Commercializing AI with Vector Institute’s Cameron Schuler

Episode 17 | October 21, 2022 | 00:30:18
The Georgian Impact Podcast | AI, ML & More

Hosted By

Jon Prial

Show Notes

In this episode on commercializing AI, we speak with Cameron Schuler, a key contributor to AI's game-changing prominence. Cameron is the Chief Commercialization Officer at the Vector Institute and is dedicated to advancing the transformative field of AI. 

 


Episode Transcript

[00:00:04] Speaker A: Hi everyone, and welcome to the Impact Podcast. I'm your host, Jon Prial. There's no question that artificial intelligence has been a game changer for businesses.

[00:00:12] Speaker B: But this wasn't always the case.

[00:00:14] Speaker A: It's been the result of decades of effort from researchers and industry alike to unlock its potential. And of course, there are many people around the world constantly finding new and exciting use cases for the technology. Today's guest is among the many people helping to make that happen. Today we'll be talking with Cameron Schuler, Chief Commercialization Officer at the Vector Institute. The Vector Institute is dedicated to advancing the transformative field of AI with top researchers and experts from around the world. He'll be chatting with us about what it takes to build an AI ecosystem, the key to commercializing AI discoveries, and even how human behavior and progress can impact AI development.

[00:00:56] Speaker C: Cameron, welcome. You were also part of another research institute, Amii, and I'm really looking forward to your insights today. So just tell us what you're working on right now and how you got there.

[00:01:08] Speaker B: I originally got into this field in 2008, which was at the tail end of the AI winter, when it was considered that AI had no commercial value. I thought, that's a great place to go to advance your career. But as it turns out, I happened to be in a pretty good place. Where I sit inside of Vector is as the head of the industry innovation team, and our role is to be the interface with our industrial partners, which include enterprise companies, scale-ups that are leading edge in the AI domain, as well as SMEs, Canadian companies that may not be AI-first companies. Our role is to impact them through what we call our three Ts: technology, talent and training. Under the technology piece, it really is experiential training and experiential learning, so being able to roll up your sleeves and work on new methodologies. On the talent side, it is exclusive opportunities for our partners to recruit newly graduating talent, but we also have a highly curated list of people who are already in industry and might be looking for new opportunities, and that sits within our Digital Talent Hub. And on the training side, it really is keeping people on the leading edge of AI.

[00:02:16] Speaker C: Well, you've been around a while, and I like that you've got this focus with the partnerships on commercialization. I mentioned AI is killer, ubiquitous, but there's a phrase that I've seen all too much of and I don't really like, called "take X and add AI." I mean, it's a naive view of data and business processes. But I'd like you to talk about AI and its predecessor, machine learning: where it was and where you see it today.

[00:02:38] Speaker B: So, just for context, I tend to use AI and machine learning synonymously. They are different things, but I will use them synonymously, so I may not be referring to any one particular thing. The field is interesting because we were in a position where the field was so small for so long. Ten years ago, you could go by first name globally and you knew who you were talking about in the field. And so what happened was research problems became industry problems, because there was a lot of research in very specific areas, things like overfitting.
So it's basically training a model on your training data, and then when you put it in the real world or run more data through it, the model doesn't work. And there was a lot of research going on in those areas. But again, we went from a small group of highly talented people, to industry saying, hey, there is value here, and starting to take people out of academia, which was a problem, and then those problems, not having been solved, became real-world problems. And so ultimately, when we take a look at AI right now, it does have broad industrial use, but there's also the potential for companies and individuals to use it in a naive way that may not provide the best outcomes.

[00:03:47] Speaker C: I often think that AI is a little different from other tech. The constant refreshing of data, the need to refresh models, is a different space than, I don't know, an algorithm that debits my bank account for $100 when I take out $100. That's probably the same algorithm that was written in 1960. How does that work for you?

[00:04:09] Speaker B: So, that is a very good question. If you think about a pandemic or a financial crisis, which are both recent history, who knew that we'd have a shortage of toilet paper, right? That was certainly an unexpected thing. So if you think about AI models, you take historical data and you train models to then predict the future. And humans have done this forever, right? It was chicken bones that they'd throw around and look at to predict the future. So it's a pretty natural state for us. What we recognized during the pandemic is that whether changes are dramatic like that or subtle (let's say you're a retailer and your demographic over a three-year period is now five years older and tastes have changed), you need a way to go back and actually validate your model. That would be something called dataset shift or model drift, and there are methodologies you have to deploy. So, yes, it is more complex to deploy AI models, but ultimately it has a lot of the same characteristics of any good project plan, right? Think about what problem you're trying to solve, what your baseline is, and how it fits into the rest of your business problems, because technology is quite commonly a hammer looking for a nail. So we approach it in a very different way, which is: what type of business problem are you trying to solve, and how is AI a good methodology for solving it? Some of the things it does well are pattern recognition, and some of the language models, as they've evolved, have been pretty incredible. But ultimately it is starting out with what problem you're trying to solve and what the best methodology is for doing so, not just using AI for everything.

[00:05:37] Speaker C: So it's really not a shift from research to development per se. You're actually starting with development and commercialization, what's required, and then backing into the tech that might be required. Is that a fair way to put it?

[00:05:50] Speaker B: Ultimately, when working with industry, it has to have a practical side. Our Vector sponsors, we have 29 large enterprises, and there's a piece of what's in it for them. So how do Canadians adopt technology? You talked about our vision up front, but there's also a good-for-Canada component, right? It's about building an ecosystem, and both those pieces are important. And so we do focus an awful lot on how Canadian companies take advantage of what's been built here, because that is important.
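To make the overfitting and dataset-shift ideas Schuler describes above a little more concrete, here is a minimal, illustrative Python sketch. It is not code from Vector or from the show; the synthetic data, the choice of model, and the warning thresholds are all assumptions made purely for the example.

```python
# Illustrative sketch only: two simple checks for the failure modes discussed
# above -- overfitting (great on training data, poor on new data) and
# dataset shift / model drift (production data no longer looks like the data
# the model was trained on).
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "historical" data the model is trained on.
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# 1) Overfitting check: a large gap between training and held-out accuracy
#    suggests the model memorised the training data rather than generalising.
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
print(f"train accuracy {train_acc:.3f} vs held-out accuracy {test_acc:.3f}")
if train_acc - test_acc > 0.05:  # threshold is arbitrary, for illustration
    print("warning: possible overfitting")

# 2) Drift check: compare each feature's distribution in "production" data
#    against the training data. Here the production data is deliberately
#    shifted, like the retailer whose customer base aged five years.
X_prod = rng.normal(loc=[0.8, 0.0, 0.0, 0.0], size=(2000, 4))
for j in range(X.shape[1]):
    stat, p_value = ks_2samp(X_train[:, j], X_prod[:, j])
    if p_value < 0.01:
        print(f"feature {j}: distribution has shifted (KS p={p_value:.1e}), revalidate the model")
```

In practice, the same two checks, a train-versus-holdout gap and a periodic comparison of production data against training data, are what turn "go back and validate your model" from a slogan into a routine.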
But ultimately it really is about ensuring that Canadians can get the benefit of decades of investment in AI from when nobody cared about it. How do we make sure we retain that, and that Canadian industry benefits from it? And it does start out with: what are your overall objectives, and where is AI a good fit for that?

[00:06:33] Speaker C: Cameron, tell me about the Pan-Canadian AI Strategy, please.

[00:06:38] Speaker B: So Canada was the first country to have a national AI strategy. And where this came from is, when it was recognized that you can get commercial value out of AI again, after decades of thinking it's a dead field, industry was making it very attractive for people to leave academia. And because the field was so small, pretty soon we were going to end up losing the advantage that we had as a country. And places like Toronto are expensive to live in, so if you're an academic, you may not be able to afford to live here. So we developed the Pan-Canadian AI Strategy to ensure that we could take advantage of that. It really was about ensuring that we had the next generation of talent, growing that talent, and ensuring there was a large supply of it. And that's worked, in terms of companies relocating to the Greater Toronto Area, to ensure that there is a supply of...

[00:07:28] Speaker C: ...talent, and a reason to come here. So we'll talk about the end users a little bit. There's so much backdrop in this tech space: privacy, tracking, deepfakes, the proliferation of untruths. I think we'll stay focused on the end user and giving them a safe and reliable application that adds value and minimizes risks. One of the most prominent topics I saw from the Vector Institute is what you call, and I'm excited, it has a hashtag, trustworthy AI, which seems to be obviously both a research topic and a business practice. So before I dig into the piece parts with you, how important do you see this as a focus area?

[00:08:04] Speaker B: I think it's critical. Unfortunately, a lot of humanity is informed about AI through science fiction, and I'm pretty sure your toaster is not going to grow legs and dive into the bathtub and electrocute you, right? So we look at understanding that the AI world is about teamwork, it's about humans. If you think about the positive side, things like precision medicine, something that's completely aligned to you, not something that's aligned to some other genetic characteristics that are nothing like you. If you think about learning, right, should you really spend a lot of time learning things you already get, or should you spend more time on the things you have a challenge with? Those are the types of things where that personalization, I think, will change the world in the future, and is doing so already. So that's important. But that trust in AI is the hard piece. When you think about harms from AI, it can be as simple as something that is biased in nature because your training data was biased, and it tends to be because humans put that together; it could be something that just gives you a bad answer; or it could be something like the online examples we've seen where an AI is put on Twitter and it starts to mimic some really bad traits of humanity. So I don't think we get anywhere without humans saying, this is making my life better. And part of that is ensuring that there is a trust component to this, that there's been enough thought and enough effort put into it to make sure the outcomes are the right outcomes.
[00:09:29] Speaker C: And we've got some components that you've got in trustworthy AI, so I want to work through them. The first one is fairness. I'd like to start by asking you if you see a difference between fairness and, you already mentioned potentially biased data, but between fairness and bias.

[00:09:43] Speaker B: There are going to be proper definitions, but let's talk about it in a practical sense. If you think about where they've found that hiring practices are strongly geared toward people that look like me, right, that is clearly unfair, and I would say that's a bias issue as well. Fairness to me would be giving everyone an equal opportunity. Bias would be discriminating against people, if that makes sense. And if you take a look historically, if a consumer lender wouldn't lend to people in particular postal codes, that would definitely be a bias, right? But it would also be considered not to be fair. So there are practical definitions around that. But when you think about what we really want: we want everyone on an even playing field.

[00:10:28] Speaker C: And do you feel a challenge to the way you're describing that? To me, there's a human side of bias, and all of this is very human-oriented. It's been around well before technology, yet we're evolving to an extremely technical set of solutions. Are there ways to bridge that?

[00:10:44] Speaker B: I think there are, though some of these areas are unsolved. When you think about AI, it's probabilistic. If you had a probabilistic keyboard, maybe one out of every 100 times your A key would give you something else, right? That's not particularly practical. So AI is trying to make good decisions in ambiguous environments. Think about an autonomous system, say an autonomous car. When we drive, we may not generally think about where to put the steering wheel to get the car in the right spot; it's fairly integrated in how we function. But if you think about an autonomous car, you'd have to say: what environment am I in? Is there a school zone here? Are there cars on the side? Am I on a highway? Each one of those will be a different area in which it operates, and from there it will have to make a decision on where to put the car on the road. So those are some of the challenging ways in which, when I say engineering things, you need to develop things that, as humans, we treat as fairly innate. Another part would be, when we think about data, we interpret data. I have this on my desk: is this a cup or a glass? The answer could be both, right? So depending on the lens that you're looking through in that moment in time, you may define it a different way. Those are just some fundamental challenges of where we come from as humans. So that bias in the data would really start out as: I think this is a cup, you think it's a glass, and all of a sudden we've diverged somewhere.

[00:12:07] Speaker C: You mentioned self-driving a little bit, which makes me think, and you talk about steering, where you steer. There are companies that are going to deliver a self-driving car without a steering wheel, which has to demand a great deal of trust, compared to sitting in a self-driving car where you still have a wheel in front of you. So how it gets delivered is going to be interesting over, let's say, the next ten years.

[00:12:29] Speaker B: Agreed.

[00:12:30] Speaker C: Talk a little about impact.
You talked about, I don't want the toaster with legs to jump in the bathtub. I may get a recommendation for a toaster to buy; that's probably a great AI solution. Someone says, here's a bunch of features, and here's the toaster you want. Then again, if it gave me a bad toaster, I'll be all right. I'm not getting the toaster with the legs, of course. But then what about a medical diagnosis, perhaps? There I get a little more concerned about the impact of getting things right for good end-user experiences. Thoughts about the impact of the AI?

[00:13:03] Speaker B: So there are two things I want to talk about in terms of that. One is that humans are imperfect, right? Your doctor can make a misdiagnosis; that can actually happen. But think about it in a different way. When I think about how the FDA or Health Canada would approve medical devices that have AI in them, that was a challenge, right? Because there's that cause-and-effect component, the transparency: how much do you trust the way it works? That's a challenge. One of the first things I saw was actually around cardiac imaging, and it was neat in that it said a human would normally take an hour and a quarter to interpret this particular image, and this machine can do it in 15 minutes. And the hurdle wasn't that the machine needed to be perfect; the hurdle was that the machine needed to be as good as or better than a doctor. The very interesting thing about that is: do you want your doctor to spend an hour and 15 minutes diagnosing something that may or may not be correct and then spend five minutes with you, or would you actually like the doctor to have close to an hour with you to talk about the next steps? Right. So that's the human aspect of it. Another example would be that there's stuff that's going to be fairly straightforward. A broken leg is a broken leg. Now, someone's probably going to disagree with that, but ultimately, for things that are very obvious in imaging, do you really need a doctor to look at that, or do you need a system? Then think about augmented intelligence, or helping humans do their job better, for the things that need human judgment because the machine can't decide. Ultimately, you could be more efficient by saying, we really need a doctor to take a look at this, because we're not sure what it is. Maybe we need some biopsies. Maybe it's a far more complicated case, because nobody's seen it before. So think about using people's time more efficiently and making it more human, getting more time with the doctor.

[00:14:45] Speaker C: And that leads us to explainability. And I'll definitely conclude, hearing you, that there are things that don't necessarily need to be explained. A broken leg is a broken leg. Image processing is very data-rich. I guess the story I've always heard is that it's almost impossible to write an algorithm, a true algorithm, to recognize a tree on the side of the road versus a person on the side of the road. Yet I could feed a neural network zillions of images and absolutely tell the difference between a tree and a human, and I don't need a lot of explanation of that. But there are times when I think we do need explainability. So, does it have to do with the final impact of the decision? What are your thoughts on when explainability is more important than in other cases?

[00:15:27] Speaker B: I think you brought it up: it's risk. Right. And let's go away from learning systems and just talk about intelligent systems.
If you were going to get on an airplane and the pilot said, you know what, the computer is not working, but let's go for it anyhow, you'd be pretty uncomfortable, and that pilot probably wouldn't be allowed to leave. But what they did find is that if the pilot is too engaged and an emergency happens, they're fatigued, and if the pilot is not engaged enough and an emergency happens, they're not prepared for it. So there's an ideal state in between those two. But ultimately, technology has made it safer. The number one cause of crashes, of people dying, is still human error, but flying, especially on commercial aircraft, has gotten safer over the years, just due to protocols, making sure that there's trust, making sure they're well thought out. It doesn't always turn out the way you want it to, but ultimately it's incredibly safe. So if you think about that intelligent system, it's actually made our world a much better place in a very risky environment. And so now it comes back to: is this something that's going to operate on me as a human, or something that's going to potentially impact my life, or is this something where I'm going to buy something and have to return it? That risk profile is going to be pretty critically important to the explainability of it.

[00:16:42] Speaker C: It's interesting, because it makes me understand this better; I never really thought about it. I know that there are rules about how long pilots are allowed to work and not work, or how long truck drivers can work or not work, and it really comes down not to the driving of the truck, but to handling those edge cases of crises, making sure that they're fully prepared. And obviously an hour on and an hour off might be optimal, but it's not a good business decision, right? But I guess subjectivity and objectivity are part of the name of the game here. I'll think about judges issuing sentences. I mean, there are guidelines, right, and those are very objective, but at the same time, there's a clear subjective side of things. The judge is going to listen to what the defendant has to say. So there's an element where subjectivity is important, and I don't know that we'll ever replace that. I think we maybe want to augment, but not replace. Same thing for, say, college applications: are essays purely a subjective thing, or are they becoming objective? What are your thoughts on how this gets used?

[00:17:42] Speaker B: Those are hard things, right? If memory serves me correctly, Napoleonic law was trying to have a very binary outcome, and humans just don't work that way. There are extenuating circumstances, and depending on the country you live in, the legal system is very different. Well, we have two different legal systems in Canada; Quebec has a different one than the rest of Canada. But there are fundamental things that we can look at, and then it also ties into values. So I think there needs to be a case where we have an appeal system, too. If you feel a judgment wasn't fair, you go back. So you could put something in that's AI-based, but then you have to go back to: is this biased? Right? If we have an inherent bias toward a particular group of Canadians, or in any other country, who are incarcerated more often, then depending on how you train your model, it may incarcerate more of them just based on characteristics that you may not be taking into account. And it doesn't mean that humans aren't biased as well.
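Schuler's warning about models that penalize one group more often because of skew inherited from historical data can be illustrated with a short sketch: train a model on data that carries such a skew, then compare how often it flags each group. Everything below, including the synthetic data and the 1.2x disparity threshold, is invented for the example and is not from Vector or the show.

```python
# Illustrative sketch: a simple "demographic parity" style audit that checks
# whether a trained model flags one group far more often than another.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, size=n)  # two demographic groups, labelled 0 and 1

# Historical data in which group 1 tends to score higher on the feature the
# label depends on -- the kind of inherited bias described above.
x = rng.normal(size=(n, 3)) + group[:, None] * np.array([0.4, 0.0, 0.0])
y = (x[:, 0] + rng.normal(scale=0.8, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(x, y)
pred = model.predict(x)

# Positive-prediction ("flagged") rate per group.
rates = {g: pred[group == g].mean() for g in (0, 1)}
print(f"flag rate, group 0: {rates[0]:.1%}, group 1: {rates[1]:.1%}")

if max(rates.values()) > 1.2 * min(rates.values()):  # arbitrary threshold
    print("warning: decisions fall much more heavily on one group; "
          "investigate the training data before deploying")
```

A disparity like this does not by itself prove the model is unfair, but it is exactly the kind of check that forces the "is this biased?" conversation before a system is put in front of people.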
And we've seen cases of that in the Canadian legal system. So I think there's an opportunity to do that. Again, when we wrote something called the Pan-Canadian AI Strategy, we made sure there was an AI and society piece in it: how do you actually make sure humans are thought of when you're doing these things? So I think there are opportunities for AI to participate in some of these, but taking the human out of the loop, and the judgment piece out of the loop, I think, has risk associated with it.

[00:19:03] Speaker C: I really like the fact that you say there's an appeals process. I know that if you get rejected by your insurance company, at least in America by a health insurance company, there's an appeals process. If we do more autonomous-type systems and there is a mechanism in place to question them, that will force the explainability; well, that'll force the research into whether there are biases or not. So we could get there with a good, broad, holistic approach to things. So I think I'm encouraged. That's good.

[00:19:34] Speaker B: If you think about our justice system, too, we don't want people wrongly incarcerated. That's a pretty big thing, which means there are some people that are guilty that actually get away with it. And so it really comes back to values and what you really want out of the system in which you live.

[00:19:49] Speaker C: And another one that I'm fascinated by, and I had not really spent any time concentrating on it: we talk about safety, protecting the physical, which I also found interesting because, to me, buying a toaster is not physical. It's a purchase; it's still a human action. Most of these things are actions. A self-driving car obviously implies there's some physical side to it. And now I'm going to love self-driving cars, because I'm about to say you could have autonomous weapon systems. But protecting the physical gets kind of interesting, and I hadn't really thought much about it. Can you talk a little more about that, please? Yeah.

[00:20:22] Speaker B: So let's think of it in an industrial environment; let's think about robotics in an industrial environment. You could have a robotic arm that could truly operate in a hemisphere, right? So 360 degrees around and 180 degrees. But in reality, it doesn't need to do that, because there might be humans in that space. The neat thing about a factory is you can reduce the number of variables. What I mean by that is you're not likely going to have a helicopter crashing out of the sky, coming through the roof and hitting it to cause some strange thing, whereas if you're out in the real world, all sorts of other things can happen. So you reduce the number of variables, and in a case like that, you could certainly constrain it to a point of: as long as you're operating in this particular area, it can do what it needs to do; if it goes outside of this, then it needs to shut down. So there are lots of ways you can approach it, and again, it comes back to risk. If it's a welding robot, what's it doing, welding a foot off of where it should be? So you can put a bunch of controls in place where it would say, this seems to be operating outside of this parameter, therefore you need to do something. Another example would be that you can have a very complex system, and every one of the interactions in that system is going to have parameters in which it operates: sensors, voltage, pick whatever other thing.
And if you sit there and say, all right, we know all these things together work well, can we actually optimize within that to get a better system? So you can look at the other side of it as well: how do you take complex systems and get better outcomes from them?

[00:21:44] Speaker C: So we don't really need to worry about Asimov's three laws of robotics, you think? I'm glad you brought that up.

[00:21:54] Speaker B: If somebody says that's really far off, I think that's a problem. I think we need to be able to have these conversations, and the conversations are about whether we think about these things. Humans traditionally have. It may not always feel like it, but the world gets better as time goes on. We have hiccups along the way, but go back 100 years compared to where we are today; I'd much rather be living today. So ultimately, I think it is important to have these discussions, because if you don't, people are going to come up with their own conjecture. So being part of that voice, and that was where the AI and society piece came from: we want to have a voice, we want to engage people, we want to make sure that it's well understood that we are thinking about humanity and not just the systems themselves. I think it's incredibly important.

[00:22:35] Speaker C: And one direct impact that I thought of, that's not quite robotics, but it's an unbelievable human impact. I live in Vermont, and after winter we end up with mud season, and most of our roads in Vermont are not paved. The GPS systems, as people are driving up, do not realize they're putting them on unpaved roads, and we have had cars stuck, and people stuck overnight, and near-death experiences and things. And that is not something that was written into any GPS algorithm. I don't know that any map says dirt road versus solid road, and that after November or December you should not be on that dirt road, right? Or, if it's rainy, you should not be on that dirt road. So there's a good safety one, I think, as well, for the physical.

[00:23:12] Speaker B: Well, I agree with you, and that comes back to: how do you design things to begin with? What assumptions are you making? I have a finance and economics background, so you would have, all things being equal, what does this look like? And if you say, it's a road, therefore it's like any other road, is that the right assumption? So again, that planning piece up front. A good project generally is well thought out: you spend 10% of your time building your plan, building the team norms and all those other things. AI is no different. You really have to spend a lot of time thinking up front, and then be able to test for and understand unintended consequences.

[00:23:47] Speaker C: Excellent. So the last one I want to hit from this trustworthy AI is privacy. It's been talked about before, but I'd just love to talk about it with you: anonymization isn't necessarily sufficient. So talk to me about why that's the case, please.

[00:24:02] Speaker B: A lot of things come from inference, and so anonymous data may not actually be anonymous. And if you think about the values that we have, certainly in Canada and the US, privacy is way up there on the list. Other countries, not so much, but for us it matters. So with anonymization, you could actually release enough information that somebody can turn around and figure out who that is, right? Just a postal code in Canada or a zip code in the US, plus a couple of different characteristics, and it's, I know who that is, that's Jon, or that's Cameron. Right.
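Schuler's point that "anonymous" data may not actually be anonymous can be shown with a tiny k-anonymity style check: count how many released records share each combination of quasi-identifiers such as postal code, birth year and occupation; any combination held by only one record is effectively a named person. The records below are fabricated purely for the illustration and have nothing to do with Vector's own work.

```python
# Illustrative sketch: why stripping names is not enough. If a combination of
# "harmless" attributes is unique in a released dataset, that record can often
# be linked back to a real person using outside information.
import pandas as pd

released = pd.DataFrame({
    "postal_code": ["M5V",  "M5V",      "M5V",      "M4C",     "K1A"],
    "birth_year":  [1971,   1985,       1985,       1985,      1971],
    "occupation":  ["host", "engineer", "engineer", "teacher", "researcher"],
    "diagnosis":   ["A",    "B",        "A",        "C",       "B"],  # sensitive column
})

quasi_identifiers = ["postal_code", "birth_year", "occupation"]

# k-anonymity: how many records share each quasi-identifier combination?
group_sizes = released.groupby(quasi_identifiers).size()
print(f"smallest group size k = {group_sizes.min()}")

# Any combination that appears only once is effectively re-identifiable.
print("re-identifiable combinations:")
print(group_sizes[group_sizes == 1])
```

A real release would go further, suppressing or generalizing columns until every combination is shared by at least k people, or using the kind of privacy-enhancing techniques Schuler mentions next.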
And so privacy has to be entered into by design. We do a lot of work on privacy-enhancing techniques, and we work with companies to be able to actually extract data without needing all the other pieces. So again, these are open areas that haven't been solved, but areas that are critically important, so we spend quite a bit of time working on that.

[00:24:51] Speaker C: Do you see some of the techniques and strategies from the finance industry bleeding into other industries now? Is everyone beginning to kind of get the message? My argument, of course, is that finance is ahead of the game. I may be wrong; there's a good argument around that one as well.

[00:25:04] Speaker B: The answer is, it depends. We have things like banks actually not being able to share data. So if you think about enforcement, you can't see both sides of a transaction, because they can't actually talk to each other, right? And so you have to infer: is this a nefarious transaction or not? So if you had a way of not sharing data but sharing enough information, you could figure something out that would be worthwhile. I do think our banks in Canada are quite progressive in terms of how they deploy and use AI; there are lots of rules around how they can do that. And we have large financial institutions we work with. Roughly a third of our enterprise portfolio is finance, whether investment management, insurance or banking. And the things we do work on really are around how we make the world a better place.

[00:25:47] Speaker C: Now, when you talk about regulations, in the States, across North America and Canada, there are regulations or governance. As I think about maybe integrating and thinking across this space of fairness and explainability and safety and privacy, I feel like we're beginning to get into, or broadening, the definition of MLOps, which is a relatively hot term. What's your sense of how that's evolving?

[00:26:10] Speaker B: Yeah, that's a really good one. I think when we think about how companies have approached AI, and I'm not talking about the Googles of the world, right, that are leaders in AI, I'm talking about general industries, some are behind. We have 29 large companies, and they're companies that a lot of Americans, or people throughout the world, will recognize as well. We have roughly 20 companies that are on the leading edge of AI, and a bunch of them are at unicorn status. Then we have a number of small companies, and we talk to people outside the industry. So we get a purview across industries and sizes of company, predominantly Canadian-related, but certainly elsewhere in the world as well. And that gives us an opportunity to see common problems, and MLOps certainly is one of them. If you think about how a lot of companies would have approached this, it's: we're going to do a proof of concept, so we're going to get lots of attention, lots of funding, the data that we want. We're going to produce something, and then we're going to throw it over the fence, right? It's going from that POC to something that is actually in production, and there's a challenge related to that, and some of it is structural. Do you have the right computing? Do you have data governance? Do you collect data one way today and you've changed it tomorrow and nobody knows, right, so you're getting different results out of it? Do you have the right talent in place? Making AI accessible, I think, is really important.
And what I mean by that is there are packages and libraries you can load up in things like Python to be able to use, but do you know enough not to get caught out? When I talked about overfitting earlier: do you train it on a particular dataset and say you're done, but in reality, the next time somebody uses it, it doesn't work in any way, shape or form the way it should? So that MLOps piece is critical, and it is things like what I described earlier around a pandemic, or something where things change over time; that's called model drift, and the changes are either very extreme or they can be subtle. It's having the processes in place to make sure that you actually have models that are still giving you the outcomes you need. Another one would just be your outcomes: can you do something with that? Right, so it's the change management with humans. Do you have the ability to actually go and influence this? I think those are incredibly important things that have to be a focus of this. So MLOps is most definitely one of the next pieces where there's a lot of energy going in, and it's about being able to use AI in a meaningful way.

[00:28:25] Speaker C: One last question. You work with so many different partners and you bring so many different skills to the table, and unless you're a large, large corporation, you're somewhere within the wheelhouse of our audience here. How about collaboration? How do you get companies to work with others?

[00:28:42] Speaker B: Yeah, it's a good question. So Canada has more than five banks, but if you look at our five large banks, they have roughly 90% market share. And so we're not going to do anything that is competitive, for two reasons: one, nobody wants to go to jail; two, it's a bad idea. Like, why would we do that? So it's working on pre-competitive things. It's, again, being able to look across industries; we find common challenges that companies have. And the analogy we would use is that we teach them to fish versus giving them a fish, with AI in place of fish. So for us, we will look at things like, as I described earlier, dataset shift because of the pandemic: here's something that we can help with right away, let's go do something related to that. Or the privacy-enhancing techniques. There's more of a forecasting of what the future looks like: what do we think companies will need in the future? So we'll have large companies working with small companies, but it's a very unique model. We have collaborative groups that get together, which could be competitors, to work on common techniques, and what they take back home they can then build on, and that's how they create their IP out of this. And so it really is a collaborative...

[00:29:45] Speaker C: ...model. Now, I want to thank you, I want to congratulate you. It's so clear that you and the Vector Institute have a true purpose, and I know we're going to hear so much more about this. Thanks for your time. It was great chatting with you.

[00:29:55] Speaker B: Thank you for the kind words. And I will say it is the Vector Institute and not me; I'm only part of the team. But thank you for the kind words.

Other Episodes

Episode 16

September 22, 2020 00:25:05

Creating a Privacy Culture with Spotify's Vivian Byrwa

Leading on privacy means more than compliance and technical solutions. To excel, companies should also foster a privacy culture. Vivian Byrwa, our guest on...


Episode 33

November 25, 2019 00:14:39

Episode 33: Big Banks Aren’t Taking Disruption Lying Down

At Georgian Partners, we usually talk about tech trends that impact the software industry. But in many cases, those trends are also having a...


Episode 18

November 11, 2020 00:21:27

AI Adoption Starts with Product Management

You might think that getting your customer base to adopt your AI product is a sales and marketing challenge. But it starts much earlier...
