Overcoming Barriers to AI Implementation
“This is a No BS session. AI is hard. It's getting better, but it is hard. And we want to make sure to help you find solutions,” began AI Business editor Deborah Yao, chairing a panel on the barriers to implementation at The AI Summit Austin last November.
Exploring cost objections, stakeholder buy-in, and when to build or buy, this 40-minute panel discussion looks at how these obstacles can impact AI implementation and what organizations can do to address these challenges.
Watch to discover:
- How should you communicate the value of projects to business teams and decision makers?
- What’s the best way to work with your internal teams to make your business AI-ready?
- How should companies build trust in new processes and AI from the outset and ensure they’re operating with AI ethics in mind?
Thanks to our panelists:
- Deborah Yao, Editor - AI Business
- Santiago Giraldo Anduaga, Senior Director - Product Marketing - Cloudera
- Matt Robertson, Staff Data Architect - Pure Storage
- Nick Genatone, Vice President, Digital First Engagement Business - Verint
The Transcript:
00:00 Deborah
Welcome to the panel on overcoming barriers to AI implementation. I'm here to introduce our distinguished panel of speakers. First is Matt Robertson, who is a Data Architect at Pure Storage. We have Santiago Giraldo, who is Senior Director of Product Marketing at Cloudera. And thirdly, we have Nick Genatone, who is Vice President of Digital First Engagement at Verint. Welcome, everyone.
00:41 Deborah
So why don't we start by you telling me what your company does, what you do at your company, and also what AI solutions you offer.
00:50 Nick
So, Nick Genatone with Verint. We are a customer engagement company that's been in the contact center space for decades, supporting contact center representatives, and we've expanded into digital engagement. Our business is focused on human-to-human communications, as well as human-to-machine, and then human-to-machine back to human. So we're focused on giving people the opportunity to engage through any endpoint in an automated, AI-driven way. And when the point in time is right, we want to take it to a human, make sure the human on the other end understands the customer journey in front of them, and help orchestrate and solve that individual's problem immediately, so that they have a good overall experience.
01:34 Santiago
Hey, thanks for having me here. My name is Santiago Giraldo, and I lead product marketing for Cloudera. Cloudera is essentially a platform provider. We deliver the world's first hybrid data cloud, which enables portability of analytic workloads, data security, metadata, and, of course, machine learning. We're an all-inclusive platform, soup to nuts: from ingest of data all the way through to how it impacts your business through AI or analytics, we enable that entire lifecycle in an integrated fashion through the Cloudera Data Platform. A little bit about myself: my background is primarily in data science. And while I now lead product marketing, I have a very deep technical background in terms of how the work gets done and what's required in order to actually fulfill those promises that AI has made to your business. Thanks.
02:34 Matt
I'm Matt Robertson, and I'm a Data Architect with Pure Storage. We are a storage company; we sell software and hardware products, and we believe they're the best products on the market for delivering the high-performance data throughput that's a critical component of machine learning and AI. So we work closely with our customers, not just providing our products and services but understanding their workflows and making sure they deliver business value in a timely fashion. We love it when our customers say that after implementing our products, they can iterate faster on their models, they have a better experience, and they drive better business outcomes faster. That's the goal.
03:24 Deborah
Wonderful. So we're going to be a little bit different in this session; I'm going to make it very interactive. So at any point during the conversation, if you have a question, raise your hand and ask it. We want your questions answered. This is a No BS session. AI is hard. It's getting better, but it is hard. And we want to make sure to help you find solutions. A little bit of background: according to Deloitte's State of AI in the Enterprise report, which just came out, 76% of leaders surveyed have fully deployed three or more types of AI applications, and that's up from 62% in 2021. However, 22% of those organizations saw disappointing results despite actively deploying AI, and that's up from 17% last year. In fact, as companies got deeper into AI deployment, they saw what Deloitte calls middling results from deploying AI at scale across the business. So, the three guys before you come from very different, but complementary, industries. And I want to start my question with Nick. Your company offers services closest to the consumer. So what are examples of AI implementations you have worked with in the past?
04:56 Nick
So we've worked in probably just about every vertical area. We've gone from being a virtual recruiter for the US Army, where you can go talk to Sgt. Star, to working with USCIS.gov on immigration, speaking both English and Spanish to help people immigrate into the United States, all the way to the complexities of patient engagement, and even emergency roadside services helping insurers. In those examples, and I'll start with patient engagement because it's obviously a highly critical area and a sensitive subject, when you talk about how you can use AI and automation with those individuals, it's very impactful in their lives. An example would be a virtual nurse in everyone's back pocket. We started early, doing this by managing when people take their medication and how to take their medication; a lot of different therapies are pretty difficult to follow and adhere to. People have to find out where to take their injection, and we're going to rotate that for them. But we're also going to be there and available to answer questions. Not everybody can afford to have a nurse in their back pocket, but we can train systems to have those conversations, get them to the right support, and obviously guide them through that process. In other areas, if you're talking about roadside service for helping an insurance organization, you're going to take multiple different engagement points and situations to help an individual. An example could be: you break down on the side of the road, and you need to call in and get help. Well, you're not always going to get right through to a contact center person or a live person to solve something for you. So what we can do is automate that interaction for you; we can answer that phone. And we're going to answer that phone knowing who you are, based on the information that came in through the phone call. Once we know who you are, we can better serve you.
But then we're going to answer with a bot: what do you need help with? And once we find out that you're broken down and need a tow truck, we're going to find that information out, integrate into other systems, and provide, you know, an SMS text message that lets you geolocate yourself through a web app. So we know exactly where you are, we confirm that, and then we pass the information to the tow truck agency, to the dispatcher, to come pick you up after you've confirmed your make and model. So we're going to channel shift as well, making the experience as easy as possible. Because you can imagine, you know, AI is improving, AI is getting better, but trying to capture an utterance of what someone is saying on a noisy roadway in California might be a little difficult. So we want to make sure that we can transition to new channels to make it easy for them to be able to visualize things and touch them. But that's an example. And what I would say across both patient engagement and this example of roadside assistance is that it's the same underlying technology: AI is our ability to communicate with humans, have a bot emulate a human interaction, and be able to train off those interactions. But you have to be able to integrate, you have to have data, and you have to work with a large number of stakeholders to make those things a success. So I'd say that's the most important thing you're doing while you're building out these types of solutions.
08:00 Deborah
What are some of the challenges you believe most companies face when they're trying to apply AI in customer engagement?
08:11 Nick
So I think the first thing you have to do is have all the stakeholders there, and you have to have a common goal. What is our common goal? You know, Verint has been in the contact center business for a long time, and a lot of things have been focused on deflection and containment. Those are measurements. But let's peel back that onion and look at it: how are we actually going to solve something for somebody? And that starts with building out KPIs. If you call a hotel, or you're engaged with a hotel, it could be through a chatbot or whatnot, I don't just want simple information, I want to actually modify my reservation. And I want to be able to do that without necessarily always going to a human. So if you say, I want to book tickets, or I want to be able to modify tickets, we want to make that our KPI: were we able to solve that in that channel and complete it front to end, without the person either hanging up or getting frustrated? Because you can have deflection, you can have containment, but did the person give up? Are we able to follow that in the data and track back to it? So that's an important piece. The second part, because AI systems are not perfect, is: can we get them to a person at that point in time of need, and what was their time to resolution? Let's measure those types of things; let's be able to measure those successes or not. But again, to do all that, it takes the right people following that common goal, with all the right stakeholders at the table. Do we have security involved? Do we have safety involved? Do we have our architect team involved, who can make sure that we can access the right data entities and get the data to actually train our system as well? Those are all very important things you want to make sure you're working towards.
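The distinction Nick draws between containment and actually solving the problem can be sketched as a simple metrics calculation. This is only an illustration; all field names and numbers here are hypothetical, not part of any Verint product:

```python
# Sketch of the KPIs Nick lists: containment alone can hide abandonment,
# so track task completion and time to resolution separately.
from statistics import mean

# Hypothetical per-interaction records: did the bot contain the session,
# did the user actually complete their task, did they give up, and how
# many minutes until the issue was resolved (None if never resolved).
interactions = [
    {"contained": True,  "task_completed": True,  "abandoned": False, "minutes_to_resolution": 4},
    {"contained": True,  "task_completed": False, "abandoned": True,  "minutes_to_resolution": None},
    {"contained": False, "task_completed": True,  "abandoned": False, "minutes_to_resolution": 12},
]

def kpi_report(records):
    n = len(records)
    resolved = [r["minutes_to_resolution"] for r in records
                if r["minutes_to_resolution"] is not None]
    return {
        "containment_rate": sum(r["contained"] for r in records) / n,
        "completion_rate": sum(r["task_completed"] for r in records) / n,  # did we solve it?
        "abandonment_rate": sum(r["abandoned"] for r in records) / n,      # contained, but gave up
        "avg_minutes_to_resolution": mean(resolved),
    }

report = kpi_report(interactions)
print(report)
```

Note how the second record is "contained" (the caller never reached a human) yet counts as a failure: the caller abandoned the task, which is exactly the gap between deflection metrics and solving something for somebody.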
10:02 Deborah
So, Santiago, the same question to you: what are examples of AI implementations you have worked with in the past? And you're in cloud data management.
10:12 Santiago
Yeah, I think Nick really touched on a lot of the practicality of artificial intelligence and machine learning systems today. It's really about how you actually deliver it in a way that makes sense for people, specifically businesspeople, or people who are not technical and essentially need to solve specific types of problems. At Cloudera, we work with a very wide range of organizations, all the way from governmental organizations through to private equity firms, across finance and across almost every single sector that exists. The solutions that we implement are incredibly varied. I think the potential for machine learning and artificial intelligence across industries has really exploded, and in a lot of ways, we still haven't realized the full potential of what can actually be achieved. I think one of the most interesting companies that we've worked with is United Overseas Bank, where they do a lot of work in terms of automating their customer success management, and also automating things like fraud detection or provisioning of loans. They have over 25 different AI use cases that they tackle quite elegantly, actually, on the Cloudera platform. You know, it's kind of a tough question to nail down one or two specific use cases, just because of the depth and breadth of what's possible with AI today. So that's really one example of many. For me, I think the most critical piece is how you enable the work to get done in an integrated fashion. A lot of times we think about AI systems, we think about machine learning systems, and we have a tendency to gravitate towards these new and innovative things. But the reality of it is that not every project is going to be successful, and a lot of times you have to take a portfolio-style approach in order to reach that success. A lot of it is venturing into the unknown and trying to solve problems that have not been solved yet.
And how do we actually get to the point where we can solve new things, or things that we haven't rationalized ourselves quite yet? And then the last piece of that is explainability. Similar to how Nick was talking about how you can automate, for example, roadside assistance and understand what a person needs, whether it's on the phone or via a chatbot or by voice recognition, how do you extrapolate that information? At the end of the day, AI should be unseen in a lot of ways. Think about, for example, however you got to the conference today, whether it's Uber or Lyft or any other rideshare app: for the user, it's pretty painless. But the reality is that that application is doing several million calculations every second, just to give you a proper time for when you'll be picked up, when you'll be dropped off, and how much it should cost, those types of things. And that's really what we should strive for: that level of invisibility, where you don't even see those AI systems, but they're delivering incredible value in a way that is incredibly user-friendly and explainable.
13:51 Deborah
Thank you, Santiago. Matt, what are some of the critical elements necessary for success in an AI proof of concept, all the way to deployment? I mean, you work with a lot of companies in different verticals.
14:05 Matt
Yeah, I think what you touched on was a portfolio of options. Don't get locked into just one KPI that you read about in Forbes or something; have a portfolio of options, because you don't know what your data set can deliver and the insights you can get out of it. So have flexibility there with your stakeholders, as well as your teams, all the way down to infrastructure. As a storage company, we have a great data platform; we think we accelerate the data pipeline. In the previous session, one of the organizations said, you know, performance is very critical to our pipeline. And that's because when we invest in these data scientists and we invest in this hardware, we want to iterate quickly, to learn and to explore: does this path provide success or not? And so by having a high-performance platform, which is what we provide with FlashBlade, a scale-out platform for unstructured data, you can leverage multiple tools and explore that data set. So if you have data all over your organization, maybe in manufacturing, maybe in this database, you can consolidate it onto a platform that provides scalable performance. And performance is not just one thing; there are multiple dimensions: the number of objects, bandwidth, operations per second, searching. If you have a platform that delivers flexible performance across all these dimensions, you can basically power all the upstream applications, and you don't have to go back to your team, re-architect, and delay your project because of a bottleneck in your workflow. So I think it's about having flexibility in both your hardware platforms and your software stacks. And then, I mean, some of the organizations coming into this say, oh, we have all these software apps, but we want to deploy them on top of Kubernetes.
And that might be a barrier in itself for organizations that don't have any Kubernetes in their environments at all. You know, I primarily work with on-prem solutions, but we do have Portworx, which is our software-defined storage layer for Kubernetes and also runs in the clouds as well. As for some of the use cases we've worked on in the past, me personally, I see quite a bit of log analytics from the security space. And some of the bigger opportunities there: two years ago, we had a big project where the CDC required lots of storage very quickly, so we were able to help them out in a very constrained environment; it was kind of fun to be involved with that project. And then I have to mention that the big one recently was Facebook, Meta. For their Research SuperCluster, they chose Pure Storage to be their partner in delivering the data platform for their cluster, which is the largest AI supercomputer in the world. That's on the scale of hundreds of petabytes and hundreds of gigabytes a second. And, you know, we don't all get to have those types of environments to play with, but it's pretty cool to be working with them and their teams and enabling their workflows.
17:47 Deborah
Great, thank you. So all three of you are vendors, and one of the biggest decisions companies make is choosing the right vendor and the right technology. Do you have any tips on how companies can best do that? What should they look for?
18:08 Santiago
I can start off. So yes, you're right, I think all of us up here are vendors of some sort, but we all address a different part of the lifecycle in terms of how you realize your machine learning journey or your artificial intelligence projects. I think the most critical piece is really having the foundation of really good, scalable data storage, as well as the ability to move across the entire lifecycle seamlessly. One of the biggest hang-ups that we often see with organizations that end up failing, or not being as successful as they would like with AI, has to do with how their different teams collaborate. We tend to think about machine learning as an algorithm or as a machine learning model, when the reality of it is that it's actually an entire process to get to that outcome. The model itself is a teeny tiny part of what is required in order to do good machine learning. And it's those junction points: where you get your data from, how clean and pure your data is, how accurate it is, all the way through to how you do your ETL or ELT processes and data engineering, and how you can begin to automate those things that then inform a machine learning model or an AI application. That's the model piece of it. And then beyond that, it's how it ends up getting used. I mentioned rideshare apps before, right? That is what we should really be looking towards: how do we make this accessible and usable for people, first and foremost? When considering a platform or some kind of vendor, I think there are sort of two approaches. You either want an application that's completed, something that you can just pick up and use.
Or, if you're building these skills in house, or maybe you're experimenting and trying to solve new and innovative problems, it's about making sure that those junction points between every single stakeholder on the technical side have the capacity to collaborate and work together. To do that properly, I think you really need a system or a platform that eases that pain, that makes it easy for people to collaborate, and that eliminates things like point solutions and silos, which, oftentimes, I see is one of the biggest challenges people deal with when picking a platform. So you might pick a platform that is designed for coding, yet the unseen part of that is that you're in the silo of a notebook environment that doesn't have that data engineering piece, doesn't have that scalable storage, and doesn't have that explainability element to it. And if you piecemeal all these things together, you're pretty much setting yourself up for failure. You need a system that's coherent and complete in order to realize those outcomes.
21:13 Matt
I think when picking a vendor, the key thing is to go find customers of that vendor, talk to them, and ask them: what was your experience like? How did you work with them? How did they help you? That's real one-on-one feedback, and these conferences are probably the best place you can go to have a conversation like that. As Pure folks, we might be wearing orange shoes or orange shirts, so you can find one of us, and we can maybe point you to a customer that might be here today to have a conversation like that. But I think it is kind of interesting: the larger the organization, the more walls there are between the different departments. And as vendors, we sometimes bridge that gap externally to get something accomplished. Every org is slightly different in that respect, but it's absolutely a team sport getting this all to work. One thing that we've seen, and this was driven really through more of our partners, is that we've been involved in these workshops that are almost about learning the latest techniques or models that are interesting. You have the data scientists from a company join, but you also invite the lines-of-business folks, so they can see what's being done, and you also invite the IT folks, because there are some people in that space who may want to get interested. You get this cross-functional knowledge, maybe ideas come up, and it really gets leadership involved as part of that process as well.
22:49 Nick
You know, there are lots of great products on the marketplace. When we go car shopping, we want to find the best product, and right now, in today's age, there are lots of good cars on the street to pick from. So I think there are some things you have to look at. One is flexibility, and two is people. Some providers of technology will force you down a path of "my platform or nothing," and I would say that would be a red flag for me. I think you've got to be able to look at a partner that's willing to be agnostic with you, meaning they'll start anywhere, and they're willing to work with other technologies and team up with, to some extent, their competitor or another vendor to provide an overall solution. Because you as a stakeholder or business owner may not have the budget, or the timing might be off from a contractual standpoint, or there are a number of different reasons why you can't do everything at once. So that flexibility is important when working with somebody. But then, two, it's the people: is that team treating you with an overall good experience? Are they available? Are they helping you drive value? And are they there at every point in time you need them? That's an important piece. You never want to be just a number when you're looking to engage with a vendor. So it's important that they build out and have those consultants in place, but also build out things like value realization services and are clear about what that is. Value realization services are a critical item to have; that way you can take your AI, train your AI, and manage your AI over time to support your business and your overall goals.
24:25 Deborah
So I want to pause for just a second and see if there are any questions from the floor. We're here to help answer your questions.
24:41 Question
I was just curious how you guys, in the past or in some of your work with different companies, have handled higher-up decision makers being a little more tenured and not wanting to move toward some of these solutions, really holding the company back from moving forward.
25:03 Nick - Answer
I mean, stakeholders can be tough. But I think a lot of times in those instances, people want to go down the path of a proof of concept, and the willingness to do a proof of concept matters, because what you have to be able to do is show the value, but you also have to do it quickly. We call it time to value. That might come through a proof of concept to show them that, hey, this is valuable, and it's easier than they think. But again, I would never say anything that we do is easy, because of the number of stakeholders involved, the different systems integrations, the channels available, training a system; those are not easy things. But I think being able to demonstrate clear objectives that cascade down, here's our main objective, here are other objectives, but also being able to do it in bits and pieces and phases, lets them see the time to value in a shorter amount of time.
25:52 Matt - Answer
Yeah, absolutely. A proof of concept is very valuable; they have to kind of see it to believe it. But the challenge is: how much does it cost to deliver one, and in what timeframe? Can you show a little taste of the value, so that they will then see the bigger picture? So you kind of have to invest a little upfront, whether it's time or some dollars; sometimes vendors will invest in that process. But I think that's the key piece.
26:24 Santiago - Answer
Yeah, and I think it's a cultural question, right? At the end of the day, if you don't have buy-in from stakeholders, then you're doomed to failure from the get-go. And sometimes that's the hardest part, because if you do have people who are tenured, who have been doing things a certain way for a long period of time, that cost of change might be too drastic for them. And the cultural piece there is that you have to be willing to take that leap of faith in a lot of ways, and it's never easy, right? So I think at a foundational level, even outside of technology, it's about fostering that culture of innovation and progress inside of an organization. And once you have that culture, that's when you should start looking at technologies and hiring data scientists and those types of things. You can't just brute-force it. And if you have people higher up in the organization who are very skeptical of something like an AI system or an AI solution, then how are you going to get them to trust what your model is telling them they should do? It's pretty much an impossible fight. So yeah, that's what I think: fundamentally, the cultural piece has to be in place for these things to be successful.
27:41 Nick - Answer
I think it would also be to figure out what their driver is. There's going to be some ROI mechanism driving them, something they've got to provide, and they're going to have to justify the why through that, or through your project. So I think that'd be a critical component. It might be cost; if it's not cost, it's something else. There's going to be something that you want to drive, so look at how that underlying driver affects your project and what value you can drive towards it.
28:08 Deborah
Well, thank you, that was a great question. And related to that is the cost question. How should internal data science teams handle it when the CFO or somebody in finance asks, how much is this going to cost me? How should they handle that?
28:27 Santiago
Yeah, sure, I'll chime in there. I think it's a tough question, really, because experimenting and building machine learning models, or training machine learning models that then power AI systems, is not cheap. Overall, the way that we approach it at Cloudera is by really embodying the notion of a true hybrid data cloud. You know, we've seen this mass migration to, essentially, the public cloud with AWS or Google Cloud, etc., and in a lot of ways, that's really solved a lot of the challenges of just getting up, getting started, and getting going. The challenge there is that as your operations grow, those paradigms can become quite cost prohibitive. At Cloudera, we work with AWS or Google Cloud; we're a complete multi-cloud solution, so you can pretty much use whatever you want there. But we're also a true hybrid cloud in the sense that we provide a private cloud experience. What that essentially means is that it may make more sense economically to run something in your own data centers, locally. Let's say, you know, it's a long, perpetual training cycle, or something you're trying to do in real time with models at the edge, or in, for example, manufacturing use cases. It might make economic sense to have those workloads on premises. And with the separation of compute and storage that's been delivered across these clouds, you can do things like burst certain jobs to a public cloud, to leverage those resources and meet your SLAs in an effective way, without having to move absolutely everything to run in perpetuity on a consumption basis. So it's that flexibility of being able to do things where they make the most sense, economically and in terms of what can be done with your computing power. It's that flexibility, I think, that's paramount in managing cost.
30:40 Matt
Cost is a huge factor. And what we're seeing across the board is the same thing: our customers are adopting a true hybrid model. You know, I think a lot of the learning will always be in the cloud, and the experimentation may be there as well, but it depends on where your data sits and where it's available. Maybe your user experience, maybe your inference, is in the cloud, because that's where your customers are interacting, or maybe they're on prem; it kind of depends. But really, when you get to the operational stage, we see customers saving 60 to 70% compared to their cloud costs when they move that on prem. And on top of that, the researchers are saying, hey, my response time is faster, my user experience is better, and they can actually get more done quicker and save costs. But there's an investment there, and, you know, we try to help folks do that as easily as possible. We kind of cross the border from traditional IT infrastructure into infrastructure that supports analytics workloads, so we can help those teams build those systems, build AI-ready infrastructure stacks there. I think, really, in the last maybe two years, the on-prem experience has caught up to where the cloud was: tracking your models, training, all the tooling, and the kind of environment your data scientists expect and want is now there on prem as well. So it's kind of a shift. That's a fast-moving space.
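The economics Matt and Santiago describe, that an always-on workload can be cheaper on prem while bursty experimentation stays in the cloud, comes down to a break-even calculation. A rough sketch, with entirely hypothetical prices (real rates vary widely by provider and hardware):

```python
# Hypothetical break-even: a perpetual (24/7) training workload on
# consumption-priced cloud GPUs vs. amortized on-prem hardware.
cloud_rate_per_gpu_hour = 3.00    # assumed on-demand cloud price
onprem_capex_per_gpu = 30_000.0   # assumed purchase + install cost
onprem_opex_per_gpu_hour = 0.40   # assumed power/cooling/admin cost

def breakeven_hours(capex, cloud_rate, onprem_rate):
    """Hours of continuous use after which on prem becomes cheaper."""
    return capex / (cloud_rate - onprem_rate)

hours = breakeven_hours(onprem_capex_per_gpu,
                        cloud_rate_per_gpu_hour,
                        onprem_opex_per_gpu_hour)
years = hours / (24 * 365)
print(f"break-even after {hours:,.0f} GPU-hours (~{years:.1f} years of 24/7 use)")
```

Under these assumed numbers, a workload running continuously for more than about a year and a half favors on prem, while short experiments never reach break-even, which is the hybrid split both panelists advocate.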
32:25 Nick
I think there are three categories that sit up front for me. Are you providing an improved experience? An improved experience will typically drive towards revenue. Or are you in a cost-savings situation? Does it let us reallocate resources in another way, because we can automate something? I think those are key areas to look at in your evaluation of your ROI. You know, if you are providing a service, any time you bring in AI, especially when it's outward-facing to your consumers, it's got to be of value, it's got to be a good experience. It can't just be a service where they get a bad experience and still have to come back around and talk to you in another manner. That's not a cost savings, and it's a poor experience, so you're going to fail to hit your ROI metrics. But if we can provide a good experience, if we can help you find that physician, schedule an appointment with that physician, and then make a payment to the hospital system, all in a very efficient and meaningful way, that's an overall good experience. And it's going to drive revenue and save costs for your organization.
33:23 Deborah
So we have a few minutes left. I'd like to open the floor to questions. Any questions?
33:37 Question - Deborah
I have a question. Can you talk about the decision process in deciding whether to build or buy? Probably most enterprises are somewhere in the middle; they do a little bit of both. Any best practices there?
33:56 Nick - Answer
Yeah, that's a good question. It's something we face often in our world; it's a big do-it-yourself market, especially with conversational systems. If you've got the people who can take those systems on, do it on their own, and know how to scale them with experience, I think you have to take a hard look at doing that. If you don't have those people, then I think it's a bad situation, because it takes a long time to bring that expertise into your organization and to move beyond just a POC situation. Look at a lot of big organizations out there with tons of POCs and skunkworks projects going; how many of them are able to take that work, make it go live, and scale it? So it will depend on your people. There are a lot of great products out there, and systems are intuitive today. So I would say it depends on your people.
34:48 Matt
I think with more models becoming available, and the things you can do just on your laptop becoming more and more impressive these days, there's also a component of operationalizing it. Can you scale it? Can you make sure it has high availability in your environment? Is it replicated? Is it backed up? All the things traditional IT folks ask. The team says, okay, we've got a model, we're ready to go, and then IT goes, well, how are we going to make sure it keeps running? How are we going to monitor it? All these things come into play. So even if you do do it yourself, having a partner vendor assist helps in some aspects. Every org is different, depending on its size and staff, but it's a very case-by-case engagement.
35:43 Santiago
Yeah, I think it comes down to what problems you're solving, whether you make the decision to build or buy. I would say a very large number of problems that can be solved with machine learning and AI systems are not something you can just go out and buy. A lot of the problems are novel, and are emerging in terms of what is possible. If you're trying to solve novel problems, things that don't exist yet, or you're trying to get ahead of the curve relative to your competition, then in a lot of ways it makes sense to build in house, scale those skills inside your organization, foster that culture, and venture into the unknown, taking a portfolio approach to how you solve problems and create machine learning models that then solve greater problems down the line. You have to keep that in mind as well. The flip side is that for more tried-and-true use cases, chatbots being a perfect example, there's no need to reinvent the wheel. Why do it yourself when you can go buy a product that has already been proven and is solving the problem for you? It doesn't make sense to reinvent something that already exists. So I don't think it's an either-or question; it just depends on what problem you're solving. For the majority of organizations, the answer is both: some problems you solve by purchasing an AI solution, and sometimes those solutions don't exist and an in-house team can get you ahead of that curve.
37:25 Matt
And then there's the fast pace of everything. The new GitHub Copilot, for example, is amazing if you've ever used it. But there are concerns around it putting licensed code from other projects into yours, because it's trained on all this data. Are we introducing some kind of legal risk into our product? Even when you go buy, we're still learning all the risks associated with some of these AI products.
37:56 Deborah
Do we have time for maybe one more question? So I guess, Matt, following up on that last point you made: how should one treat the risks of these models, like Copilot, when using them?
38:16 Matt
Well, I mean, I think there's an ongoing conversation in the community right now about these risks, but that's probably a whole talk on its own. I don't know if you guys have any comments on that.
38:31 Deborah
So avoid it for now.
38:34 Santiago
Yeah, I mean, it's a question you could literally talk about for hours if you had the time. I don't really have too much to add there.
38:46 Nick
So the question is about security and using AI with consumers. I think you have to have an organization that's been through a given domain or has that expertise on its team. We've done a lot within patient engagement: providing medical recommendations to patients about where to take their injection, following up with them on their side effects, and determining when they should escalate a call to the physician or not. You have to go through a number of processes to make that available, and you have to have those compliance teams built into your own team. Those things also have to translate into the product itself. Approvals, and reviews of those approvals, all have to be things that provide an easy experience. A medical, legal, or regulatory body does not want to review a system in an Excel-sheet manner; they want to log into a system themselves, go into an authoring tool, and conduct those reviews there. So when you pick your vendor, you have to be able to support that. The vertical you're in will drive the different security requirements, whether it's PCI compliance, HIPAA, pharmacovigilance, whatever it may be. You have to build that into your practice and into the technology you're providing for a given vertical, and again, it will be a case-by-case basis within a given vertical.
40:06 Deborah
Well, wonderful. We're out of time and I just want to thank our excellent panelists. Thanks so much for being here.