AI Lens
AI news, hot topics, advancements, and discussions about how AI is reshaping business and society.
Your focused view on the emerging hot topics in the Age of A.I.
Season 1 Episode 7: Critical Tips When Adopting AI in Your Business, an Interview with Uvika Sharma and Padmini Soni
This episode is our podcast interview at CES 2026 with two fabulous speakers and experts in implementing AI in business: Uvika Sharma and Padmini Soni.
This episode covers crucial issues to consider when adopting AI in your business. We also discuss major considerations in retaining your experienced employees to manage the AI. Join us for these hot topics in AI.
Be sure to watch for the new book Uvika Sharma and Padmini Soni are co-authoring with Noelle Russell. In it, the authors give tips that make it easier to implement AI in business. It's scheduled to be published in early 2026 and will be available on Amazon.
This is Liz with AI Lens, your focus on the future. I've got some fabulous guests we're interviewing today. We went to a session today that was fantastic. It was all about this: when you're bringing AI into your business, what is the checklist you need to really succeed? And I have two of the speakers here: Uvika Sharma and Padmini Soni. They were fantastic. I'm going to start by just asking about your general background and how you got into AI. Padmini?
Speaker 1: Thank you. So glad to be doing this here at the Bellagio in the Spa Tower. I've been in the tech space for over 25 years. I love data, love everything to do with data. I got into AI about six years back, just out of my fascination with data, but I got into responsible AI through a personal story related to my dad. So I'll cut to the chase and then come to the responsible AI story. ChatGPT was released on November 30, 2022. Four days prior to that, my dad had a nasty fall that left him bedridden and eventually led to his passing six months later. Thank you. My days got crazy because I was balancing a full-time job and taking care of him full-time. Because ChatGPT was so new, I started dabbling with it and trying to plan my busy day around it. And me being from the data and AI space, I could tell that it was hallucinating when I started asking about medicines and this and that. Everybody was sort of wide-eyed about ChatGPT, and I felt that I was seeing these hallucinations, which others probably just took as is. That's when I realized that I have a role to play in this. It was almost like my calling, because I started approaching it not just as a technologist, but as a daughter and as a responsible citizen. So that's my foray.
Speaker 3:That's awesome. Okay, Uvika, how did you get into AI?
Speaker 2: Hi everybody. My name is Uvika Sharma, and I have been in the data and AI space for 23-plus years. My story is that I wanted to be a pre-med student, and I even got into med school, the accelerated program, and then I realized that I cannot see blood, so I ended up as an accidental computer scientist on Wall Street, in data warehousing. That was my journey, and from there I took on many different roles, and data was always weaving the path, whether it was working for Accenture on large transformation projects, or working on Wall Street in financial services on large digital transformation projects, or even going back to grad school and doing a lot of different things. For me, data was always weaving the path. I remember when I was in the pharmaceutical industry doing compliance and investigations training, I asked myself, okay, what's next? Around the same time, around 2019, and at that time I did not know Padmini, I was thinking about the next thing I needed to do. It's interesting when you look back at your path and realize you were actually chosen to do this, because there were many pivots I made. I tried running away from data, because curiosity was always leading me in different directions, and then you realize: I'm actually good at it, I shouldn't be running from it. So in 2019 I realized the next thing is AI, and I'm actually good at data, and I can channel this into something more meaningful. I started training myself, and now I run a consulting organization where I help companies adopt AI responsibly through AI literacy and adoption, like the workshop that you came to, and by making sure that people have the right guardrails in place. We also help them with governance, risk, and compliance.
We work with a lot of big tech clients and small to medium-sized companies, and I also teach founders at Cornell Tech about building responsibly. So I'm super excited about this time.
Speaker 3: It was really funny: when we were talking about the different projects I have, the first question that popped up from both of you was, what kind of guardrails do you have? That is so top of mind for you. Can you explain, Padmini, why guardrails are so important in today's world?
Speaker 1: One thing we have to understand about AI is that it's not a deterministic system, it's a probabilistic system. What do I mean by that? When you input a prompt and look at the response, that response changes every time. You can have the same prompt, you can keep copying and pasting it and hitting enter every single time, but your response is different, because at the back end there is probabilistic, statistical analysis happening, and the responses that come out depend on what the weights were at that particular time. With that in mind, we absolutely need to understand that because it's a probabilistic model, and because it's trained on tons of data from the internet, its responses reflect the biases that exist within that data. So it is upon us, the builders of AI and the users of AI, to put guardrails in place so that we don't negatively impact the user. What kinds of guardrails? We are talking about understanding bias and, on the flip side of that coin, fairness, along with security, privacy, transparency, explainability, and sustainability. All of these are part of responsible AI, and all of these form the guardrails. We have to do this so that we don't negatively impact our consumer.
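Padmini's point about probabilistic systems can be sketched in a few lines of code. This is a toy illustration (the vocabulary and probabilities are made up, not from any real model): a language model produces a probability distribution over possible next tokens, and one token is sampled from that distribution, which is why the same prompt can yield different responses on different runs.

```python
import random

def sample_next_token(probs, rng):
    """Sample one token from a {token: probability} distribution.

    This is the key step that makes LLM output non-deterministic:
    the token is drawn at random according to the weights, not
    chosen by a fixed rule.
    """
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Made-up distribution over continuations of the same prompt.
probs = {"great": 0.5, "good": 0.3, "risky": 0.2}

# Two runs with different random states: same "prompt", same
# "weights", yet the sampled continuation can differ.
run1 = sample_next_token(probs, random.Random(1))
run2 = sample_next_token(probs, random.Random(7))
print(run1, run2)
```

Real models add temperature, top-p filtering, and billions of parameters on top of this, but the core reason the response "changes every time" is exactly this sampling step.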
Speaker 3: One thing that came across in today's session was that in the first planning stage, when you're thinking about your goal, you also need to start thinking about the guardrails as part of that foundation. Exactly. Do you want to expand on that?
Speaker 2: It's almost like two sides of the same coin, right? On one side you're trying to innovate, but on the other side you need to think about the impact. You need to be able to manage the risk. So if you say, I want to do this, then hang on: what is the effect of this? What is the impact? And when you talk about the impact, you need to dig a little deeper and ask, okay, who is going to be impacted? Is it going to be a positive impact or a negative impact? And then you also need to think about the worst-case scenario. You might think, oh, this may not happen, but how do you know? You have to account for those things, so you need to start thinking about them up front.
Speaker 3: Right, and when we were talking about going through and doing checks on all this: that's why we still need people, right? 100%. 100%! Can you expand on that? What kinds of roles do you see people playing now, where their job descriptions are going to change?
Speaker 2: A couple of things. No doubt, AI is changing the way we work, so people need to be able to adapt. But where I feel we are still going to be relevant is subject matter expertise. As Padmini mentioned, these are probabilistic systems. They are learning from data, which is the past. AI can do prediction, but the judgment always needs to stay with the human. It's almost an augmentation of capability. Of course AI is going to help you become more efficient and faster, and yes, some people are using it to become smarter, to learn new things. But who makes the call, what decision is made, and how the innovation happens: that definitely needs to stay with the human. Of course, we'll also need ethicists, and people who can oversee the AI, like AI managers. And you must be hearing from a lot of people that agents are now showing up on your org chart, so your AI is going to be a collaborator. You should think of AI as a partner, as a collaborator, but AI should never be a replacement for a human. If a person moves on to do something different, it has to be something better. The AI takes care of the low-level tasks so that the freed-up time can be used for something better.
Speaker 3: All right. We were talking today during the session about how a lot of these companies are laying off people with 25-plus years of experience. Can you explain why that could be a huge mistake for these businesses?
Speaker: I know, there were, what was it, like a hundred and seventy thousand people laid off in the last couple of months or something, and a lot of that gets blamed on AI taking over the mundane tasks, you know.
Speaker 1: I always think of it like the dishwasher. I'm so glad the dishwasher exists, because I don't want to be washing those dishes. I want those tasks gone from me, and I wanted something to replace them, and that was the dishwasher. Great. Similarly, we want these kinds of mundane tasks to be replaced. But then there are other, assistive tasks, like Uvika said, where AI brings an augmentation of your human capabilities. So I do worry when people are laid off in droves. But what is also possibly happening is that any layoff nowadays gets directly linked to AI. We need to dig a little beneath the news and see what the actual reason for those layoffs is. Maybe not all of them are because of AI, but because of this wave, any news we come across, we think it's all because of AI, and it's probably not. And if it is, then I think we need to take a step back, re-evaluate, and ask: do we really need to get rid of so many people? Because you will see there are companies hiring back employees. It's like the 95% that are not getting the ROI are now bringing back people and saying, okay, we do need you to do the work.
Speaker 3: Right. Well, during COVID, a lot of people got used to being home, and then as employers we had the quiet quitting situation. So a lot of this could be that: dealing with employees who don't want to be there. Yeah. But it's very, very interesting to me, because a lot of those 25-year employees could be managing the AI, since they know the historical data.
Speaker 1: Yeah, right. And you can't replace that. I also feel this is the time when you can have these unicorns and solopreneurs. For those who've lost their jobs, maybe find that little niche where you can build a company and become a billionaire. Open a new door.
Speaker 2: It's also the era of opportunities, as Padmini said. But to your point, and Padmini touched on this too, companies need to take a step back and assess: are we even doing the right thing? Because you need to be able to oversee the AI, and ultimately subject matter expertise counts. I'll give you one example. Recently I was with a client, and we were trying to figure out how to leverage AI to summarize a lot of the news, especially in the tech space, and we leveraged AI to do that. But I remember telling one of the colleagues who has deep expertise in that space: please make sure you check this, because the technology hallucinates. She dissected the entire report and could spot exactly where the AI made a mistake. For people who don't have the subject matter expertise, it can sound very convincing and look right. Confidence does not mean it's correct, and that's where subject matter expertise comes in very handy.
Speaker 3: I've found, doing research online as an attorney, that the hallucinations have gotten less and less frequent, especially with Westlaw and Lexis having implemented AI, but they're still there. And everyone's heard the story about the attorney, I think he was in New York, who got sanctioned over hallucinated case citations. That was three years back, but it still happens even now, even with Deloitte, the case that Padmini pointed out.
Speaker 2: Deloitte did that two times, right? And of course I love Deloitte, no offense, it's a fabulous firm, but it's a big problem. The point is that people need better awareness, better AI literacy, to be able to recognize that these technologies are not perfect. Thank god these technologies are there, but be aware of the risks and where the problems are.
Speaker 3: And it also happens when you're first implementing it, first using it, and it's not properly trained. Just like when you have a new employee. If I have a new intern or a new clerk or a new paralegal, I'm going to double and triple check their work.
Speaker 1: I always look at it the same way: we bring someone new on board and expect them to already know everything.
Speaker 3: Now, tell us about the book you're writing together.
Speaker 2: Yes, yes. So Padmini, myself, and Noelle Russell are the three co-authors, and in a nutshell, we are making it easy for anybody to build AI solutions. We're almost there; today we were doing the final iteration on the book chapters. Essentially, it teaches people how to leverage OpenAI's GPT Builder tool to build custom GPTs, so you can literally have your own mini AI agents doing things for you. But you have to train them, you have to give them knowledge, and you have to set a particular kind of tone so that each one behaves in a certain way. And since Padmini, Noelle, and I are champions of responsible AI, there is a lot ingrained in it about safety, trust, responsible AI, the whole nine yards. That's essentially what it is. Padmini, feel free to add anything else.
Speaker 1: And we have actual examples you can build. You can take the prompt, put it in, and create your own custom GPT. We've built things like a trip planner, a Pizza Pal, a Career Buddy, things that are very relevant right now. So you can take that and build one for your community: go to a pizza place in your community, take their information, and we show you how to build it. You can use it, and probably even sell it to them.
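The ingredients Uvika describes for a custom GPT (instructions, knowledge, and a set tone with guardrails) can be sketched as a simple configuration bundle. This is a hypothetical illustration only: the function, field names, and the "Pizza Pal" details are made up for this example and are not OpenAI's GPT Builder schema or anything from the authors' book.

```python
def build_custom_gpt_config(name, tone, knowledge_files, guardrails):
    """Bundle the pieces of a custom GPT into one config dict.

    Illustrative only: field names are invented for this sketch,
    not taken from any real GPT Builder API.
    """
    return {
        "name": name,
        # The behavioral instructions: persona, tone, and a basic
        # responsible-AI rule against guessing.
        "instructions": (
            f"You are {name}. Respond in a {tone} tone. "
            "If you are unsure of an answer, say so instead of guessing."
        ),
        # Documents the GPT can draw on, e.g. a local business's menu.
        "knowledge": list(knowledge_files),
        # Explicit guardrails, e.g. topics the assistant must refuse.
        "guardrails": list(guardrails),
    }

config = build_custom_gpt_config(
    name="Pizza Pal",
    tone="friendly, concise",
    knowledge_files=["menu.pdf", "hours.txt"],
    guardrails=["no medical or legal advice"],
)
print(config["instructions"])
```

The point of the structure is the one the guests keep returning to: the knowledge and the guardrails are defined up front, as part of the build, rather than bolted on after something goes wrong.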
Speaker 3: Oh, that's fabulous! So do we have a title for your book yet? We're working on it. You're working on it; I'm motivated. Okay, so look for it in the near future. I'll have the authors' names in the show notes, and it'll be on Amazon. Yes. We're all about promoting businesses to use AI, but to do it correctly, with guardrails and safety, and these are the ladies who are the experts at that. Thank you both so much; this has been wonderful. I enjoyed the session today, and our talk has been fabulous. Again, you're watching AI Lens, your lens into the future, where we provide hot topics, hot takes, discussion, and news on AI.