UBC News

Real Talk About Ethical AI Usage in Pharmaceuticals: Experts Share Viewpoints

Episode Summary

Too many pharmaceutical AI projects fail spectacularly because companies treat AI like simple software instead of the fundamental shift it represents. Discover why security nightmares, regulatory minefields, and disconnected data systems can derail billions in investments, and what the winners do differently.

Learn more: https://www.gamma-solutions.llc/

Episode Notes

You know what keeps pharmaceutical executives up at night? It's not the competition. It's not even the FDA breathing down their necks. It's the AI system they rushed into production six months ago that's now spitting out results nobody can explain, while regulatory auditors ask questions they can't answer. Here's the brutal truth about AI in pharmaceuticals: as much as seventy percent of these projects crash and burn, and it's rarely because the technology doesn't work. The technology is incredible. The problem is that healthcare organizations often treat AI like it's just another software upgrade when it's actually a fundamental shift in how decisions get made. And in an industry where one mistake can cost lives and billions in lawsuits, that attitude creates disasters.

Here's the kind of thing that can happen behind closed doors. An employee in drug discovery needs to summarize some research data quickly, so they copy it into ChatGPT. Seems harmless, right? Except that data now lives on external servers forever, and it might contain protected health information or proprietary compound data worth more than most people will earn in ten lifetimes. Major banks and tech companies have already banned public AI tools after discovering staff were accidentally leaking sensitive information. For pharmaceutical companies, the stakes are infinitely higher.

The security nightmare is just the beginning. The real problem runs deeper, into the foundation of how pharmaceutical companies operate. Think about your typical drug company. They've got laboratory systems that speak one language, manufacturing systems that speak another, and clinical trial databases that follow completely different rules. A single drug compound might have three separate identifiers across different departments. Now try to train an AI on that mess. Even the smartest algorithm in the world can't make sense of disconnected, inconsistent data.

Then there's the regulatory minefield. FDA regulations were written for traditional software, where you could trace every decision back to a human being. But AI systems often learn and evolve in ways those rules never anticipated. Regulators want to know exactly how your AI reached its conclusions, but many machine learning models function as black boxes where even the developers can't fully explain the logic. That gap between what AI does and what regulators require creates compliance disasters that can blindside unprepared organizations.

Here's where companies can go wrong. They deploy AI without documenting how they built their models, what data trained them, or how they validated performance. When auditors show up asking questions, missing documentation transforms minor concerns into major findings. Warning letters follow, operations halt, reputations crumble. It has happened, and it's ugly every single time.

But here's the thing: AI done right actually works. The companies succeeding with AI aren't the ones with the fanciest technology or the biggest budgets. They're the ones who started small, built proper oversight from day one, and never forgot that in regulated environments, humans must remain accountable for decisions that affect patient safety. The winners start with focused problems where automation delivers measurable value without risking lives. Organizing clinical trial documents and pulling key details from adverse event reports make perfect starting points. These tasks deliver quick wins while keeping humans firmly in control of what matters most.
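To make that human-in-control posture concrete, here is a minimal sketch in Python, assuming a hypothetical adverse event report laid out as simple "Field: value" lines; the field patterns and escalation rule are invented for illustration, not any specific system's format.

    # Minimal human-in-the-loop sketch: pull key fields from an adverse event
    # report and escalate anything uncertain to a human reviewer.
    import re

    # Hypothetical report layout: one "Field: value" pair per line.
    FIELD_PATTERNS = {
        "patient_id": re.compile(r"Patient ID:\s*(\S+)"),
        "suspect_drug": re.compile(r"Suspect Drug:\s*(.+)"),
        "event_date": re.compile(r"Event Date:\s*(\d{4}-\d{2}-\d{2})"),
    }

    def extract_fields(report_text):
        """Return (fields, needs_review); any missing field forces human review."""
        fields, needs_review = {}, False
        for name, pattern in FIELD_PATTERNS.items():
            match = pattern.search(report_text)
            if match:
                fields[name] = match.group(1).strip()
            else:
                needs_review = True  # never guess: route the report to an expert
        return fields, needs_review

The point is the escalation rule, not the regular expressions: the system extracts what it can and hands anything ambiguous to a person, which is exactly the oversight posture described above.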
These early successes build confidence and teach teams what AI can and cannot do, making them ready for bigger challenges down the road.

Successful companies also establish governance committees before deploying anything into production. Quality experts, regulatory specialists, IT leaders, and business stakeholders review every proposed AI application together. They determine which decisions AI can handle independently and which require human verification. Clear boundaries prevent dangerous confusion about when systems should escalate situations beyond their training to human experts.

Validation becomes part of development, not an afterthought. Companies need to keep detailed records showing what data trained their models, how they cleaned that information, and what performance benchmarks the system achieved. They should set up monitoring that never stops, because AI performance drifts as conditions change. Put thresholds in place to trigger alerts when accuracy drops or input data shifts, catching silent failures before they compromise quality (a short sketch of this kind of check appears at the end of these notes).

Smart organizations also protect information while enabling progress. Clear policies should prohibit entering confidential data into unapproved AI tools, backed by training that explains why these rules exist. Enterprise platforms operating within the company's own infrastructure should provide safe environments for sensitive work. Encryption should protect data everywhere, and role-based access ensures only authorized people reach specific datasets.

The infrastructure matters too, but not in the way most people think. Companies with clean, organized data see returns faster than those still fighting basic quality problems. Legacy systems that can't share data with modern platforms create bottlenecks that expensive workarounds can't fix. The gap between what your infrastructure supports and what AI needs determines how much friction you'll face.

But technology alone never determines success. Failure rates climb toward seventy percent because organizations underestimate the human side. Pharmaceutical companies with rigid structures and risk-averse cultures struggle more than flexible organizations comfortable with experimentation. Training programs that help employees understand AI capabilities and limitations prevent unrealistic expectations and unnecessary fears. Internal champions bridge gaps between IT teams and end users, smoothing adoption.

The future of pharmaceutical work involves AI systems partnering with human experts to solve problems neither could address as effectively alone. Organizations that build proper governance, start small, validate rigorously, and invest in their people create sustainable advantages. Those rushing in without frameworks waste millions on projects that never deliver, or create compliance nightmares. Click on the link in the description to learn more about implementing AI responsibly in regulated pharmaceutical environments.
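As promised above, here is a minimal monitoring sketch in Python, assuming a stream of predictions with ground-truth labels and a baseline saved at deployment; the accuracy floor, drift limit, and function names are hypothetical illustrations, not a particular platform's API.

    # Minimal continuous-monitoring sketch: alert when accuracy drops below a
    # validated benchmark or when input data drifts from the training baseline.
    import statistics

    ACCURACY_FLOOR = 0.92  # hypothetical benchmark set during validation
    DRIFT_Z_LIMIT = 3.0    # flag input means more than 3 standard errors away

    def rolling_accuracy(predictions, labels):
        """Fraction of recent predictions that matched the ground truth."""
        correct = sum(p == y for p, y in zip(predictions, labels))
        return correct / len(labels)

    def input_drift_z(recent_values, baseline_mean, baseline_std):
        """Z-score of the recent input mean against the training baseline."""
        std_err = baseline_std / (len(recent_values) ** 0.5)
        return abs(statistics.mean(recent_values) - baseline_mean) / std_err

    def check_model_health(predictions, labels, recent_feature, baseline):
        """Return a list of alerts; in production these would page the quality team."""
        alerts = []
        accuracy = rolling_accuracy(predictions, labels)
        if accuracy < ACCURACY_FLOOR:
            alerts.append(f"accuracy {accuracy:.3f} below floor {ACCURACY_FLOOR}")
        drift = input_drift_z(recent_feature, baseline["mean"], baseline["std"])
        if drift > DRIFT_Z_LIMIT:
            alerts.append(f"input drift z-score {drift:.1f} exceeds {DRIFT_Z_LIMIT}")
        return alerts

A real deployment would persist these checks and tie the alerts back to the documented validation benchmarks, but the shape is the same: explicit thresholds, continuous checks, and a defined escalation path.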

GAMMA SOLUTIONS, LLC
City: Newton
Address: 45 Nonantum St.
Website: https://www.gamma-solutions.llc
Email: ga.morin@gamma-solutions.llc