Why Smarter AI is People-Led

The key to ensuring AI remains safe, responsible and effective is maintaining focus on the people running it.

Rick Kreuser, Technical Director, SSG AI CoE

January 6, 2025

4 Min Read

Ethics. Responsibility. Governance. Trust.

These are all major AI-related concerns for business leaders right now. In a recent survey, 76% of CIOs said their organizations do not have an AI-ready corporate policy covering operational or ethical use. Many businesses also see this gap as a barrier to defining their AI vision, given concerns about regulatory and ethical risk.

We are seeing that attitudes toward ethics and responsibility when planning IT and technology investments have changed. A decade ago, it was more of an afterthought. Today in the AI age, it’s a board-level topic. But it isn’t easy to achieve – let’s explore why.

Why Is Responsible and Ethical AI So Tough?

The very nature of the technology is a challenge. Generative AI relies on the training data, architecture, and AI engine to produce unique results. If not designed carefully and monitored continuously, you could run into bias. For example, financial data might reflect problematic gender pay gaps or historical data might bring outdated cultural norms to outputs. You need a solution for that.

Other challenges include:

  • Legacy governance mechanisms that businesses will need to rework in some capacity; many of these models had varying degrees of preparedness even before the Gen AI boom.

  • Skill gaps. AI is a very new field – and companies are concerned about having the right expertise to deliver it quickly and effectively.

  • Getting people on board. People generally fear AI, which will slow adoption.

  • Lack of uniform regulations. There are regulatory gaps in LLM safety, content provenance, and risk management, which AI companies are working together to fill.

Why Being People-Led Is the Answer

Lenovo and NVIDIA have created an AI readiness framework with four pillars in a very intentional order: Security, People, Technology, and Process.

“Security” comes first, to make sure you can prevent harm, bias, and unintended or improper use of AI. Then comes “People,” to ensure good change management and that properly trained personnel are involved throughout the AI journey. “Technology” and “Process” come later because, without robust security and people on board, you won’t realize the full value of the AI technology anyway.

In short, you should be people-led instead of technology-led. Here are some ways to do that:

#1: Ensure Explainability with Constant Human Feedback

Explainability is key. You must monitor how and why AI is providing each output and – critically – ensure it stays on track. Explainability usually comes in two forms:

1) White-box solutions: Semantic AI for example, where you can map the logic, training data, inputs, and prompts. As you test it, you can understand where outputs come from and refine from there.

2) Black-box solutions: Closed systems like ChatGPT, or open models whose inner workings remain opaque. These are less transparent and explainable, so it gets trickier. You always need humans to judge the inputs, infer how reasonable the outputs are, then refine from there.

Either way, you need humans to constantly monitor the LLM. It needs to be stress-tested. The model may drift, gain bias, or get different responses wrong. It’s going to learn depending on who’s using it and what data’s put into it – you must consider that in your monitoring framework.

The best large-scale generative models typically hit 80-85% accuracy in benchmark tests. Human feedback is instrumental in bridging that remaining 15-20%.
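The continuous human monitoring described above can be sketched in code. The following is a minimal, illustrative example (class name, window size, and threshold are all assumptions, not part of any Lenovo or NVIDIA framework): human reviewers record whether each model output was acceptable, and a rolling approval rate flags possible drift.

```python
from collections import deque


class HumanFeedbackMonitor:
    """Tracks human verdicts on model outputs and flags possible drift
    when the rolling approval rate drops below a threshold.
    Illustrative sketch only; tune window/threshold to your use case."""

    def __init__(self, window: int = 100, threshold: float = 0.85):
        self.window = deque(maxlen=window)  # keeps only the latest verdicts
        self.threshold = threshold

    def record(self, approved: bool) -> None:
        """Record one human reviewer's verdict on a model output."""
        self.window.append(1 if approved else 0)

    @property
    def approval_rate(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 1.0

    def drifting(self) -> bool:
        """True once enough reviews exist and approval falls below threshold."""
        return len(self.window) >= 20 and self.approval_rate < self.threshold


monitor = HumanFeedbackMonitor(window=50, threshold=0.85)
for verdict in [True] * 15 + [False] * 10:
    monitor.record(verdict)
print(monitor.approval_rate)  # 0.6
print(monitor.drifting())     # True
```

In practice the verdicts would come from a review queue rather than a loop, and a drift flag would trigger retraining or prompt refinement rather than just a boolean.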

#2: Base Governance on Transparency and Alignment

Companies need a level of governance where transparency is everything. It ensures people are always accountable for their actions.

For example, let’s say you’re worried about IP protection. Put a broker in place so that, if you use a third-party LLM, you must go through a gateway that tracks the prompts and responses. That means people will think twice about what they’re sending. Why? Because transparency is everywhere, placing the onus on people to self-regulate and do the right thing.
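A gateway of this kind is conceptually simple. The sketch below is a hypothetical illustration (the `call_llm` stub and `gateway` function are invented names, and a real deployment would be an HTTP proxy with proper storage and access control): every request passes through one choke point that records who sent what and what came back.

```python
import datetime

AUDIT_LOG = []  # in a real system: an append-only, access-controlled store


def call_llm(prompt: str) -> str:
    """Stand-in for a third-party LLM call. Hypothetical; replace with
    your provider's actual client or HTTP request."""
    return f"response to: {prompt}"


def gateway(user: str, prompt: str) -> str:
    """Broker that all LLM traffic must pass through. Logging every
    prompt/response pair makes usage transparent and auditable."""
    response = call_llm(prompt)
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    })
    return response


reply = gateway("alice", "Summarize Q3 sales figures")
print(len(AUDIT_LOG))        # 1
print(AUDIT_LOG[0]["user"])  # alice
```

Because users know the log exists, the gateway deters careless sharing of IP as much as it detects it; the audit trail also gives governance teams concrete data to review.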

Another essential part is top-to-bottom alignment. Make sure AI initiatives fit the outcomes users, teams, and customers expect. This will help keep everything ethical and responsible – while reducing the risk of skills gaps as the resources you have will match overall business strategy.

I’m not saying this is easy. Does corporate strategy always match what people are actually doing? Many organizations found that tough even before AI. But it should be a priority here.

#3: Get People on Board With a “Show Me” Model

When convincing teams to adopt AI, use a “show me” model. Demonstrate clearly how it works and the immediate benefits. How will it make their lives easier and more effective?

Here’s an example. Say you’ve got an NVIDIA NIM inference microservice that can accelerate the sales pipeline by 30%. Lead with that to make the benefit immediately clear. If you don’t, people are more likely to distrust the AI solution or simply not use it.

People Are Everything

You need people at the center of your AI adoption strategy as, after all, they’ll be the ones actually using it.

A robust governance framework for AI is essential to ensure the safe and responsible deployment of emerging solutions. This will be critical, as responsible AI affects both the AI industry as a whole and the sectors it serves, such as industrial digitalization, retail, and financial services.

About the Author

Rick Kreuser

Technical Director, SSG AI CoE, Lenovo

Richard (Rick) Kreuser, the Technical Director of Lenovo’s AI CoE, leads the development of Service offerings and customer engagements for AI/ML, Generative AI, and Digital Transformation. Rick, an author of Lenovo’s Responsible and Secure methodologies, is a frequent speaker to analysts and client executives, partnering to share perspectives on AI Strategy, Innovation with AI, and Responsible and Secure AI. Over his 35-year career, he has led the development and execution of a number of global, $100M+ technology-driven transformation programs at scale, as both a consulting leader and industry executive.
