Why AI Readiness Must Take into Account People, Technology, and Processes

Federal News Network's AI & Data Exchange 2025

By Tracy Jones

The IT world has been talking nonstop about artificial intelligence ever since ChatGPT burst onto the scene in 2022. Yet few organizations are fully ready to take advantage of AI, particularly generative AI.

That’s because the term “readiness” is broadly encompassing, said Tracy Jones, associate director of the AI and data practice at Guidehouse.

“When I talk about AI readiness, I’m talking about those foundational steps — getting the people ready, the culture ready, the data ready and the technology ready for AI,” Jones said during Federal News Network’s AI and Data Exchange.

Computing, storage and network capacity all count toward readiness, to be sure. “But we really have to home in on … human transformation to make the AI work the way we need it to work for us,” she said. That involves “thinking about changing our mindsets about how we approach our work, what we do, how we change our environments.”

And obviously, a huge part of doing all that involves humans, Jones said.

Taking stock of current AI readiness

Agencies can boost their efforts to ready their human and technical environments for AI by using a structured assessment framework, Jones said. A framework, like one offered by Guidehouse, can help agencies determine their current readiness as well as gaps in five critical areas:

  • Workforce capabilities and skills
  • Cultural readiness and whether the organization has had success with large transformations
  • Implementation strategy
  • Governance, with structures the organization can borrow and adapt from its existing technology and data governance systems
  • Practical concerns for acquiring AI

Where policy and people meet in AI readiness

On the human capital front, Jones acknowledged the importance of training. But she said that before training, an agency should establish an AI usage policy, “which dictates at the very highest level the organization’s expectations for how you’re going to use AI and guardrails.” That way, managers can ensure training accurately reflects policy.

Mapping out the skills and capabilities employees will need for AI “and aligning that to your existing skill sets within your organization” is equally important, Jones added. That can help focus training where it’s most needed.

At the same time, agency leadership should get a conversation going broadly within the organization about AI readiness, she advised. Jones said to include voices from all functional disciplines: not just IT or data management but also the business and program groups.

“The readiness conversation should really be taking place across the organization with different folks,” she said. “AI is multifunctional and multifaceted, so you want to have conversations with legal, with change management, with the business mission side.”

AI readiness and success are “about sharing best practices, getting people excited about the time they’re saving,” Jones said, adding that training without preparing the organization for the cultural reality of AI “to me is kind of deadly.”

Ensuring technical AI readiness

On the technical side of AI readiness, Jones advised thinking fundamentally about data. Only with sufficient amounts of quality data will projects prove successful.

“Data is huge,” Jones said. As for volume, “we see a lot of organizations have these great piloting ideas, and they start off very excited.”

Then, reality can set in: “Once they do the pilot, they realize they don’t really have the volume of data necessary to do a large scale of that same solution. So volume is important.”

Beyond quantity and quality come appropriateness for use and the protection of confidentiality and privacy.

“Also, it’s really important to understand where the data is coming from and what’s included and not included in the data,” Jones said. By understanding the completeness or comprehensiveness of data for training models, the organization can better avoid biases or skewed results from AI algorithms.
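Jones’s point about understanding what is and isn’t included in the data can be made concrete with a quick profiling pass before any model training. A minimal sketch in Python using pandas (the column names and sample records are hypothetical, purely for illustration; a real check would run against the agency’s own sources):

```python
import pandas as pd

# Hypothetical sample of training records; in practice this would be
# loaded from the agency's own data sources.
records = pd.DataFrame({
    "region": ["east", "east", "west", "east", None, "east"],
    "outcome": [1, 0, 1, 1, 0, 1],
})

# Completeness: share of missing values per column.
missing_rate = records.isna().mean()

# Representation: how evenly the categories are covered. A heavily
# skewed split is a warning sign that a trained model may behave in a
# biased way toward underrepresented groups.
region_share = records["region"].value_counts(normalize=True)

print(missing_rate.to_dict())
print(region_share.to_dict())
```

Even a simple report like this surfaces the two failure modes Jones describes: columns that are too incomplete to rely on, and categories so underrepresented that results will skew.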

The agency’s technical infrastructure must also support AI readiness activities.

“You definitely need to have the right architects to help look at the platform stack that you’re putting together,” Jones said. “You need to have folks that understand the different cloud and solutions that you’re putting in place.”

Particularly important is the design of the data infrastructure, in terms of storage locations and data flow patterns, especially because data from many sources and many owners typically comes together for AI projects.

Beyond that, Jones said, people leading AI projects need to possess the ability to translate the technology into business benefit terms.

Finally, governance plays an overarching role in all elements of AI readiness. Jones said test use cases for AI are useful not just for reviewing outcomes but for testing the governance policies and fully defining who is responsible for what.

“The use case is a great place to start testing out governance,” Jones said, “to start really thinking through decision criteria for moving something forward, looking at value, feasibility, risk and decision-making.”

