The conversation around AI has accelerated faster than most organizations can metabolize.
Every week brings new tools, new promises, and new anxieties. Entire roles are questioned overnight. And for those of us working in design and research, it can be difficult to separate what is genuinely transformational from what is simply noise.
But beneath the hype, a deeper shift is clearly underway.
Generative AI is not just changing how work gets done. It is reshaping how organizations create value, how decisions are made, and where human judgment truly matters. That makes this moment especially important for design and research leaders, not because our roles are disappearing, but because they are being fundamentally redefined.
This is not a tooling shift. It is a leadership shift.
From Capability to Value Creation
The economic potential of generative AI is often cited, and for good reason. Research from McKinsey estimates that generative AI could add between $2.6 trillion and $4.4 trillion to the global economy annually. That figure alone signals the scale of what is at stake.
But numbers like these can be misleading if taken at face value.
They suggest that value will emerge naturally from adoption. In reality, value only materializes when organizations are willing to rethink how work, strategy, and accountability are structured now that AI exists.
Most companies today are still operating in what could be described as a "Plus-AI" mindset. They layer AI on top of existing processes, automate familiar tasks, and optimize workflows designed for a pre-AI world. This delivers incremental efficiency, but it rarely produces structural advantage.
A smaller group of organizations is doing something different.
They are rethinking decision-making, talent models, and governance with AI as a core assumption. Research from Accenture shows that only 12% of organizations are currently mature enough to capture significant AI-driven value. These so-called "AI Achievers" share a defining characteristic: 83% of them have formal senior executive sponsorship for AI initiatives.
McKinsey’s own research reinforces this. CEO-level oversight of AI governance is one of the factors most strongly correlated with measurable bottom-line impact.
When leadership treats AI as core infrastructure rather than experimentation, the organization adapts accordingly. When it does not, AI remains fragmented and strategically shallow.
This distinction has direct consequences for design and research.
Why Design’s Role Is Changing, Quietly but Fundamentally
AI is powerful, but it is also abstract.
Left on its own, it optimizes what already exists. It scales patterns without understanding their relevance. It produces outputs without context. In that sense, AI is less a solution than a force that requires framing and intent.
This is where design and research move from support functions to strategic ones.
Historically, many design and research teams have been positioned downstream, validating decisions after direction has already been set. In an AI-driven environment, that positioning limits impact. The most valuable work now happens earlier. Deciding what should be built, why it matters, and who it serves becomes the core contribution.
This shift is already visible across leading organizations. Researchers are increasingly expected to operate as strategic partners, shaping opportunity spaces and informing product direction before ideas solidify into roadmaps. As Janaki Kumar, Chief Design Officer for Global Banking at J.P. Morgan, has argued, research must move beyond evaluation and actively drive product strategy.
In an AI-enabled world, research is not just about testing ideas. It is about helping organizations decide which futures are worth pursuing.
Augmentation, Not Replacement
Much of the anxiety around AI focuses on creativity. In practice, what AI threatens most are the parts of the job that have quietly crowded out creative and strategic work over time:
- Data preparation
- Manual synthesis
- Repetitive documentation
- Process-heavy execution
These activities are necessary, but they are rarely where insight is created.
AI excels at absorbing this work. And when it does, something important happens: designers and researchers regain cognitive space. Not just time, but attention.
This is where human strengths become more visible, not less.
Empathy, interpretation, intuition, and sensemaking are difficult to automate. Reading subtle shifts in tone during an interview. Connecting weak signals across markets and cultures. Reframing a problem rather than optimizing a solution. These are precisely the skills that grow in importance as AI becomes more capable.
AI changes the division of labor. It does not remove the need for judgment.
Why Context Still Beats Capability
One example illustrates this clearly.
A product team working with a major sports league set out to build an app showcasing a century of historical data. The original concept was technically impressive: a powerful interface for navigating statistics at scale.
Then researchers spent time with fans.
Not through dashboards or surveys, but in real social settings, watching how people actually engaged with the sport. The insight was immediate: Fans did not want to explore data. They wanted to use it socially, to settle arguments, prove points, and perform expertise in real time.
That single insight reframed the entire product. What could have become a niche analytical tool turned into something culturally relevant and widely adopted.
No AI system would have surfaced that on its own.
But AI can, and should, make room for that kind of discovery.
This is the difference between data and meaning. And it is where design and research continue to create disproportionate value.
Building AI-Native Design Organizations
The biggest barriers to becoming AI-native are rarely technical. They are cultural.
According to recent workforce studies cited by McKinsey, nearly half of employees say they lack the training and support needed to confidently adopt generative AI. This gap slows progress far more than tooling limitations.
Organizations making meaningful progress tend to focus on three areas:
- Culture: They normalize experimentation and learning. Teams are given explicit time and permission to explore AI without tying every initiative to immediate performance metrics.
- Talent: They invest heavily in upskilling existing employees, not just hiring specialists. Accenture research shows that organizations combining reskilling with targeted hiring outperform those that rely on external talent alone.
- Governance: They treat governance as an accelerator rather than a constraint. Clear, embedded guidelines reduce uncertainty and allow teams to move faster with confidence, especially as risks around bias, privacy, and reputation become board-level concerns.
None of this is dramatic. All of it compounds.
Looking Ahead: Responsibility Scales with Capability
AI is evolving rapidly toward greater autonomy and multimodality. Systems are beginning to plan and execute multi-step workflows independently and to reason across text, images, audio, and video simultaneously.
As these capabilities mature, the role of design and research leadership becomes more consequential.
The central question will no longer be whether something can be built.
It will be whether it should be built.
At that point, design and research are no longer just disciplines. They become stewardship roles, shaping how technology enters people’s lives and how organizational values are encoded into systems at scale.
As Janaki Kumar has noted, design is not simply about making things. It is about transformation.
Closing Reflection
The future of AI will not be defined solely by models, benchmarks, or technical breakthroughs.
It will be defined by the quality of human judgment that surrounds it.
For design and research leaders, this moment is not about defending relevance. It is about stepping into greater responsibility. Shaping intent earlier. Influencing strategy more directly. Ensuring that as our systems grow more powerful, they remain grounded in human meaning.
AI will continue to evolve. The more important question is whether we are willing to evolve our roles, from executors to shapers, from validators to authors, alongside it.
That is the real work of the augmented creative.
Thank you for reading!
If this resonates with you or sparks ideas for collaboration, let’s connect.
👉🏻 Send me a message!
