Let’s not leave our humanity in the rearview mirror
I’ve spent much of my career working alongside some of the most creative and technical minds, and I’ve seen firsthand how transformative new technologies can be. AI is no exception — it’s a wonder, a tool with the potential to tap into the collective intelligence of humanity. But as we embrace AI’s potential, we also need to think carefully about how it shapes the creativity, agency, and growth of knowledge workers — especially those just starting out. A recent trip to Buenos Aires got me thinking about this balance.
In BA, we relied on Uber to get around. It felt like a win-win-win: we were sure about where we were going, the drivers were sure about how to get there, and the company managed transactions while optimizing ride distribution for efficiency.
Some drivers had grown up in Buenos Aires and knew the city inside out. They used in-car GPS to avoid traffic but often relied on their own instincts — taking alternate routes when they had a hunch about a better way around an unexpected delay. Newer drivers, by following the GPS, could do a perfectly acceptable job right from day one.
I especially enjoyed talking to seasoned drivers in my broken Spanish. They had stories — about the barrios we passed through, historic events, the local vibe, favorite al corte pizza spots, where they grew up, or other jobs they’d held. Those were the memorable rides. In contrast, newer drivers (and those less inclined to suffer chatty visitors) stuck to the basics: getting us from point A to point B.
After several trips, I found myself wondering: is AI giving these drivers a highly efficient tool to do their jobs better, or, more darkly, have they become slaves to an algorithm?
A quick internet search turns up plenty of data pointing to low job satisfaction among Uber drivers. A 2024 poll of 2,200 Uber drivers found that 71% were extremely or very dissatisfied with their driving experience. Contributing factors include algorithmic control over how jobs are distributed, a lack of autonomy stemming from manipulation and gamification, and a sense of exploitation, where drivers feel like cogs in a machine. This loss of agency can have profound psychological effects: diminished intrinsic motivation, reduced job satisfaction, and a lack of purpose. Drivers may also experience heightened stress and anxiety from feeling out of control or trapped, leading to helplessness, burnout, distrust, and cynicism.
How does this “win-win-win” scenario hold up for knowledge workers using AI? I hear several ideas circulating about AI’s role in knowledge work:
- AI can deliver ideas that are “good enough,” but great ideas require a human leap in creativity and intuition.
- Some knowledge workers feel embarrassed to admit they relied on AI rather than their own thinking, fearing it might undermine their expertise or diminish their credibility.
- Others believe it’s foolish — or even irresponsible — not to use every tool available, including AI, to meet company demands and remain competitive.
- There’s a growing concern that over-reliance on AI could lead to skill erosion, where workers lose the ability to innovate or problem-solve independently over time.
- Some worry that AI might diminish the value of deep, exploratory work, favoring speed and efficiency over depth and originality.
These attitudes reveal a spectrum of reactions — from cautious optimism to quiet anxiety.
Let’s look at these attitudes from the perspective of seasoned versus neophyte knowledge workers.
Seasoned thinkers understand how to elevate a “good enough” idea into something great. They know where their own creative thinking ends and where AI support begins. Having developed confidence in their skills, a clear sense of agency, and ownership of their work, they are less likely to feel diminished by AI. Instead, they view it as an empowering tool that complements their expertise, allowing them to focus on higher-order thinking and innovation.
Neophyte knowledge workers may struggle with these distinctions. Like a new driver relying entirely on GPS, new thinkers may not fully recognize that “good enough” is not “great,” and they may lack the confidence to push beyond AI’s recommendations. They might hesitate to admit how much they depend on AI, fearing judgment or lost credibility. Over time, this reliance could mirror the experience of Uber drivers: a loss of agency, empowerment, and purpose. Rather than AI supporting their growth, neophytes may come to feel unprepared or even non-essential, risking a downward spiral in confidence and creativity.
Most concerning to me as an educator is skill erosion. Over-reliance on AI could produce a future workforce that lacks the critical thinking, problem-solving, and creative skills necessary to innovate independently. I worry about this in my Innovation & Design classes, where students risk filling the space formerly occupied by imagination with an LLM.
In the messy uncertainty of the innovative environments I’ve worked in, the perfection and polish of AI can become the enemy of the good. Prioritizing speed and efficiency over depth and originality risks diminishing the motivation and fulfillment workers draw from their craft.
As we integrate AI into our workflows, we must ask: are we co-creators with these systems, or are we simply operators executing tasks dictated by algorithms? The difference is crucial. When workers lose agency, creativity, and purpose, the win-win-win promise of AI begins to unravel.
To avoid this, companies must adopt AI strategies that prioritize human-centric values. This requires thoughtful integration — providing training, fostering transparency, and ensuring that AI is a tool for growth, not a mechanism for control.
The challenge of our time is to ensure that the future of knowledge work remains a space where our humanity thrives.