AI First, Humans Always: Why the Future Demands Both
- Alex Salkever

- May 5
Your company’s most valuable asset isn’t AI. It’s the people who know how to use it well.

In recent weeks, two high-profile tech CEOs—Luis von Ahn of Duolingo and Tobi Lütke of Shopify—issued internal memos urging employees to put artificial intelligence at the center of their workflows. The gist of both messages: before asking a human to do something, ask whether AI can do it instead.
It’s a provocative mandate. And on the surface, a logical one. Generative AI is evolving rapidly, and the pressure to harness it productively is immense. For Duolingo, the shift has already paid off. The company has expanded course offerings dramatically by using AI to generate and adapt lesson content (although former writers for Duolingo assert this has come at a cost in accuracy and originality). Shopify, meanwhile, now asks managers to justify any new hire with proof that AI couldn’t do the job instead.
The letters reflect a worldview that is technologically urgent but operationally imprecise, emotionally tone-deaf, and philosophically incomplete. They gesture at transformation but offer little in the way of an actual transformation strategy. Worse, they risk alienating the very people whose creativity, adaptability, and judgment will ultimately determine the success and impact of any AI transition and its business case. The missives from these highly successful CEOs also betray a misunderstanding of the strengths of current AI, which can do a very good job at discrete tasks but is still lousy at taking over entire jobs.
Finally, what works at a language-app company is unlikely to work at a construction company, an e-commerce company, or a toy maker. AI is not cookie-cutter; each organization will have its own AI journey and will find bespoke ways to leverage AI to improve its business. To be fair, these CEOs don’t claim to have a universal solution beyond their own businesses, but the broader business community appears to have adopted their words as something close to a new Gospel of AI-First.
When Vision Becomes Vagueness
AI-First is a laudable principle. Who would disagree that cutting-edge technology should be embraced? That said, what does it mean, in practice, to “ask if AI can do it first”? Are employees expected to run every task through an o3 query or an internal automation script? What constitutes “can do”: bare-minimum competence, or parity with human quality? What counts as successful AI integration, and who gets to define that?
These may sound like implementation details, but they’re foundational. Without operational clarity — specific guidelines, supported tools, and accountability systems — an “AI-first” culture can devolve into disjointed trial-and-error. Different teams will adopt different tools, create uneven results, and potentially make costly mistakes while trying to divine the CEO’s intent.
Perversely, this can also introduce considerable security and operational risks. Imagine a small manufacturing firm with a proprietary process for building efficient magnets that wants to improve its marketing collateral. The marketing team uploads key sales materials that are not meant for public consumption to an unauthorized LLM.
The LLM provider may claim that it does not train on data or materials uploaded by customers, but, as evidenced by the continued problem of hallucinations, AI companies still don’t fully understand how their own systems work. Suppose the materials enter the AI’s training set indirectly or implicitly; a competitor asking about small magnet technology could be stunned to get suggestions that appear to reveal key details of the magnet maker’s product specs.
We know this kind of scenario is plausible because of the many careless uses of AI by people who should know better, including lawyers who keep filing AI-drafted briefs despite clear evidence that hallucinated case citations are a real problem. We also know that AI firms are training models on copyrighted materials across a wide range of domains. So information pulled from an LLM and used in a product could end up putting your firm in court unless you know its exact provenance, which is why some LLM providers now indemnify users, though few test cases have made it through the courts.
The Human Cost of AI Absolutism
The second, and more serious, issue is the message these letters send to employees: you may no longer be necessary.
Even if unintentional, that’s the subtext. When the CEO asks every team to prioritize AI before turning to each other, it signals that human contribution is now the fallback option. For full-time employees, that undermines morale. For contractors, like many at Duolingo who reportedly saw their roles phased out in the wake of this shift, it sounds like a pink slip in slow motion.
This is where leadership tone matters. A good technology strategy isn’t just about what you build. It’s about how you carry your people through change. And these memos, for all their ambition, fail to reassure or inspire. They offer no acknowledgment of fear, no emphasis on human skill, and no plan for upskilling or adaptation. In their eagerness to usher in the AI future, they risk eroding the human foundation they still need. In every organization we spend time with, building a healthy process for product and technology development is critical to long-term prospects.
A good leader will lay out a clear path for employees, explaining how AI will affect them and how they can use AI, or what they can deliver, to help the company and better serve customers as AI adoption grows. At the same time, a good leader will understand that any AI advantage is probably fleeting. When DeepSeek released its chatbot app after the R1 model was published, the app rocketed up the app-store charts. A few weeks later, it fell back to Earth. ChatGPT has retained its top position because it has design and UX advantages that make it sticky.
Ethan Mollick’s Hybrid Alternative
Wharton researcher Ethan Mollick champions what he calls a “cyborg” model of AI use. The idea is simple: AI should augment humans, not replace them. Mollick and others have shown that when people use AI as a partner rather than a substitute, productivity increases. Brad Smith, president of Microsoft, and Reid Hoffman, co-founder of LinkedIn, both agree, and Microsoft claims it is seeing signs of AI-driven productivity gains both among its own employees and in the ranks of customers who increasingly pay $30 a month for access to its AI tools. Most recently, a massive report by researchers from Harvard, GitHub, and Microsoft reached similar conclusions. Mollick’s own work on the “Jagged Frontier” with BCG consultants is likewise instructive in its finding that general knowledge workers benefit tremendously from structured uses of specific AI tools.
The research has also often found that work quality improves. And crucially, people feel more empowered, not less. They’re not just executing tasks; they’re exercising more agency, applying better judgment, and evolving the way they work rather than being told what to do. And, yes, letting the AI do what it’s better at (medical image interpretation, for example, where AI can frequently surpass humans) is important and necessary, although predictions that radiology jobs would disappear have thus far not come true.
The hybrid model also does something the memos don’t. It centers the future around human potential. It says: We believe in you. We trust you to learn new tools, wield them wisely, and make decisions machines still can’t. It makes room for human creativity, nuance, empathy — all the things AI mimics but doesn’t possess.
This is why, at Techquity, we have seen the greatest success in AI adoption at firms that put a non-technical leader in charge of the process. Such leaders tend to view adoption not as an engineering problem or a zero-sum effort to reduce headcount, but as a way to make already talented workers more productive.
A Smarter Path to AI Adoption
The Duolingo and Shopify memos aren’t wrong to push for faster adoption. AI is going to completely change the nature of work. Companies that move too slowly risk falling behind. But speed without scaffolding is a recipe for chaos. And disruption without care is a fast track to resentment. If leaders want to build AI-first companies, they need AI-smart strategies. That means:
- Setting clear operational rules for how, when, and where AI should be evaluated and used
- Empowering employees with training, experimentation budgets, and AI partners—not just pressure
- Communicating transparently about the risks, rewards, and ethical boundaries of automation
- Championing augmentation over replacement whenever possible
- Defining success clearly so employees know what “winning” looks like and where they fit into the future of their organization
And above all, it means remembering this: your company’s most valuable asset isn’t AI. It’s the people who know how to use it well.
If you’re a leader drafting your own “AI-first” memo, consider this your challenge: not just to be visionary, but to be human. Not just to use AI, but to build a culture where it thrives with your people, not instead of them.