How to build an AI-first organization | Ethan Mollick
Most companies are using AI to cut costs. Ethan Mollick argues that the biggest mistake companies make is thinking too small. In the first episode of Strange Loop, Wharton professor and leading AI researcher Ethan Mollick joins Sana founder and CEO Joel Hellermark for a candid and wide-ranging conversation about the rapidly changing world of AI at work.
The Origins of AI and Its Evolution
Ethan Mollick's journey into AI began at MIT, working with pioneers like Marvin Minsky. During an "AI winter," when interest in AI was low, the focus was on complex schemes to create intelligence, such as observing every action of a baby or Minsky's "society of mind." Ironically, the actual breakthrough came from simply feeding a lot of language into a learning system, leading to large language models (LLMs). While many early technical ideas proved incorrect, core philosophies, like the debate between augmenting human intelligence (Engelbart) and replacing it (Minsky), are now very relevant.
Today, we face similar questions about AI's role. The Turing test, once a benchmark, is now being passed by advanced AIs like GPT-4.5, with around 70% of judges picking the AI as the human. This raises questions about what intelligence truly means and how humans fit into an AI-driven world. Mollick suggests that AGI (Artificial General Intelligence) will arrive gradually rather than as a sudden event. He proposes a practical test for AGI: can an AI go out and make money, or discover new knowledge and produce results?
Redesigning Organizations for the AI Era
Traditional organizational structures, built for a human-only workforce, are breaking down. The first organizational chart, created in 1855 for the New York and Erie Railroad, solved the problem of coordinating vast traffic in real time. Later innovations, like Henry Ford's production lines and agile development, all assumed human intelligence was the only intelligence available. Now, with AI, this is no longer the case. Companies need to rethink their organizational design from the ground up.
Mollick emphasizes that leaders must decide whether to pursue augmentation (fewer people doing more impressive work) or replacement (more people doing ever more work to take over the world). He worries that many companies view AI as an efficiency tool, leading to cost-cutting and staff reductions. This approach is risky because:
Internal Expertise: Nobody knows how to deploy AI in your organization better than your own people. If employees fear job loss, they won't share efficiency gains.
Growth vs. Lean: In a world of exploding performance and productivity, being small and lean might be a mistake. Just as Guinness expanded globally with steam power, companies should aim for growth, not just cost savings.
Augmenting Human Intelligence
AI's impact on human intelligence has been counterintuitive. Historically, it was thought AI would automate mundane tasks first, then knowledge work, and finally creative tasks. However, AI has excelled at creative and knowledge-based tasks, while repetitive tasks have been harder to automate. This is because our jobs are bundles of many different tasks. AI can augment human intelligence by:
Handling Less Preferred Tasks: AI can take over tasks we are less good at or dislike, like grading papers or providing basic counseling support.
Boosting Performance: AI can help us improve what we already do well. For example, in prompt engineering, sometimes you need to justify to the AI why it should perform a step, rather than just giving a command.
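To make the prompt-engineering point concrete, here is a minimal sketch of a prompt that explains why a step matters instead of just commanding it. It assumes the OpenAI Python SDK and a gpt-4o model purely for illustration; any chat-model client would work, and the task and wording are hypothetical.

```python
from openai import OpenAI  # illustrative choice; any chat-model client works

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A bare command gives the model no sense of what "good" looks like:
#   "Summarize the attached customer interview in five bullet points."
# The same task, with the reason for each constraint spelled out, tends to
# keep the model focused on what actually matters.
justified_prompt = (
    "Summarize the customer interview below in five bullet points.\n"
    "Why five: the summary goes into a weekly exec digest with a strict length limit,\n"
    "so prioritize decisions and blockers over background detail.\n"
    "Why it matters: the exec team uses these bullets to pick which product areas\n"
    "get follow-up research next sprint.\n\n"
    "Interview transcript:\n<paste transcript here>"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": justified_prompt}],
)
print(response.choices[0].message.content)
```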
Mollick notes that we are moving towards a world of abundance, where AI can generate many options, and human taste and curation become crucial. While AI might not be able to write an entire paper perfectly, it can assist in parts, allowing humans to intervene where needed, much like guiding a PhD student.
Navigating AI's Jagged Frontier
AI's capabilities are "jagged" – sometimes brilliant, sometimes flawed. This makes deployment tricky. Narrow agents, like those used for legal or market research, are already very good at specific tasks. However, generalized agents are still developing. Mollick suggests that organizations should both wait for the frontier to advance and build around current limitations. Investing too much in solving today's "jaggedness" might lead to legacy systems that are obsolete when models improve.
Three Ingredients for Successful AI Adoption
Mollick identifies three key elements for making AI work in an organization:
Leadership
Strategic Vision: Leaders must grapple with fundamental questions about the organization's purpose, structure, and experimental approach to AI. This sets incentives and provides a clear vision for employees.
Personal Use: Leaders who actively use AI systems themselves drive transformation more quickly. For example, JPMorgan's success with AI is partly attributed to Mary Erdoes's public use of the technology.
Crowd
Access and Incentives: Provide everyone with access to AI tools and create incentives for them to share their discoveries. Many employees use AI but don't report it due to fear of job loss or a desire to keep their efficiency gains.
Identifying Talent: A small percentage of employees will be exceptionally good at using AI. These individuals should be identified and become the core of the organization's AI development efforts.
Lab
R&D and Experimentation: Extract promising AI use cases from the crowd and conduct R&D to turn them into products or agents. This involves benchmarking and experimenting with how basic prompts can evolve into agentic systems.
Meaningful AI Use Cases
Mollick highlights several areas where AI is already delivering significant value:
Individual Augmentation: Individuals working with AI, especially when sharing information, generate better ideas and improve overall performance.
Accelerating Cycles: AI can rapidly prototype and develop ideas. For example, an AI can generate 25 ideas, create a rubric to test them, simulate user reactions, refine ideas, and even build a working prototype in minutes (a sketch of this kind of pipeline follows this list).
Research Agents: AI is becoming very good at specialized research tasks, such as legal, accounting, and market research.
Knowledge Management: AI can help with summarization, translation, and providing timely advice.
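The "accelerating cycles" item above is really a chain of model calls rather than a single prompt. The sketch below shows one way such a pipeline could look, assuming the OpenAI Python SDK; the stage prompts, the example problem, and the gpt-4o model name are illustrative assumptions, not Mollick's actual workflow.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """One chat-completion call; swap in whichever model and client you use."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

problem = "Reduce onboarding drop-off for a B2B analytics product."

# 1. Generate many options cheaply.
ideas = ask(f"Generate 25 distinct product ideas for this problem: {problem}")

# 2. Build a rubric, then use it to score and shortlist the ideas.
rubric = ask(f"Write a five-criterion rubric for judging ideas that address: {problem}")
shortlist = ask(
    f"Score each idea against the rubric and keep the top three.\n\n"
    f"Rubric:\n{rubric}\n\nIdeas:\n{ideas}"
)

# 3. Simulate user reactions, then refine.
feedback = ask(f"Role-play three target users reacting to these ideas:\n{shortlist}")
refined = ask(f"Refine the ideas using this feedback:\n{feedback}\n\nIdeas:\n{shortlist}")

# 4. Turn the strongest idea into a spec a coding agent could prototype from.
spec = ask(f"Write a one-page prototype spec for the strongest idea here:\n{refined}")
print(spec)
```

Each stage feeds the previous stage's output forward, which is what compresses an ideation-to-prototype cycle into minutes of model time.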
The Future of Work and Learning
AI's impact on the economy could lead to a renaissance of abundance, where everyone can code, do science, and explore various disciplines. However, societal bottlenecks, like regulatory environments, will need to adapt. The question of human augmentation versus replacement remains central.
Mollick believes that management roles and expert roles will become more valuable. Experts, especially those in the top 2% of their field, will continue to outperform AI. While AI can boost the performance of lower performers, it's crucial to consider how junior employees will gain the deep expertise traditionally acquired through apprenticeship if AI automates their foundational tasks. This raises concerns about the future pipeline of senior talent.
Universities will need to rethink how they teach. While core subjects may remain, the methods will change. AI tutors can accelerate learning, and students are already using AI to complete assignments. Mollick's own classes are 100% AI-based, with students using AI simulations, mentors, and tools to build working products and receive feedback.
Organizational Structure for AI
Mollick is cautious about the role of a Chief AI Officer, as no one truly has extensive experience in this rapidly evolving field. He argues that organizations already possess the necessary internal expertise. Instead of embedding AI specialists in every team, he suggests linking the "crowd" (employees using AI) with a "lab" (a dedicated R&D unit).
Key Takeaways:
Incentives Matter: Companies need to create incentives for employees to embrace AI, such as guaranteeing job security or offering rewards for automating tasks.
Clarity of Vision: Leaders must articulate a clear vision for how AI will transform jobs and the organization's future.
R&D Mindset: In the early stages of AI adoption, focus on R&D and experimentation rather than strict KPIs, which can lead to narrow, cost-cutting approaches.
The Interface for AI and Human Collaboration
Mollick envisions an "agent-native interface" for AI collaboration, built around teams and maintaining state across tasks, rather than co-pilots embedded in individual documents. He believes that AI's ability to bring together many threads of work is more interesting than its automation capabilities.
Regarding AI's ability to generate high-level research, Mollick notes that current models, when connected to data sources, can significantly reduce hallucination rates. He believes that building a research system that produces interesting work is more a matter of "willpower" than technological limitation.
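As a rough illustration of what "connected to data sources" can mean in practice, here is a minimal retrieval-grounded sketch, again assuming the OpenAI Python SDK; the toy keyword retriever and the two example documents stand in for a real search index or database.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder corpus; in practice this would be an enterprise search index.
DOCUMENTS = [
    "Q3 revenue grew 12% year over year, driven by the enterprise tier.",
    "Churn in the SMB segment rose from 3.1% to 4.4% in Q3.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Toy keyword-overlap retriever; swap in vector or enterprise search."""
    words = set(question.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def grounded_answer(question: str) -> str:
    sources = "\n".join(f"- {d}" for d in retrieve(question))
    prompt = (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say so.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(grounded_answer("How did SMB churn change in Q3?"))
```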
Best and Worst Case Scenarios
In a best-case scenario, AI could lead to more satisfying jobs by reducing grunt work and increasing productivity. People would work less, produce more, and add their unique human touch to key elements. This scenario assumes AI gets five to ten times smarter but doesn't reach a level where human work becomes entirely obsolete. However, policy decisions are crucial to address potential employment impacts and ensure retraining opportunities.
Mollick disagrees with the belief that AI is an unstoppable force that will act upon us. He emphasizes that we have agency and can make choices about how AI is used and shaped. He also points out that many technical AI experts don't fully understand the messy reality of how organizations work. While AI can accelerate performance, human elements like union protections and the complex processes of industries like Hollywood will continue to shape its deployment.
Prompting for Decisions
If Mollick were to prompt an AI to make his decisions, he would provide it with extensive context about himself and his values. He would then ask the AI to:
Generate several possible options, including radical ones.
Compare these options and simulate outcomes.
Have an "expedient" and "thoughtful" version of himself argue over the best path.
Provide pros and cons for each option and select the best one.
This approach, similar to grounding an AI in a specific person's writing (like Steve Jobs' principles), allows the AI to develop a unique point of view rather than an average of internet knowledge.
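As a concrete illustration of that structure, here is a minimal prompt template following the steps above; the context fields and wording are hypothetical, not Mollick's actual prompt.

```python
# Hypothetical template for the decision-making structure described above.
DECISION_PROMPT = """\
Context about me: {background}
My values and constraints: {values}
The decision I am facing: {decision}

1. Generate several possible options, including at least one radical option.
2. Compare the options and simulate the likely outcome of each.
3. Have an "expedient" version of me and a "thoughtful" version of me argue
   over which path is best.
4. List pros and cons for each option, then pick one and explain why.
"""

prompt = DECISION_PROMPT.format(
    background="Professor and author; most of my time goes to teaching and research.",
    values="Do not trade long-term credibility for short-term reach.",
    decision="Whether to take a one-year industry sabbatical.",
)
print(prompt)
```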
The Dangers of Engagement Optimization
Mollick expresses concern about AI systems being optimized for engagement, similar to social media. He notes that some models are already becoming more casual and chatty, even flattering users. While this can make AI more "sticky," it risks recreating the harmful dynamics that engagement optimization produced in social media. He believes this trend is inevitable and raises significant questions about its societal impact.
Avoiding the Trap of Enterprise AI KPIs
Mollick strongly advises against using strict KPIs (Key Performance Indicators) in the early R&D phase of AI adoption. Maximizing for a specific metric, like the number of documents produced or lines of code written, can lead to unintended consequences and undermine broader goals. He argues that organizations are not built for the KPIs needed to measure AI's true impact. Instead, companies should adopt an R&D mindset, focusing on exploration and innovation rather than immediate, measurable cost savings, which often lead to job cuts and discourage internal AI adoption.