The Interns: AI in the Workplace

A piece of me dies when I hear the words “AI use cases.” The phrase conjures images of managers and consultants scrambling to discover applications for a technology neither group understands. It’s like a corporate treasure hunt where the gold is buried on page 43 of a dense PowerPoint deck.

Instead of focusing on use cases, I ask managers, “If you hired an intern, what work would you have them do?” In many ways, AI systems like ChatGPT behave like human interns. They’re book smart, eager to please, and cheap. They’re also easily confused, literal to a fault, and demonstrate questionable judgment.

If AI companies offer a near-limitless supply of interns, why don’t we see them everywhere? Most of us would love to have an extra pair of hands. Yet, for all the commotion, enterprise AI adoption is slow. One analysis found only five percent of Fortune 500 companies had a ChatGPT Enterprise license.

One explanation is that the AI interns are still being onboarded. Companies haven’t given them access to their data and systems. It takes time to figure out how to apply new general-purpose technologies.

I fear adoption rates are slow for a different reason. Most managers suck, and the same shortcomings that frustrate humans are spilling over to AI.

Horrible Bosses

Think about the managers you’ve had in your career. How many of them were truly great? I don’t mean kind people. That’s table stakes. How many of them captured your full potential as an employee?

We can debate the exact recipe for a great manager. We all have different preferences, and what works for me may not work for you. With that caveat, here are the qualities I look for in a manager:

  • Vision: Knows where they’re going and how each person’s work supports the team’s objectives

  • Clarity: Communicates expectations clearly and fosters an open dialogue between team members at all levels

  • Feedback: Provides actionable feedback regularly, always in the spirit of helping individuals learn and grow

  • Judgement: Demonstrates a deep understanding of the work and the broader context in which it’s being performed

  • Autonomy: Articulates what they want but doesn’t dictate how it must be done

If those qualities sound familiar, perhaps you’ve read Daniel Pink’s book, Drive: The Surprising Truth About What Motivates Us. In it, he talks about the importance of autonomy, mastery, and purpose when leading people in the knowledge economy. My concept of a great manager broadly aligns with the qualities in Pink’s book.

In the name of pseudo-science, I created a list of every manager I had in my career — 44 in total. I then categorized the managers into three buckets.

My managers — the great, the fine, and the truly terrible

The “terrible” managers couldn’t even meet my minimum threshold for psychological safety. I spent more time worrying about the person I worked for than the actual work.

The “fine” managers didn’t interfere with my work, but they didn’t do much to enhance it or develop my skills either. I struggled to remember many of these people.

The “great” managers met the criteria I described. I did my best work under these managers, and each of them taught me lessons that stayed with me throughout my career. Working for them wasn’t always easy, but I was fully invested in my work.

How did we end up with this dysfunctional system? It’s not like I worked for terrible companies. I began my career at GE when the company was known for producing exceptional managers. I spent 13 years at McKinsey, a firm with a record of churning out C-suite executives. If anything, my “great-to-terrible ratio” should be higher than most people’s.

The Beatings Will Continue…

I have a theory. Humans, like all animals, are hardwired for survival. Our lives are guided by an unconscious desire to accumulate resources, build social connections, and pursue outcomes that increase the odds of our genetic sequence surviving over time.

Our survival instinct manifests itself as cognitive biases. For example, we exhibit loss aversion — overvaluing losses and undervaluing gains. Keeping what we have requires less energy than finding new stuff. Loss aversion is an artifact of evolution that sticks with us even though it sometimes works against us in the modern world.

More than a decade ago, I worked on a consulting engagement for a large consumer electronics retailer. The project gave me access to the company’s financials. Over 90 percent of the gross margin came from warranties and product add-ons. The company didn’t make money selling expensive electronics. It made money by convincing customers those products might break or be less valuable without add-ons (e.g., a case to protect your new smartphone).

Organizations exploit cognitive biases constantly. Why would we expect the managers who work in them to behave differently?

My theory is that the return on investment for developing great managers is low. Investing in soft skills like coaching and development takes time and attention. Why make those investments when you can extract similar performance by exploiting employees’ survival instincts?

If you ask people why they stay in jobs they hate, you often hear answers like “I need the money” or “I need the healthcare.” Left unsaid are equally important factors like “I enjoy the status” or “I don’t like change.” Put differently, people typically overvalue their current position.

In an efficient labor market, employees would align themselves with those managers who help them reach their full potential. I can’t put an exact number on it, but the great managers from my past account for a large percentage of my lifetime earnings.

Unfortunately, the labor market isn’t efficient. Employees don’t vote with their feet. They stick with managers who suck, making it challenging for great managers to capture the full benefits of their hard work. Why coach your employees when you can get similar results by striking fear into the hearts of everybody who works for you?

I recently had a CEO complain that his employees didn’t see the big picture, were unmotivated, and didn’t speak up. It never occurred to him that perhaps he was the root cause of his problems.

Bad managers are a drain on the economy. They divert employee energy from productive tasks to unproductive ones (e.g., politicking). So what happens when you remove the survival instinct? What happens when the AI interns show up?

Unemployed Interns

My friends at OpenAI, Google, and other tech companies are frustrated by the slow adoption of AI. They’re spending billions of dollars on training the next generation of models while companies struggle to do anything productive with the current generation.

AI Interns as imagined by an AI intern (Source: ChatGPT)

Concerns about AI safety, privacy, and reliability are valid. That said, there are plenty of low-stakes tasks AI could be doing. For example, I’ve had AI reading research papers for the past year. My AI interns have summarized nearly 700 papers and produced almost 200 podcasts. Try finding a human who would do the same work for $1 per research paper.

Managing the AI has been similar to managing interns. I get what I want by employing the same techniques as my favorite managers:

  • Vision: I set the context for the AI by giving it a role — “You are a helpful assistant with expertise in summarizing research papers. You write in an entertaining (occasionally funny) but professional style using language a high school student could understand.”

  • Clarity: I communicate what’s expected through structured prompts — e.g., “Write a podcast transcript based on this summary. Begin the transcript with ‘Hello, and welcome to paper-to-podcast.’ End the transcript with ‘You can find this paper and more on the paper2podcast.com website.’ When mentioning the authors, replace ‘et al.’ with ‘and colleagues’ if applicable.”

  • Feedback: I constantly iterate my instructions to help the AI produce better results over time — e.g., limiting the use of acronyms since those don’t translate well to a podcast format

  • Judgement: I check the AI’s work at each step — e.g., deciding which transcripts are entertaining enough to push to a podcast feed

  • Autonomy: I give the AI latitude to be creative — e.g., setting the temperature to 0.7 for more creativity when writing the podcast transcripts
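The five techniques above can be sketched in code. This is a minimal, illustrative example assuming an OpenAI-style chat API — the model name and helper function are my own inventions, not the author’s actual pipeline. It bundles vision (the system role), clarity (the structured prompt), and autonomy (the temperature) into a single request payload; feedback and judgement happen outside the code, when a human reads the output and revises the instructions.

```python
# Sketch: assembling a "managed intern" request for an OpenAI-style chat API.
# The model name and function are illustrative; only the payload shape matters.

def build_podcast_request(summary: str, temperature: float = 0.7) -> dict:
    """Bundle vision (system role), clarity (structured prompt), and
    autonomy (temperature) into one chat-completion request payload."""
    system_role = (  # Vision: give the AI a role and a style
        "You are a helpful assistant with expertise in summarizing research "
        "papers. You write in an entertaining (occasionally funny) but "
        "professional style using language a high school student could "
        "understand."
    )
    instructions = (  # Clarity: spell out exactly what the output must contain
        "Write a podcast transcript based on this summary. "
        "Begin the transcript with 'Hello, and welcome to paper-to-podcast.' "
        "End the transcript with 'You can find this paper and more on the "
        "paper2podcast.com website.' "
        "Avoid acronyms; they don't translate well to audio.\n\n"
        f"Summary:\n{summary}"
    )
    return {
        "model": "gpt-4o",           # illustrative model name
        "temperature": temperature,  # Autonomy: latitude to be creative
        "messages": [
            {"role": "system", "content": system_role},
            {"role": "user", "content": instructions},
        ],
    }
```

Feedback and judgement don’t appear in the payload because they’re human loops: you read the transcript, tighten the instructions (e.g., banning acronyms), and decide which episodes are good enough to publish.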

Learning to manage my AI interns has taken time, and I’m still refining my methods. However, now that I’ve done the work, I’m annoyed by people complaining about AI without doing the same. For example, many grumble that large language models (LLMs) are bad at math. Why is that a surprise? LLMs were trained to predict words, not numbers. The fact they can do math at all is impressive.

In my experience, companies want AI to work with little human guidance. They expect the AI interns to toil away in the background and produce perfect results every time. That’s not how human intelligence works. Why should we expect artificial intelligence to be different?

Soft Skills for Hard Results

In most large organizations, training for compliance (e.g., harassment) and hard skills (e.g., industry knowledge, IT systems) crowd out investments in soft skills (e.g., coaching and feedback).

This point is clear from large companies' (10,000+ employees) learning and development budgets. Expense per employee is volatile, but the long-term trend is flat. Compliance and hard-skills training have grown to 72 percent of the budget, while less than a third of the money goes toward executive, managerial, and interpersonal skill development.

Corporate training spend (Source: Training Magazine, Statista)

I’m not passing judgment on the efficacy of corporate training. I’m not sure if companies should spend more or less. My point is that training expenses are a proxy for human capital investments. It’s difficult to look at this data and conclude that large companies see soft skills as a top priority.

My interactions with executives indicate the movement away from soft skills is accelerating. AI is treated as a hard skill. It’s a new technology to master, like an enterprise resource planning (ERP) platform or customer relationship management (CRM) system.

That’s not the right way to think about AI training. Adoption isn’t about learning the technology. It’s about learning how to manage machines that increasingly behave like people. It’s about learning what it takes to get the most out of the AI interns.

The skills required to be a great manager won’t change as much as most expect. What will change is the return on investment for building soft skills. Today, terrible managers can brute-force their way to success by manipulating employees' survival instincts. The AI interns won’t be nearly as forgiving.
