Generative AI & Headcount (#93)

We are living in the early days of algorithmic content creation. Today's tools, marketed as generative artificial intelligence, are in actuality highly scaled predictive algorithms built on a vast amount of preexisting work. A large language model (LLM), such as the infamous ChatGPT, indiscriminately ingests the content of the Internet as a training set. When prompted, an LLM responds by applying a formula to predict the words it displays. The output is entirely based on what others have created previously, re-spun with enough randomness that similar prompts produce different responses. Regardless of how it works, the application of LLMs is dramatically shaping, and will continue to shape, the way we work.

LLMs are significantly flawed. They violate copyrights, plagiarize, amplify biases and stereotypes, lie with confidence, and lack the restrictions and controls to prevent propaganda and falsehoods. LLMs make us dumber, because they reduce the friction of creation, which diminishes our ability to learn. Yet despite all of this, even if the technology made no further advances, its impact is already astounding. Reports of efficiency gains of more than 30% are becoming widespread across a variety of jobs: writers, reporters, translators, customer service consultants, data analysts, software engineers, and many others. LLMs are here to stay. The technology press reports this as mostly doom and gloom: a sign foretelling the end of our jobs, heralding the age of general artificial intelligence and the replacement of humans by our computer overlords. This narrative is sensational, but it ignores the agency we have in how we apply technology and how adaptable we are to change.

ChatGPT is widely used by students to write essays and complete homework. Employees are secretly using LLMs to accomplish tasks at work. Schools may call this cheating; employers see the technology as a way to reduce headcount. And here is where the potential paradox lies. If we use LLMs merely as a way to zero out our wage costs, all we achieve is parity with today's output at less expense. The reasoning goes along the lines of, "why do I need 10 people to do our work when 2 people using ChatGPT produce the same output?"

Instead, we can reframe our work by shifting the activities we do: allowing LLMs to complete the mundane, repetitive work frees humans to do value-added work. Who cares if ChatGPT writes the code for yet another image gallery or responds with detailed steps to help a customer use a product? No company today, whether a startup or a massive enterprise, has enough staff to do all of the work it has identified. Every single company has a backlog, a long list of "what ifs" to explore, a never-ending amount of things to do that we just haven't gotten to yet. The companies that leverage LLMs in this way will accelerate and disrupt, seizing the arbitrage created when their competitors let the number crunchers convince them that LLMs are the perfect opportunity to reduce headcount. Furthermore, employees in these forward-acting companies will maintain psychological safety; rather than being threatened by AI, they will embrace it. The remaining employees at the gutted competitors, conversely, will feel more threatened, waiting for the other shoe to drop on their dismal jobs, further reducing their quality and output.

When embracing generative AI we face a choice, and we have agency: we can reduce headcount and keep productivity the same in the name of cost savings, or we can reshape the work our humans do and invest in value creation to accelerate further into the market.


The Paradox Pairs series is an exploration of the contradictory forces that surround us. A deeper study finds that these forces often complement each other if we can learn to tap into the strength of each. See the entire series by using the Paradox Pairs Index.