Essay by Keegan McNamara, originally published in Palladium Magazine.
Video games are ubiquitous. Nearly 70% of Americans play on some sort of device. People between the ages of 15 and 20 collectively spent over eight billion hours playing Fortnite in 2018. While that number is nearly double the roughly 4.2 billion man-hours spent on the Apollo program, it’s trivial compared to the total time humanity has spent playing video games in general.
That’s a lot of human potential being spent on something of questionable value. The traditional justification for seemingly harmful or non-productive behaviors is that people have an individual right to choose freely how to spend their time. But the decisions involved in gaming are often far from rational or even individual. Many video games increasingly take advantage of instinctual psychological drives, routing around higher reflection to create addictive escape experiences for people who find real life increasingly frustrating and unrewarding.
The international video game industry is a compelling profit engine—it generated $135 billion in revenue in 2018. U.S. gamers spent $36 billion on content, which includes the games themselves and in-game purchases. It’s a behemoth industry, and is growing steadily at a rate of roughly $10 billion per year. The incentive for psychological optimization of games by developers and companies is obvious solely from the numbers. That incentive has existed since the early ’70s when the commercialization of games first began.
Prior to early commercialization, the developers who created the first video games in the 1950s and ’60s were computer hobbyists exploring what was possible. They weren’t profit-motivated, and their games had relatively low production values. Early influential games like Tennis for Two and Spacewar! were free, novel curiosities for computer users at Brookhaven National Lab and the Massachusetts Institute of Technology, respectively. As commercialization set in during the early ’70s, gaming transitioned from a programmer’s hobby to a true industry. With that transition, some games surged in popularity. Pong and its clones are perhaps the first examples of video game virality, but Tank, Space Invaders, Pac-Man, Frogger, and Donkey Kong also achieved widespread commercial and cultural success in the ’70s and early ’80s.
As the industry iterated on those early successes, developers started to notice specific psychological effects that drove the popularity of their games, and they began to intentionally embed those effects to drive demand. One of the most important concepts to emerge in the industry during this time was the compulsion loop. In a very general sense, loops are sequences of actions that organisms will reliably repeat to gain rewards. Obvious examples from the animal kingdom are things like having sex and consuming food, but in the gaming domain, loops are sequences of actions within games that players will reliably repeat to get a rewarding experience—the rush when Candy Crush colors connect and pop in a dazzling animation, or when the “Head Shot: +150” badge flashes across the screen after a Call of Duty player manages to excavate the brain of another. Loops are fundamental to understanding how game designers and developers harness the psychological tendencies of players. Along with other gamification tactics, loops became more and more widespread through the ’70s and ’80s, and can be readily recognized in games like the aforementioned Pac-Man, Frogger, and Donkey Kong.
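To make the loop concrete, here is a minimal sketch in Python of a variable-ratio reward schedule, the reinforcement pattern that compulsion loops are generally built on. The action, probability, and point value are illustrative assumptions, not taken from any particular game.

```python
import random

# A minimal sketch of a compulsion loop: the player repeats an action,
# and the game pays out on an unpredictable (variable-ratio) schedule.
# The unpredictability of the reward is what makes the loop so sticky.
# All names and numbers here are illustrative assumptions.

REWARD_PROBABILITY = 0.25   # roughly one action in four pays off
REWARD_POINTS = 150         # e.g. a "Head Shot: +150" style badge

def play_action() -> int:
    """One pass through the loop: act, then maybe get rewarded."""
    if random.random() < REWARD_PROBABILITY:
        return REWARD_POINTS  # flashy animation, badge, points
    return 0                  # a near-miss; the player tries again

if __name__ == "__main__":
    score = 0
    for turn in range(20):
        payout = play_action()
        score += payout
        if payout:
            print(f"turn {turn:2d}: reward! +{payout} (total {score})")
```

The choice to reward intermittently rather than every time is the point: intermittent schedules are harder to extinguish, which is why the loop keeps players repeating the action.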
By the time the ’90s and early 2000s rolled around, the Internet had achieved mass adoption, and gaming devices had become many orders of magnitude more powerful. Using the connectivity afforded by the Internet and the power made available by innovations in computer hardware, developers could simulate more life-like environments and more complex interactions between human players; the Wolfenstein and Call of Duty franchises were pioneers in 3-dimensional graphics and online gaming, respectively. As games became more advanced, though, so too did the gamification techniques.
Mobile gaming was born in the same period, with the advent of miniaturized devices capable of doing what previously required large workstations. Ports of simple games like Snake and Tetris shipped on early Nokia phones, but when the first iPhone arrived, the App Store and an explosion of mobile game consumption quickly followed. Titles like Angry Birds, Plants vs. Zombies, and Temple Run came to dominate. Fast-forward to 2018: gaming purchases on smartphones and tablets accounted for 47% of the gaming industry’s total revenue, and the average U.S. gamer spent 29% of their gaming time on a phone or tablet.
Mobile games have come to permeate the small pieces of time when people aren’t actively engaged by their lives—quick levels of Candy Crush in line at the grocery store, or 20-minute-long sessions raiding enemy bases in Clash of Clans on a lunch break. Freemium games abound, offering stimulating visual rewards for in-game achievements and gambling-like mechanics designed to get users to spend real money. Mobile games offer a low-commitment alternative to console and desktop games, but provide the same artificial accomplishments, skill progressions, and sedentary adventure.
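The gambling-like structure is easy to see in miniature. Below is a hedged sketch, again in Python, of a loot-box style purchase mechanic; the rarity tiers, drop rates, and price are illustrative assumptions rather than any real game’s numbers.

```python
import random

# A sketch of a loot-box style monetization mechanic, the gambling-like
# element many freemium games rely on. The tiers, drop rates, and price
# below are illustrative assumptions, not any real game's values.

DROP_TABLE = [
    ("common skin",    0.70),
    ("rare skin",      0.24),
    ("epic skin",      0.05),
    ("legendary skin", 0.01),  # the jackpot that keeps players paying
]

BOX_PRICE_USD = 2.99

def open_box() -> str:
    """Roll once against the weighted drop table."""
    roll = random.random()
    cumulative = 0.0
    for item, probability in DROP_TABLE:
        cumulative += probability
        if roll < cumulative:
            return item
    return DROP_TABLE[-1][0]  # guard against floating-point rounding

print(open_box())
```

Under these assumed numbers, chasing the 1% legendary item costs about $299 in expectation (the box price divided by 0.01), but that expected cost is hidden behind the thrill of each individual roll.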
Game developers aren’t necessarily malicious, or even conscious of using techniques that make their games more psychologically compelling. The practice has evolved over the course of the industry’s maturation to become more of an inherent feature of video games than an intentional one. Firms now struggle to make money unless they use state-of-the-art psychological techniques in their games, and it’s hard to even imagine what it would mean for a video game to be fun if it didn’t involve some loop of challenge and reward. There’s a reason the brain is attracted to such activities: in reality, loops of skill-improvement, challenge, and reward are the substance of a successful life.
In a way, what video games do is provide an alternate, simulated environment that is more psychologically palatable and seemingly rewarding—where success is less demanding than real life. A few game developers strive to limit addictive mechanics and give their games artistic or intellectual content, but most don’t.
For someone whose real life doesn’t otherwise engage them with a stream of surmountable challenges, skill growth, and rewarding victories, it’s easy to understand the draw of escape into virtual worlds that do. For some young men, video games have even displaced having a job. It’s not terribly surprising that this has happened just as a stagnant economy and extreme competition have put respected, decent-paying jobs, and participation in a rewarding social fabric, increasingly out of reach. If the challenge of real social participation becomes increasingly inaccessible, some proportion of people will decide not to bother at all.
One gamification tactic that started emerging in the ’90s and 2000s was human-vs.-human competition. It’s now one of the core components of any game on a breakout path to rapid success. For a modern example, consider Fortnite. Its first version was built around players fighting computer-controlled enemies, and it flopped. The next game mode Epic implemented, a multiplayer battle royale, had the psychological hooks necessary to draw in 125 million players in less than a year and propel Fortnite’s ascent to becoming the most successful free-to-play game in the history of the industry. Human-vs.-human competition introduces a high-stakes arena that would’ve been unimaginable to the early game developers of the ’70s.
Competing against other humans is psychologically intense and rewarding. It isn’t just a solitary activity, but a social one. When you beat another person in a game, it subconsciously becomes a story about your relative social status. If you win, you’re better than they are. If you lose, you need to try harder to keep up, or lose respect. Squishy Muffinz, a recreational Rocket League player turned professional, describes the effects such intense psychological dynamics had on him:
"I played the living shit out of Rocket League. I played it so much it was actually unhealthy how much I played ... I was going to high school at the time and I wasn’t sleeping much at all, I was just super addicted."
Another effect of imbuing games with human-vs.-human competition is the emergence of a positive feedback system that incentivizes players to dedicate more and more hours to honing their skills. For any given competitive game, there is a range of natural talent. Natural talent, though, accounts for only so much of any given player’s success in the competitive arena—a far better predictor is aggregate time spent playing. Many games employ a numerical Match Making Ranking (MMR) system to measure the skill of players relative to their peers, which makes it easy to test for a correlation between skill and time spent playing. An informal survey of Rocket League players shows a positive exponential relationship between a player’s MMR and aggregate hours played.
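For readers unfamiliar with how such rankings work, here is a minimal sketch of an Elo-style rating update, the kind of relative-skill model many MMR systems descend from. The K-factor and starting ratings are illustrative assumptions, not any specific game’s actual parameters.

```python
# A minimal sketch of an Elo-style MMR update, the kind of relative
# skill rating many competitive matchmaking systems are built on.
# The K-factor and ratings below are illustrative assumptions.

K = 32              # how far a single match can move a rating
START_MMR = 1000.0  # assumed placement rating

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a: float, rating_b: float, a_won: bool):
    """Return both players' new ratings after one match."""
    exp_a = expected_score(rating_a, rating_b)
    actual_a = 1.0 if a_won else 0.0
    delta = K * (actual_a - exp_a)
    return rating_a + delta, rating_b - delta

# An upset against a much stronger player yields a large swing, but as
# a player's rating climbs toward their true skill, each win moves it
# less, so further ascent demands a growing investment of hours.
novice, veteran = START_MMR, START_MMR + 400
novice, veteran = update(novice, veteran, a_won=True)
print(round(novice), round(veteran))  # ~1029 and ~1371
```

The diminishing returns built into the model echo the survey result above: equal increments of rating demand ever-larger increments of playtime.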
For professionals, this relationship is at its worst—moving up the rankings by a few spots requires hundreds of additional hours of gameplay. And although the number of professional gamers relative to the total number of players is small, the influence of a professional scene on a game’s broader community can be significant. In games like Fortnite, CS:GO, and Dota 2, where professional scenes exist, competitive ability is exceedingly important even among the more casual players in a game’s community. This status competition creates an arms race in time spent. Players in the professional scene of any given game routinely devote 10+ hours each day to training and playing, and as a result, many average players feel social pressure to do the same.
Again, Squishy Muffinz’ origin story seems apropos:
"You don’t get good at anything in this game without practicing it a lot ... Anybody who’s good at this game, anybody who’s good at anything mechanically in this game has messed up that mechanic thousands of times ... it all comes with time…you just have to spend a lot of time on the mechanics you want to learn."
Not only does competitive gaming provide a natural mechanism for players to sink enormous amounts of time into it; developers’ profit incentive also pushes them to positively reinforce that mechanism. A profit-motivated developer not only embeds human-vs.-human competitive loops into games, but also invests heavily in developing a professional-level competitive landscape. By providing a hierarchical structure that players can use to compete with one another and ascend on their way to the glory and riches given to the pros, developers can increase the amount of time and money that players spend in-game.
Motives to reinforce competitive behavior have led to unprecedentedly large tournament prize pools, such as the $30 million total for Epic’s Fortnite World Cup—more than Wimbledon’s entire prize pool in 2015. Despite recent growth, the gaming industry is still mostly devoid of large monetary sponsors for events and leagues, because of its reputation as a juvenile and low-status activity compared to something like professional tennis. Most adults don’t look up to professional video game players the way they might look up to Roger Federer. At the moment, the largest such sponsors are primarily gaming-related companies like Twitch, Intel, AMD, Razer, and Corsair, but their check sizes are still minuscule compared to the traditional sports-marketing deals seen in the NFL or MLB. As the industry continues to grow, more big-check sponsors will begin pumping money into events, accelerating the profit and competition motives. The world of eSports is entering a new age of capital-driven sponsorship and player fame.
Rising technological capability and expertise in the video-game industry would seem to lead to a world where many people spend most of their free time and money strapped into VR headsets, to the point of neglecting reality. People already occasionally die playing video games for days straight. They waste years of their lives that they later realize they should have spent studying, planning, and self-improving. They neglect their health and social lives.
These are still relatively speculative problems for most people, but as video games become ever more compelling, and perhaps as real social life becomes ever less so, these trends will continue.
The long-term influence of addictive competitive loops, and of escape from social reality, on the brains, skills, and social development of children, especially on a mass scale, remains to be seen. Gamers under the age of 18 constitute 21% of the market, and gamers between the ages of 18 and 35 make up another 40%. The majority of players have either grown up in the video game era of the last couple of decades, or are in the process of doing so.
Little is known about the human brain and how the formation of habits and neural pathways in adolescence affects long-term social behavior in community participation, family formation, and work life. But among a group of roughly six hundred 8th and 9th graders in Minnesota, those who were classified as addicted to video games self-reported getting into more physical fights, starting more arguments with friends and teachers, and receiving lower grades than their non-addicted peers. It could be that video games are causing psychological problems, or it could be that kids who have more social problems find the addictive escape of video games more compelling. Either way, it indicates a relationship between video game addiction and social pathology. The study was done back in the early 2000s, meaning those children now find themselves in that 18–35 age bracket.
Video games, digital media, and the surrounding culture don’t just affect people as individuals; they affect what kind of culture we have. It’s common for technically or artistically oriented young people to say they want to make video games as a career. Roughly three times as many American kids want to be vloggers and YouTubers as want to be astronauts. The same survey showed that 56% of kids in China want to grow up to be astronauts, compared to 18% who are captivated by vlogging. How kids spend their time, and therefore what they idealize, isn’t just a matter of individual preferences and harms, but a matter of collective strength and ambitions. America’s collective future perhaps starts to look a bit darker. The 21st century might see significant geopolitical reshuffling influenced in part by these cultural dynamics.
The social reorientation towards video game culture makes it even harder for many people to participate in non-virtual social life, exacerbating the low quality of social life that drives much of the demand for video games in the first place. Why go out when everyone else is at home playing video games or watching Netflix?
This general problem isn’t unique to video games. Activities like traditional sports, alcohol-oriented bar and club culture, gambling, sexual subcultures, and other forms of entertainment have existed in all civilizations. At times they get out of control to antisocial levels. But mostly they are kept well-regulated by social norms and laws that limit their downsides and cultivate their upsides, and by competition from more productive activities. This process can be thought of as a sort of “domestication” of the activities in question. Athletic sports and alcohol are mostly domesticated. Video games, being new, are still a wild west of exploration and exploitation.
The paternalist stance is often advanced in cases like this where market forces, perhaps unstoppable by anyone except the state, are innovating in a socially harmful direction. The gaming industry is now largely characterized by its use of a form of addiction to drive profits, and there is a strong precedent of preventative measures being instituted for addictive or socially harmful products.
We tolerate many things that aren’t necessarily good for the development of the individual, if they serve some important productive purpose. For example, it’s hard to argue that wage labor is the best system for directly promoting the individual flourishing of most people. We use it because it’s an important part of our economic system, to which we don’t have an easy alternative. The market wage labor system creates enormous wealth for us individually, and organizes our collective labor efforts. Some argue that any voluntary exchange between rational individuals is productive in that it fulfills the preferences of both, but this both neglects any higher social purpose to work other than individual preference fulfillment, and assumes a rationality that in many cases—addictive video games in particular—cannot be justified.
Many video games exploit irrationality, and lack a more holistic story of social value. At best, playing video games together promotes some healthy social fabric. Alternately, video games may have social value as a form of real artistic expression. But even these points of value are only a small part of the overall social impact story. The time and human capital spent on video games aren’t really getting us much, socially or individually, in return.
The domestication pressure on entertainment norms towards benign productive functionality usually comes from the community and the state. Someone has to think about the big picture, and have the power and incentives to act on that thinking to change how entertaining activities like video games actually work. Individual gamers and developers outside of leadership positions in society just don’t have the power or incentive scope to address these collective questions.
The problem of managing compelling digital media, keeping people from retreating from society, is closely related to the dynamics described by Seth Largo’s recent article on integration, and Wolf Tivy’s piece on radicalization: in the absence of strong leadership from the main trunk of society, creating positive and productive social roles for people, those people find other things to do and believe in. Some stay provincial in their native subculture, some become radicalized, and some fall into self-destructive escapism and addiction to video games or television.
The main problem might not be with video games at all, but with the weakness of the main social fabric and the lack of opportunities for more purposeful and fulfilling behavior. Obsessive gaming is a cope—an elegant cope that allows players to push further into the back of their minds the fear and reality of years slipping away. It’s elegant because it’s compelling, because it’s more socially acceptable than hard drugs, and because gaming communities still offer some semblance of social fabric. Normal society is failing to provide compelling alternatives to the psychological loops and constructed escape experiences of video games.
The fast pace of development of ever more compelling digital media like video games requires a corresponding acceleration in society’s ability to domesticate these behaviors into their most benign and socially helpful variants. That is, like many other problems, the problem of domesticating video games is partially a problem of state and community capacity—the same state capacity that would be needed to fill the holes in the social fabric that drive so much demand for addictive video games in the first place.
Missouri Senator Josh Hawley’s recent bill to ban loot boxes and certain payment incentives in video games is perhaps an early signal of this kind of state-driven domestication—though one which also shows how difficult it is to regulate such hyper-specific practices, and which is inadequate on its own to tackle the problems of a dynamic, profit-driven ecosystem of video game development. But American society is at a historical low point of state capacity and strength of social fabric, so our governing institutions may not recognize the problem and get a jump on it, or have the capacity to intervene successfully.
For mercenary nihilists, now might be a good time to invest in life-absorbing VR pods. But for the rest of us, it’s a good time to start thinking about how to build real state and community capacity.