Merriam-Webster defines a killer app as "... broadly: a feature or component that in itself makes something worth having or using." In other words, if enough people buy a doodad just to run a particular app, then that app is a killer app.

What do killer apps have in common? At least one thing, which flows from the basic nature of the computer.

Computers in a nutshell

Computers store and process binary data. No more, no less. Alan Turing captured this idea in 1936 with what computer scientists call a Turing machine: an imaginary device with a theoretically infinite tape split into cells, where (in the binary version) each cell holds a true (1) or false (0) state. A read/write head moves along the tape, updating cells according to a fixed set of rules.
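The idea is simple enough to sketch in a few lines of Python. This is a toy simulator, not Turing's formalism in full: the rule table below is a made-up example machine that flips every bit on the tape and halts at the first blank.

```python
def run_turing_machine(tape, rules, state="start"):
    """Apply rules to the tape until the machine reaches 'halt'; return the tape."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else " "  # blank past the end
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape)

# Rule table: (state, symbol read) -> (symbol to write, head move, next state)
flip_bits = {
    ("start", "0"): ("1", 1, "start"),
    ("start", "1"): ("0", 1, "start"),
    ("start", " "): (" ", 0, "halt"),
}

print(run_turing_machine("1010", flip_bits))  # -> 0101
```

Everything a real computer does is, in principle, reducible to rule tables like `flip_bits`.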

Unless you are a quantum computing pioneer, the computer you are using right now works basically the same way as a Turing machine. A piece of the information stored on your hard drive is written in binary and might look like this: 11100011 00101011 11101100. By itself, such strings of numbers are not very meaningful. They might as well be random.

Computers start to get useful when those series of 1's and 0's come to mean (model) something other than true/false states. Suppose one series of 1's and 0's comes to represent the letter A, and another comes to represent the letter B. Encode enough characters, and you can use 1's and 0's to write a piece of information-rich static content, like a newspaper article, and store it on electronic media instead of paper.
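This is exactly what standard character encodings like ASCII do. A few lines of Python show the round trip from letters to bits and back:

```python
# ASCII encoding in action: each character becomes a fixed 8-bit pattern.
text = "CAB"
bits = " ".join(format(ord(ch), "08b") for ch in text)
print(bits)  # -> 01000011 01000001 01000010

# And back again: the same bits decode to the same characters.
decoded = "".join(chr(int(b, 2)) for b in bits.split())
print(decoded)  # -> CAB
```

The bit strings are meaningless on their own; the encoding convention is what turns them into text.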

Computers get even more useful when operations on those 1's and 0's can start to approximate actual, real-world processes. Certain operations on those 1's and 0's can mimic adding a letter, while others can mimic removing one. Before long, we have a dynamic, interactive software application: a word processor.
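The paragraph above can be sketched as a toy "word processor": model the document as a list of characters, and editing operations become ordinary list operations. (This is an illustrative sketch, not how any real word processor stores text.)

```python
# The document is a list of characters; edits are list operations.
doc = list("cat")
doc.insert(0, "s")   # add a letter at the front: "scat"
doc.pop()            # remove the last letter:    "sca"
doc.extend("n")      # add a letter at the end:   "scan"
print("".join(doc))  # -> scan
```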

In sum, all the magic happens when a computer's 1's and 0's start to model something. The basic usefulness of the computer is inextricably linked to its ability to model some informational or physical reality.


Killer apps model with some advantage

Modeling is what all apps have in common because that is the only way 1's and 0's can mean anything other than 1 or 0.

Killer apps are just those applications that do it better. They are likely to give users the first, best, cheapest, or only way to interactively model something that they care about. They start as fully programmed, fully deterministic creations, and sometimes they develop further through real-world user or environmental interactions.

For example, Facebook is the Internet's most comprehensive model of the "social graph," the friendly relationships that exist between its users. On the micro level, Facebook models aspects of real-life friendship, like the ability to write to one another. It also adds new capabilities, such as friend recommendations. On the macro level, every user interaction on Facebook improves the company's overall, grand approximation of the social graph.
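A social graph and a friend recommender can be sketched in miniature. The users and the friends-of-friends heuristic below are hypothetical; real recommendation systems weigh many more signals.

```python
# A toy social graph stored as adjacency sets (hypothetical users).
friends = {
    "ada": {"bob", "cyd"},
    "bob": {"ada"},
    "cyd": {"ada"},
}

def suggest_friends(user):
    """Recommend friends-of-friends the user has not yet added."""
    suggestions = set()
    for friend in friends[user]:
        suggestions |= friends[friend]
    return suggestions - friends[user] - {user}

print(suggest_friends("bob"))  # -> {'cyd'}
```

Every new friendship added to the graph refines the model, which is the micro version of the macro point above.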

Facebook is more than digital. What happens on Facebook also affects real life; for example, after you "friend" someone on Facebook, they might thank you the next time they see you. Facebook's crowd-generated, interactive model of social reality sits upon actual social reality, and its constant bidirectional causes and effects ebb and flow on a daily basis.

Or, consider computerized weather modeling. Weather models produce 5-day forecasts so accurate that meteorologists are paid to interpret and share them with their viewers.

Word processing is modeling, too. While we may not think of word processing as a computer modeling process, it was true for millennia that a document was words written on a physical piece of paper. If that is still true today, then text files are not documents: they are models for documents.

And what a useful model a text file can be. A writer can write, edit, share, and/or print a text file with ease, and a reader may find it easier or cheaper to read compared to a physical, printed version. And on average, typing is faster than writing by hand.

The e-book is another computerized model. A technologist from 1918 might have said that a book is a paper thing with bound, typeset pages. A technologist from 2018 might say that a book is anything that provides what we might call "book services:" the ability to read a page, flip a page, etc. Kindles and Nooks are highly functional models of traditional books, with paper pages replaced by e-Ink and interactive elements programmed into the software.

The Kindle and the Nook have turned the book into something functionally closer to gasoline than to a paper book: each fuels book services, just as gasoline fuels transportation services. We buy neither product for its own intrinsic value but for the services it provides.

The list of computerized models-gone-viral continues far beyond Facebook, weather modeling, word processing, and e-books. Here is a partial list of killer app categories across several different types of computers, with each category including one or more killer apps:

  • Desktop or Laptop Computers - office software, email, online shopping, gaming.
  • Video Game Consoles - gaming, media consumption.
  • Smartphones - photo/video/social, mapping/GPS. (Aside: as of early July 2018, the top 3 free or paid iOS apps are for photo enhancements, plant identification, or photo/video/social. Meanwhile, 5 of the top 6 paid/free Android apps are games, which makes me wonder if "babysitting" is now a killer app category. The other is Facebook Messenger.)
  • Servers or Cloud - knowledge-sharing, telecommunications, marketing, selling.
  • Mainframes - scientific computation, inventory management. Mainframes are now mostly obsolete for non-legacy uses but at one time were the keystone of enterprise computing.
  • Supercomputers - weather models, climate models.

From the above, I make at least two observations. First, this list is full of models. Second, no app would become a "killer app" unless it had some advantage over its closest real-world alternative. For example, if Facebook were no better than talking with your neighbors, calling on the phone, and sending letters, no one would use Facebook. By definition, all killer apps bring some real or perceived value to the table.

Put both observations together, and maybe instead of the killer app we should really be talking about the killer use case: the modeling of reality, imagination, or both.

So what?

Who cares if models are ubiquitous among software programs? And, who cares if every killer app enjoys advantages over its substitute?

These are fair questions.

Programmers certainly care. Programmers depend upon computers' modeling capability to make a living, just like airline pilots depend upon the Bernoulli Principle to keep their airplanes aloft.

Incidentally, programmers also grok the ubiquity of models quickly because they write models every time they define an object class. For example, a programmer's object-oriented model of a chicken might be "attribute feathers: yes" or "method: cluck." It is not a chicken, but it has chicken features like feathers and clucking ability, and that might be enough for, say, a 16-bit video game. It might have additional methods and attributes inherited from the class "bird." A huge percentage of the world's software is basically object-oriented (read: model-oriented) in this way.
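The chicken described above might look like this in Python. The class names and methods follow the paragraph's own example; everything else is a hypothetical sketch.

```python
class Bird:
    """A generic bird class from which Chicken inherits."""
    def __init__(self):
        self.feathers = True  # attribute shared by all birds

    def fly(self):
        return "flap flap"

class Chicken(Bird):
    def cluck(self):          # chicken-specific method
        return "cluck"

hen = Chicken()
print(hen.feathers, hen.cluck(), hen.fly())  # -> True cluck flap flap
```

`hen` is not a chicken, but for a 16-bit video game, feathers and clucking ability may be all the chicken-ness required.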

More generally, anyone who wants to understand why and how computers drive behavior is likely to care about computer models. We spend hours per day on our devices, and our devices' real or perceived usefulness depends in large part on the computer models they contain, whether we are aware of their existence or not.

The killer apps' large effects on our modern lives, livelihoods, and society constitute reason enough to make computer models a fascinating and important subject. We all know how disruptive a highly effective app can be. When a computer model-based application is of high enough quality, it can motivate people to start buying devices. As the application gets used, it can displace and/or alter its closest present-day or historical equivalent objects or processes in the physical world.

To see this phenomenon in action, we need look no further than our four earlier examples. Facebook has displaced some amount of real-world communication, weather modeling has displaced a good portion of the farmer's almanac business, the word processor has disrupted writing, and the e-book has disrupted publishing. Pick a killer app, and it has probably displaced or changed something that used to be there.

In other words, computerized models are important to understand because they do not stay in the computer. Software is bound in cause-and-effect relationships with the real world. We create computerized models, and they affect society in countless feedback loops, some of them massively influential.

Model/world feedback loops

It is a feature of our human world that non-tangible ideas affect our physical surroundings. Look no further than the United States Declaration of Independence, the founding document of the United States of America.

The Founding Fathers wrote the Declaration of Independence in 1776. This physical document reflects a set of intangible Enlightenment ideals, as seen through the prism of late-18th century American political and economic realities. The intangible ideas it encapsulates have influenced American life ever since. If you live in the United States, it might not be long until someone references the idea that all people are created equal, a direct echo of Thomas Jefferson's and the other Founders' work.

Like the Declaration, computer models are intangible but have real-world spillovers. These spillovers (use cases, in practice) explain why computers persist so prominently in modern life.

For one, computer models can help us make better choices. If a weather model reports a 90% chance of sunny weather tomorrow and the meteorologist shares that forecast on the evening news, thousands of viewers might decide to bring their sunglasses to work the next day. Weather models can help us see better on sunny days. That is a real, physical benefit that keeps us watching the TV station, which in turn convinces the TV station to keep paying for the model.

Computer models can also help us better use our machines. If the same weather model reports that it will be 95 degrees in New York City tomorrow, computerized power-generation models ingest that number and automatically power up additional gas turbines to better meet peak electricity demands. In the absence of the weather model, New York's power grid would face a greater risk of brownout. That is another positive, real-world benefit and reason to keep that computer running.

No doubt computer models have helped create a modern way of life that otherwise would never have existed. On the personal level, without computerized e-commerce models you might have gone to a brick-and-mortar store to buy the last thing you bought online. Without Facebook, your friend group might be different today. On the societal level, without social media sites the 2011 Arab Spring may never have happened. And who knows who would have been elected president of the United States in the 2016 elections.

Computer models interface with our physical world, economy, social lives, and spiritual lives in numerous and profound ways. It is a complex web. It is kind of like we are all spinning together in a big washing machine.

Economic incentives

The washing machine is not stopping anytime soon. Economic incentives will continue to drive engineers to create more and more computer models and modeling applications.

If someone can think of a new category of computer model and get it out in the world earlier, faster, cheaper, or better than everyone else, they can get rich. It has already been done with Facebook, weather modeling, word processing, and e-books, to return to our earlier examples. Every day, legions of programmers in high-tech hubs are developing software to find the next niche and customer base. Those who succeed could earn a billion dollars.

Today, one locus of activity is machine learning algorithms and artificial intelligence (AI). AI is the class of computer models that attempts to recreate aspects of human cognition. Develop it enough, and previously human-only tasks might be done by humans with machines or even by machines acting alone. Placing advertisements, interpreting radiology exams, designing that hot new piece of clothing, driving a car - all of these processes and more are on the radar and in development.

Imagine and produce a new, effective type of computer model before anyone else, and you might find yourself on the Forbes list of the world's richest people. As long as that is true, vast numbers of doers, creators, and entrepreneurs will continue to work every day to try to get there.

Barring any change in those economic incentives or the resources we have at hand, all of us must expect continued acceleration and innovation in computer modeling, with unpredictable effects.

Our uncontrolled experiment

"Models," or plain symbolic or logical representations, are as old as art, literature, and human self-awareness. They have been around for at least 17,000 years, around when our ancestors painted representations of animals onto cave walls at Lascaux, France. And they have never left us. In ancient Greece we saw Plato's Allegory of the Cave, and in 1926 painter Rene Magritte painted The Treachery of Images, a self-referencing representation of a pipe.

For millennia, we have immersed ourselves in models and symbolic representation of our surroundings. We continue to do so today. That is how our minds work.

While models themselves are not new on the scene, computerized models are. Computers have a faster, more powerful, more flexible, more interactive, and more ubiquitous modeling ability than any medium in human history, save for the human mind itself. Imagine if, in the golden age of sculpture, everyone walked around with a piece of clay in their pocket and could instantly share their sculptures with all their friends, with the most popular ones seen and admired by millions or billions of people daily. That is kind of where we are at with computers.

But for all our new capabilities, we do not know where we are headed. We know that computer models can be powerful tools, but we do not know whether, how, or where our continued innovation in software can help us to lead our most meaningful lives.

All killer apps have, or can have, both positive and negative effects. My Microsoft Word program has allowed me to write quickly, but I have also spent a long time learning how to use it, and maybe I would write more interesting material with a pen. Facebook brought us social-networked communication, and it brought us Cambridge Analytica.

Tools are double-edged. One hammer builds a house frame, and another knocks one down. In the end, computer models are tools, and all tools can be used to productive or destructive ends.

The types of software models in the world will only grow over time. The only way is forward, and there will be bumps along the way. We must make the best of it.

It is up to engineers to build useful and well-designed tools. Form often begets function, as any designer or architect can tell you.

It is up to all of us to use those tools in ways that are good and meaningful and to be kind to each other as we figure it out.

And no doubt it will help to remember our shared social history, even as we push the boundaries of technological development. History is a goldmine of lessons learned in what works and what fails in human relationships. While we may be developing new computer models at a lightning pace, we human beings remain very much the same.

Header photo by Rahul Chakraborty / Unsplash