One of the very first projects we undertook upon the launch of the AI Lab in the summer of 2024 was an “AI Explainer” document: an explanation of artificial intelligence in terms accessible to those who are not familiar with the technical aspects of digital technologies and software development. You can access that document here if you’d like to see what we created.
In reviewing this document a year after its creation, I feel that it stands up pretty well. While you can read the full document at the link above, let’s walk through a few of the key points.
Start at the beginning: Defining AI
In all my classes, I now ask students to learn this simple definition of AI:
Artificial Intelligence (AI) is a computer-based application designed to mimic human intelligence and behavior.
I like this definition because it forces us to confront the fact that AI is not just one thing. While the hype and focus over the past two years have been on generative AI such as ChatGPT, the technology we call artificial intelligence can take several different forms, only one of which is a chatbot. Beyond that, the definition should also give us pause and motivate us to better understand AI and its role in our lives.
Beyond this simple definition, the explainer document goes on to break AI down into two functional categories (discriminative and generative) and two levels (narrow and general). As when the explainer was first published, the arrival of “Artificial General Intelligence” (AGI) remains a subject of debate: is AGI truly possible and, if so, when will we have it? There is now a move to rename AGI “Superintelligence” or something equally magnificent. While some believe its arrival will immediately usher in a new era of progress (or holocaust!), I am more aligned with the idea that AI will be implemented slowly as we adapt to its capabilities.
How does AI work?
The next section in the explainer tackles the underlying technology of AI and how it works. In short: we don’t really know. From the explainer document:
Traditional computer applications are built using programming languages that specify exactly how the system should respond to inputs and any results obtained can be traced back through the computer program. With AI, the responses are instead “learned” through the process of training and feedback. These learned responses are stored within the model and cannot be easily interpreted (this is commonly described as the “black box”). This embedded logic produces a large range of outputs, some of which may not have been foreseen.
So in a sense, AI is “grown” more than “built,” with data being the primary nutrient. An important implication of this fact is that the AI applications we use are only as good as the data they are trained on. And never better.
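To make that contrast concrete, here is a minimal sketch in Python. The loan example and its numbers are invented purely for illustration, and the scikit-learn library stands in for the far larger training systems behind real AI models. The first function is “built”: every rule was written by a programmer and can be traced. The second model is “grown”: its behavior comes entirely from the example data it is fed, and it will never be better than that data.

    # A "built" program: the logic is explicit and traceable.
    def approve_loan(income, debt):
        # Every threshold below was chosen by a programmer.
        return income > 50000 and debt < 10000

    # A "grown" model: the logic is learned from example data.
    from sklearn.tree import DecisionTreeClassifier

    X = [[60000, 5000], [30000, 20000], [80000, 2000], [25000, 15000]]  # [income, debt]
    y = [1, 0, 1, 0]  # past decisions the model will learn to imitate

    model = DecisionTreeClassifier()
    model.fit(X, y)

    # The model's answer depends entirely on its training data.
    print(model.predict([[55000, 8000]]))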
I will say that, since the publication of this document, some of the leading AI chatbot makers have tried to address this problem by releasing “reasoning” models that provide some insight into the steps they take in coming up with a response. But we still cannot easily see inside the black box.
AI does not think
The last part of the explainer document focuses on a few important questions, the first of which is about the illusion of AI “thinking.” Because AI is designed to mimic humans, we tend to think of it as having human qualities. This is especially true when we are using the chat interfaces. Let’s be clear: AI is not human; it does not think, consider, or reflect, and it has never experienced anything. I would also argue that it cannot truly create, but I’ll leave that for another post.
When you interact with a chatbot, it processes your input to determine the most likely meaning. Based on this, it generates a response (text, image, or video) that it calculates has the highest probability of being a successful answer. The AI does not truly understand either your input or its own output. Its rapid processing speed can make it seem human-like or even superhuman, which may lead us to mistakenly believe it possesses intelligence or wisdom.
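As a rough illustration of that probability-picking step, here is a toy Python sketch. The candidate words and their probabilities are invented for the example; a real model scores an enormous vocabulary this way, one word (or word fragment) at a time.

    # Hypothetical scores a model might assign to the next word
    # after the prompt "The capital of France is". The model does
    # not "know" the answer; it only scores each continuation.
    candidates = {
        "Paris": 0.92,
        "London": 0.03,
        "beautiful": 0.02,
        "a": 0.01,
    }

    # Pick the continuation calculated to be most probable.
    next_word = max(candidates, key=candidates.get)
    print(next_word)  # "Paris" -- a statistical guess, not understanding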
Why should we care?
We are seeing new uses and capabilities of artificial intelligence every day. Whether or not you believe superintelligence is coming soon, we should all agree that understanding and effectively using AI are essential for anyone who wants to lead and shape the future of organizations and culture. We need people of faith in those places.
But while we are leaning into understanding and using AI, we should also recognize that technology is not neutral: the use of technology subjects us to its designs, incentives, and biases. In other words, what matters is not just how we use it, but what we use it for, or even that we use it at all. While this will be a subject of some of my future posts, you can get a preview of the principles behind this concern by reviewing the other document published by the AI Lab last summer: Biblical Principles for the Understanding and Use of Artificial Intelligence.
What’s missing?
As I stated earlier, much of the AI Explainer document still stands up a year after its release. If I were to write it again today, I would probably want to add a section on AI agents and their implications. I’ll leave that for an upcoming post as well.
What do you think?
I would encourage you to take a look at the explainer document and then give me your feedback. Do you like this definition of AI? Is there something that can be explained better? Leave your thoughts in the comments.