The knowledge level is the rational basis for the behavior of a system using artificial intelligence. Such systems, known as agents, need knowledge to make inferences about the world and to act in response to specific prompts. When developing them, programmers can encode knowledge directly, along with the ability to acquire more over time through observation and study of the surrounding environment.
Artificial intelligence researchers, most notably Allen Newell, proposed the knowledge-level model in the 1980s, as they began working with more sophisticated agents. It has since remained a subject of study and discussion among those interested in defining the components of artificially intelligent systems, since understanding how such systems work can help people build better ones over time.
The knowledge level sits above the symbol level, the mechanical groundwork that supports the system's operations. At the knowledge level, an agent has a body of logical information along with goals for using that information. If the system appears to behave rationally, even when a response is incorrect or nonsensical, it is exhibiting knowledge-level behavior. For example, an agent might hold the false belief that two plus two equals five. Asked what two plus two is, it would answer five, showing that it has a goal of answering the question and is using its existing knowledge to achieve it.
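The idea above can be sketched in a few lines of Python. This is a toy illustration, not an implementation from any particular system: the class name, the belief dictionary, and the question strings are all invented for the example.

```python
class KnowledgeAgent:
    """Toy agent whose 'knowledge' is a dictionary of believed facts."""

    def __init__(self, beliefs):
        # Beliefs map questions to believed answers; they may be false.
        self.beliefs = dict(beliefs)

    def ask(self, question):
        # Goal: answer the question. Rationality at the knowledge level
        # means choosing the answer the agent's knowledge supports,
        # even when that knowledge is wrong.
        if question in self.beliefs:
            return self.beliefs[question]
        return "unknown"


# An agent holding the false belief that 2 + 2 = 5 still behaves
# rationally: it pursues its goal using the knowledge it has.
agent = KnowledgeAgent({"2 + 2": 5})
print(agent.ask("2 + 2"))  # prints 5
print(agent.ask("3 + 3"))  # prints unknown
```

The point of the sketch is that rationality is judged against the agent's own beliefs and goals, not against the true state of the world.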
Encoding the knowledge level can take time and may involve debugging to remove incorrect, contradictory, or confusing information. The more sophisticated an artificial intelligence, the larger its knowledge level and the more ways it has to apply the information it stores. Knowledge is often encoded as a set of sentences the system can test logically in response to a prompt. For example, an agent controlling a chemical process might have a sentence telling it that if temperatures rise above a certain level, it must act to cool the process equipment and prevent an accident.
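One such "sentence" can be written as a condition-action rule. The sketch below assumes a hypothetical chemical-process controller; the threshold value and action names are made up for illustration.

```python
MAX_SAFE_TEMP_C = 150.0  # assumed safety threshold, illustrative only


def control_step(temperature_c):
    """Return the action the agent's knowledge prescribes for a reading."""
    # Sentence: IF the temperature rises above the safe level,
    #           THEN cool the process equipment.
    if temperature_c > MAX_SAFE_TEMP_C:
        return "activate_cooling"
    return "no_action"


print(control_step(162.5))  # prints activate_cooling
print(control_step(120.0))  # prints no_action
```

A real controller would hold many such sentences and test the relevant ones against each new sensor reading.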
Research on artificial intelligence examines both how such systems are built and how they respond to their environment. At the knowledge level, users can interact with the system to see how well it was programmed. Gaps in information and an inability to learn are signs that an agent is not flexible enough to adapt over time. Systems that can make complex inferences, especially ones involving chains of logical steps, are more powerful and may be usable in more settings.
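Chained inference of that kind can be sketched with simple forward chaining: the agent repeatedly applies if-then rules to its known facts until nothing new can be derived. The rules and fact names below are invented for the example.

```python
def forward_chain(facts, rules):
    """Apply (premise, conclusion) rules until no new facts emerge."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


rules = [
    ("temperature_high", "pressure_rising"),
    ("pressure_rising", "risk_of_accident"),
    ("risk_of_accident", "cool_equipment"),
]

# From a single observation, the agent derives in steps that it must act.
derived = forward_chain({"temperature_high"}, rules)
print("cool_equipment" in derived)  # prints True
```

Each intermediate conclusion becomes a fact the next rule can use, which is how a chain of small deductions adds up to what reads as a logical leap.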