Frequently Asked Questions
Question: Why do you consider KEEL a "disruptive technology"?
Answer: We have had several telephone discussions and are having difficulty getting people to understand that KEEL is a technology, not just a tool component (the KEEL Toolkit). While the KEEL Toolkit is interesting, it is the new application capabilities that KEEL makes possible that define KEEL's value.
The KEEL Toolkit is used to create KEEL Engines that process information in a new way. These KEEL Engines allow complex behavior to be integrated into devices and software applications.
We would suggest that creating and deploying these behaviors using other techniques would be impractical, especially if you want explainable, auditable results.
KEEL needs to be considered for its transformational characteristics: KEEL technology can transform complex domains such as medical systems and processes, transportation systems, financial systems, and economic and political systems.
KEEL gives systems a right brain that allows them to deploy judgment and reasoning. So when KEEL is considered, it should be considered for what the systems can do with a right brain, not just the tool side of the technology.
When we hear KEEL referred to as a tool, it is an indication to us that someone just doesn't understand. For example: a submarine is a tool of war, yet undersea warfare became a new way to fight a war when the submarine was invented. Undersea warfare was transformational. KEEL is a transformational technology, and a transformational technology is a disruptive technology.
Question: What is the "underlying technology" that defines KEEL?
Answer: There are three concepts that define KEEL Technology (all covered by Compsim patents). Before explaining the three concepts, it is important to understand the focus of the technology. KEEL was created to capture and package human-like reasoning (judgment) such that it can be embedded in devices and software applications. In humans this is a right-brain analog process that focuses on the interpretation of "values" for data items and the balancing of interconnected "valued items". It is the human's ability to exercise reason and judgment that has separated humans from computer programs in the past.
The first (and most fundamental) concept in KEEL provides a way to establish "modified values" for pieces of information. To begin with, a piece of information will have a potential value (or "importance"), just by its nature (what it is) and the nature of the problem being addressed. This piece of information can be supported or blocked by other pieces of information (driving or blocking signals). For example, one might have a task with a series of pros and cons. Neurons in the human brain have Exciting and Inhibiting Synapses. The importance of the task is modified by the pros and cons to determine the "modified value" of the task. KEEL Technology uses a "law of diminishing returns" to accumulate driving signals. It follows this with another "law of diminishing returns" to accumulate the blocking signals. Blocking signals take precedence over driving signals. In control systems this means that emergency stop will override all accumulated driving signals.
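As an illustration only, the "law of diminishing returns" accumulation with blocking precedence might be sketched as below. This is our own minimal Python sketch under stated assumptions (signals normalized to the range 0 to 1, a specific diminishing-returns formula, invented function names), not the actual KEEL implementation:

```python
def accumulate(signals):
    """Accumulate signals in [0, 1] with diminishing returns:
    each new signal closes only part of the remaining gap to 1.0,
    so later signals contribute progressively less."""
    total = 0.0
    for s in signals:
        total += s * (1.0 - total)
    return total

def modified_value(base, drivers, blockers):
    """Hypothetical sketch of a 'modified value': drivers raise the
    base importance toward 1.0; the accumulated blockers then take
    precedence and scale the result back down. A full-strength
    blocker (1.0, e.g. emergency stop) forces the result to zero
    regardless of the accumulated drive."""
    drive = accumulate(drivers)
    block = accumulate(blockers)
    supported = base + (1.0 - base) * drive
    return supported * (1.0 - block)
```

Note how a single blocking signal of 1.0 overrides any accumulation of driving signals, matching the emergency-stop behavior described above.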
The second concept for KEEL technology is that we are (almost) never dealing with independent accumulations of information. Individual inputs may impact different parts of the problem domain in different ways. Inputs may cause the importance of other pieces of information to change. Information accumulated from the integration of one set of inputs may impact how other inputs are interpreted. So the second concept is that KEEL provides a way for information items to functionally interact with other information items. A set of functional relationships are provided that allow linear and non-linear relationships to be executed. Essentially, we have created the functionality of an analog computer.
KEEL "engines" are functions or class methods, depending on the output language selected. Within the KEEL Toolkit there are two optional processing methods available: The first is an "iterate until stable" approach. The second approach pre-qualifies the design and creates an optimal processing pattern for the worst case scenario.
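The "iterate until stable" approach can be pictured as a fixed-point iteration over interconnected items. The following Python sketch is illustrative only (the Jacobi-style update scheme, the node representation, and all names are our assumptions, not the KEEL Toolkit's actual code):

```python
def run_cognitive_cycle(nodes, inputs, tol=1e-6, max_iter=1000):
    """Iterate-until-stable sketch. `nodes` maps a name to an update
    function of the current state; `inputs` is the snapshot held
    fixed for the whole cycle. All nodes are recomputed from the
    previous state each pass until no value moves more than `tol`."""
    state = dict(inputs)
    for name in nodes:
        state.setdefault(name, 0.0)
    for _ in range(max_iter):
        new_state = dict(state)
        for name, fn in nodes.items():
            new_state[name] = fn(state)
        if all(abs(new_state[k] - state[k]) < tol for k in nodes):
            return new_state
        state = new_state
    raise RuntimeError("design did not stabilize")

# Two mutually inhibiting items: each damps the other by half.
nodes = {
    "a": lambda s: s["in_a"] * (1 - 0.5 * s["b"]),
    "b": lambda s: s["in_b"] * (1 - 0.5 * s["a"]),
}
result = run_cognitive_cycle(nodes, {"in_a": 1.0, "in_b": 1.0})
# Both settle at the balance point 2/3.
```

The `max_iter` guard plays the role of the toolkit's design pre-qualification: a design that never stabilizes is rejected rather than run.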
The third concept that defines KEEL is the dynamic graphical language. Because we are commonly dealing with subjective designs (from a human standpoint), there are several advantages to a graphical language. The first advantage is that the designer is learning how to describe his/her system while it is being designed. The designer can stimulate individual inputs and "see" the information propagate throughout the system. The designer can "see" the impact of change immediately. The designer can create 2D and 3D graphs and see how items interact. A KEEL "engine" is automatically created and executed behind the scenes as the design is developed and tested using the dynamic graphical language. When the design is complete, it can be exported as conventional C, C++, C++ .NET, C#, Java, Flash, Objective-C, Octave (MATLAB), Python, Scilab, Visual Basic 6, Visual Basic .NET, and other languages for integration with the rest of the production system (including .ASPX and WCF web services).
It is also important to understand that KEEL models are equivalent to formulas. They are explicit (traceable by looking at the dynamic graphical language and observing how individual inputs propagate through the system). Designs can easily be extended by adding new inputs to the system and functionally linking them to other data items. We suggest that, rather than dealing with complex mathematical formulas, it is much easier to understand the models by viewing and interacting with the KEEL dynamic graphical language.
One question that has been asked several times: What came first, the KEEL execution model (concepts 1 and 2) or the KEEL dynamic graphical language (concept 3)? The answer is that concept 1 came first, as we created a model for integrating pros and cons when addressing common structured business problems. Then the functional integration (concept 2) and the dynamic graphical language (concept 3) came together as we began to address the dynamic, non-linear, inter-related, multi-dimensional problems that we found would need to be solved for embedded autonomous systems.
While KEEL has been developed to provide human-like reasoning that can be embedded in devices, it can also be used to describe the human-like behavior of physical systems. Consider, for example, the degradation of physical systems that seem to have a mind of their own as components wear out and degrade over time, or perform differently as they heat up. All of these non-linear relationships can easily be modeled and simulated with KEEL. Once one understands "how" KEEL is used to create and execute this processing model, we often suggest that the user forget "how it works" and start "thinking in curves". In this way, the designer thinks about the (often) non-linear relationships between information items, not how the information is actually processed.
Finally, we would suggest that the only way to get an in-depth understanding of the KEEL underlying technology is to work with it directly at a KEEL workshop. In the 2 to 3 day workshop you will gain an understanding of how to create and debug KEEL cognitive engines, how the KEEL engines process information, and how to integrate the engines into systems.
Question: Why do you differentiate Rules from Judgment?
Answer: Rules are (or are supposed to be) explicit. They define explicit behaviors that are to be produced in response to explicit circumstances. In computers rules are commonly described with IF | THEN | ELSE logic or CASE statements. Judgment, on the other hand, is a human characteristic focused on the interpretation of information in order to decide how to apply rules. It commonly requires "balancing" of (sometimes conflicting) alternatives. In human terms, judgment is considered a parallel process performed by the right hemisphere of the brain. KEEL provides the ability to exercise these parallel, judgmental functions on a sequential processing computer. Policies are commonly developed to guide humans in the interpretation of more complex situations. Humans can then use "judgment and reasoning" to react to situations according to those policies.
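The contrast can be made concrete with a deliberately simplified, hypothetical braking example (illustrative Python; the thresholds and curves are invented for this example and are not from any KEEL model):

```python
def rule_based_brake(obstacle_distance_m):
    """Explicit rule: discrete IF | THEN | ELSE logic.
    The rule either fires or it doesn't."""
    if obstacle_distance_m < 10:
        return 1.0   # full brake
    return 0.0       # no brake

def judgment_based_brake(obstacle_distance_m, road_grip):
    """Judgment-style: continuous balancing of two concerns.
    Urgency rises smoothly as the obstacle nears; poor road grip
    adds caution; the two are combined with diminishing returns
    rather than a single threshold."""
    urgency = max(0.0, min(1.0, (30 - obstacle_distance_m) / 30))
    caution = 1.0 - road_grip
    return urgency + caution * (1.0 - urgency)
```

The rule produces a step; the judgment version produces a graded response that blends two (possibly conflicting) considerations, which is the kind of interpretation a policy asks a human to perform.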
Another viewpoint was expressed by Dr. Horst Rittel in the 1970s. He coined the terms "tame problems" and "wicked problems". He indicated that with "tame problems" one could write a formula and obtain a "correct" answer. With "wicked problems" one hopes for a "best" answer. His field was city planning. He was suggesting that it would be difficult or impossible to write a formula to control a city. We would extend this concept by saying it takes "judgment" to determine how to allocate resources to operate a city.
With KEEL we are allowing devices and software applications to take on some of the judgmental reasoning functions that have historically been impractical to implement with a fixed set of rules. We would suggest that one additional requirement when automating these judgmental functions is that they must be 100% explicit and auditable. One doesn't want to mass produce devices that exercise poor or unexplainable judgment. With KEEL, all decisions and actions are 100% explainable and auditable.
Question: Unless I am missing something, it appears you have implemented an analog computer on a digital computer; that is, you can accept various inputs, scale them, combine them, and provide a scaled output. Let me know what I missed on the engine itself. For example, I didn't see any indication of the ability to support non-linear inputs (for example, thermocouples) or to generate non-linear outputs. I'm trying to figure out what is unique about this.
Answer: Yes, you are totally correct about KEEL representing the functionality of an analog computer. KEEL is intended to model human judgmental decision-making, which is pretty much an analog process. The very small memory footprint of a KEEL "cognitive engine" makes KEEL-based solutions available to some embedded applications, where other approaches would not be practical. Because we are interested in analog decisions or control applications we can easily handle non-linear inputs and create non-linear outputs (sample graph below). In general, if you already have a specific formula that calculates a correct answer for every input condition then KEEL would probably not be appropriate. On the other hand, if you need to create a system that exerts "relative control" in a dynamic or complex environment, then KEEL might be appropriate. The KEEL toolkit allows the design to be created without writing conventional "rule based" code. The engines created from the design can be integrated into existing applications with very little effort. One other note: KEEL actions or decisions are completely explainable and auditable. This differentiates them from neural nets and fuzzy logic.
One characteristic of a real analog computer is that all inputs are handled simultaneously (or at least determined by the propagation delays through the circuits). Commonly, digital computers process information sequentially. With KEEL, the processing of information is handled within what we call a "cognitive cycle". A snapshot of the system (inputs and outputs) is taken at the beginning of the cognitive cycle. Then, within the cognitive cycle, the system iterates the analytical process until a stable state (of all variables) is achieved. (This equates to the propagation delay through an analog circuit.) Then the cognitive cycle is terminated so the outputs can be utilized.
Think of how a human might make a judgmental decision between a number of inter-related alternatives. The human integrates pros and cons of each alternative and determines how potentially conflicting objectives are met. The human may allocate resources across several of the alternatives or apply all resources towards one alternative. The human will be balancing each alternative against the others to determine how to proceed. This is an iterative process for the human because the selection of any alternative cannot be done in isolation. All alternatives must be considered together. KEEL-based solutions perform the same way. The entire system is processed during the KEEL cognitive cycle.
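Balancing inter-related alternatives might be sketched as scoring each alternative from its pros and cons and then allocating a resource budget in proportion. This Python sketch is our own illustration (the scoring formula and data layout are assumptions, not the KEEL engine):

```python
def balance_alternatives(pros_cons):
    """pros_cons maps each alternative to (drivers, blockers), all in
    [0, 1]. Each alternative is scored with a diminishing-returns
    accumulation, blockers taking precedence; the scores are then
    normalized so all alternatives are considered together and the
    resource shares sum to 1.0."""
    def accumulate(signals):
        total = 0.0
        for s in signals:
            total += s * (1.0 - total)
        return total

    scores = {}
    for alt, (drivers, blockers) in pros_cons.items():
        scores[alt] = accumulate(drivers) * (1.0 - accumulate(blockers))
    total = sum(scores.values())
    if total == 0:
        return {alt: 0.0 for alt in scores}
    return {alt: s / total for alt, s in scores.items()}
```

Because the shares are normalized across the whole set, no alternative can be evaluated in isolation: raising one alternative's score necessarily lowers the others' shares, which is the balancing behavior described above.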
Also: while a real analog computer could be designed such that unstable conditions exist, the KEEL design tools highlight and disallow such designs.
The 3D graph above shows the integration of two non-linear functions (2 in and 2 out). With KEEL we are commonly dealing with many-to-many relationships that cannot be graphed in just 3 dimensions. The KEEL dynamic graphical language provides another way to "see" and "test" these types of relationships.
Question: How does KEEL differ from Fuzzy Logic?
Answer: KEEL has often been compared to Fuzzy Logic. Both KEEL and Fuzzy Logic can support relative, judgmental, analog values. Fuzzy Logic, however, is difficult to explain or audit in human terms. Fuzzy Logic is based on the concept of linguistic uncertainty, where human language is not sufficient to exactly define the value of words: cold, cool, warm, hot. Geometric domains are used to describe values: the degree of cold is described as participation in the "cold" geometric domain. Geometric domains are combined to approximate what the data is supposed to mean. The process is called fuzzification. There is art in selecting the appropriate geometric shapes that are used in Fuzzy Logic.
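For readers unfamiliar with fuzzification, a minimal sketch of triangular membership domains for temperature follows. This is our own illustrative Python (the domain boundaries are invented for the example and are not from any particular fuzzy system):

```python
def triangle(x, left, peak, right):
    """Degree of membership in a triangular geometric domain, in [0, 1]:
    0 outside [left, right], rising linearly to 1 at the peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzify_temperature(celsius):
    """Map one crisp reading onto overlapping linguistic domains.
    Choosing these shapes and boundaries is the 'art' noted above."""
    return {
        "cold": triangle(celsius, -10.0, 0.0, 12.0),
        "cool": triangle(celsius, 5.0, 12.0, 20.0),
        "warm": triangle(celsius, 15.0, 24.0, 32.0),
        "hot":  triangle(celsius, 28.0, 38.0, 50.0),
    }
```

A reading can participate in more than one domain at once; additional logic (rule tables and defuzzification) is still needed to turn these memberships into a system output.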
KEEL, on the other hand, defines information with explicit values and explicit relationships. KEEL supports the dynamic changing importance of information, which allows the reasoning model to change with the environment. Complex relationships can be traced to see their exact impacts. It is easy to see and audit the reasoning process. The graphical language can be animated to show decisions and actions at any point in time. This makes KEEL an appropriate choice when the decision or actions of the system need to be explained and understood.
Fuzzy logic also focuses on the interpretation of "individual signals". Additional logic is required to integrate fuzzified signals into a system design. KEEL, on the other hand, is a system processing model. The KEEL "dynamic graphical language" focuses on how the "system" integrates information (linear and non-linear).
Questions for your "fuzzy" designers:
Question: How does KEEL differ from Artificial Neural Nets (ANN)?
Answer: While both KEEL and Artificial Neural Nets support 'webs' of information relationships, ANN webs are taught by showing them patterns to recognize. When an ANN-based system makes a decision, it is based on the interpolation between points it was taught. ANN-based systems cannot explain why they make decisions. Because they are taught patterns, they have problems recognizing situations that they have not been taught. In other words, they do not react well to surprise situations. Since ANN-based systems may not be able to explain why they do what they do, the developers of ANN-based systems may be subject to liability concerns in safety-critical systems (if they make the wrong decision due to insufficient pattern training). When new information items need to be included in a neural net system, the entire training phase may need to be repeated. This may be a significant cost and time-to-market concern.
There may be some/many applications where one may desire human intervention to be incorporated into the decision-making process. Intervention may be non-linear. There may need to be several non-linear control signals. With several non-linear control variables in a system it may be difficult to appropriately train ANN-based systems with this type of control. With KEEL, it is easy to integrate human intervention into the decision-making process in order to adjust/modify how the system interprets information.
It may not be appropriate to integrate multiple independent problem segments into a single ANN controller, because all variables in a neural net are linked, even variables that are totally independent. This is not a problem with KEEL since the designer determines the relationships.
Absolutes (like emergency stop) should be handled external to the ANN design to ensure they are handled appropriately. One doesn't want an absolute to be determined by interpolation between taught patterns. For this reason ANN approaches may not be suitable for handling "boundary conditions" without external logic. This is not a problem with KEEL technology, as all models are explicit; they are not the result of interpolation between taught points.
KEEL webs model the judgmental reasoning of human experts by defining explicitly how information items are valued and how each information item interacts with other information items. KEEL Technology is an "expert system" technology, because models are created by a human domain expert. Because KEEL webs are designed with a set of visible graphical functional relationships, every decision / action can be explained and audited. KEEL-based systems are also easy to extend without starting over.
Question: How does KEEL Technology compare to Neuromorphic Engineering, also known as neuromorphic computing?
Answer: From Wikipedia:
"Neuromorphic engineering", also known as "neuromorphic computing" is a concept developed by Carver Mead in the late 1980s, describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times the term neuromorphic has been used to describe analog, digital, and mixed-mode analog/digital VLSI and software systems that implement models of neural systems (for perception, motor control, or multisensory integration).
A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.
Neuromorphic engineering is a new interdisciplinary subject that takes inspiration from biology, physics, mathematics, computer science and electronic engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems."
We would suggest that the Neuromorphic Engineering players are the AI researchers who want to build a human: neuro = neurons, morphic = organic form. Using the above definition to compare: you have to know biology, physics, mathematics, computer science and electronic engineering to do neuromorphic engineering.
Once you understand the KEEL dynamic graphical language, you only need to understand the particular problem you want to automate to utilize the technology. One of our patents covers the deployment of KEEL Engines as VLSI or FPGA circuits. You might want a hardware only solution for very specific problems, especially those that might require extremely high speed. We would suggest that with KEEL, you don't need to concern yourself with the intimate details of every neuron, etc. You focus only on how information items work together to make decisions and exert control. With KEEL, you still have to be aware of your system architecture. With KEEL, one isn't trying to create a human. One is trying to create a machine that is more adaptive to changing information, yet is still operating on policies created by humans.
Question: How does KEEL Technology differ from Agent Technology?
Answer: This is comparing a "How" to a "What". "In computer science, a software agent is a piece of software that acts for a user or other program in a relationship of agency. Such "action on behalf of" implies the authority to decide which (and if) action is appropriate." (Wikipedia). KEEL can add "and how" to the definition. KEEL provides a means of creating, testing, and auditing the behavior of complex information fusion models that can easily be integrated into a software agent. This may be especially valuable if the agent is responsible for interpreting complex information relationships for deployment in real-time systems, or where the economics of developing complex models can benefit from rapid development cycles and a small memory footprint. The KEEL provision of "auditable control decisions" may also satisfy the demands of some agent-based systems.
Question: How would you compare KEEL to probability based solutions (Bayesian / Markov / etc.)?
Answer: Probability based solutions work well when you can obtain good statistics. This might be the case in static situations, like diagnosing disease in patients where normal values are gathered across a large number of tests. However, it is often difficult to get good statistics, especially on non-linear systems composed of multiple inputs. With KEEL, one uses common sense to define how information is to be interpreted. For example, defining concern about running out of fuel is not really a probability problem. With KEEL, you would model how you interpret your concern for running out of fuel in your decision to pursue some goal. With KEEL you model how you want the system to process information. You do not create answers to specific problems. This way you create a robust solution that can solve a number of problems, not just an individually stated problem.
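The fuel example might be modeled as an explicit concern curve rather than a probability. The following Python sketch is purely illustrative (the curve shape, thresholds, and names are our hypothetical choices, not a KEEL model):

```python
def fuel_concern(fuel_fraction):
    """Hypothetical non-linear concern curve: no concern above half
    a tank, rising quadratically (steeply) toward empty. This encodes
    common sense about the situation, not statistics about it."""
    if fuel_fraction >= 0.5:
        return 0.0
    return (1.0 - fuel_fraction / 0.5) ** 2

def pursue_goal(goal_value, fuel_fraction):
    """The concern acts as a blocking signal against the drive to
    pursue the goal: an empty tank suppresses the goal entirely."""
    return goal_value * (1.0 - fuel_concern(fuel_fraction))
```

Because the curve describes how to interpret fuel level in general, the same model handles any fuel reading, rather than answering one individually stated problem.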
The thought process for a probability based system focuses on the best way to solve a problem, while with KEEL, one models what the system will do, given a variety of inputs and options.
Probability-based systems sometimes risk being corrupted by incorrect or misleading probabilities (overwhelmed with too much data and lacking auditability). There may be too much "trust" in the data.
Question: How does KEEL differ from conventional AI Expert Systems?
Answer: Conventional rule-based expert systems use approaches like forward and reverse chaining. Reverse-chaining systems start with a solution and work back through the data to determine whether the solution is valid. This approach has worked for simple decisions when some data might be missing. Forward-chaining systems start with the data and try to determine the solution, but suffer from missing information components. Rule-based systems supplied the concepts of confidence factors or certainty factors as part of the math behind the results. These types of systems were commonly used to evaluate static problems where the rules are fixed and the impact of each rule is stable. In many real-world decision-making situations rule-based systems quickly become complex and hard to understand. Computer programs based on rule-based systems are usually expensive to develop and difficult to debug. Furthermore, they can be inflexible, and if changes occur, may require complete recoding of system solutions. Bayesian analysis is another approach that focuses on statistical analysis and may be appropriate when there is an opportunity to gain strong confidence in probabilities. This approach may not work well in dynamic environments.
Rather than defining specific "rules", with KEEL one describes how information items are interpreted. This is more akin to defining policies than writing rules. When one writes "rules" one has a specific answer in mind. With KEEL, one is describing how the system is to respond to information. KEEL defines policies by identifying information items with a level of importance. Each information item can impact other information items to create a web of information. Components of the information web can be exposed as outputs of the system. The 'expert' (designer) defines the information items, defines how important each is, and defines how they inter-relate, using a graphical toolkit. This map of information defines the policies. Forward and reverse chaining models can be created in KEEL. But, with KEEL, this is done without textual programming. The graphical model allows complex situations to be modeled rapidly and tested before being translated to conventional code. In some cases, KEEL systems can use information derived from Bayesian Analysis, Fuzzy Logic, ANN systems as well as sensors, database driven solutions, or human input.
Within the discussion of conventional AI Expert Systems one might be interested in a comparison of a KEEL Engine and an Inference Engine:
From Wikipedia: "In computer science, and specifically the branches of knowledge engineering and artificial intelligence, an inference engine is a computer program that tries to derive answers from a knowledge base. It is the "brain" that expert systems use to reason about the information in the knowledge base for the ultimate purpose of formulating new conclusions." Taken at this level, one could say that a KEEL Engine would be a form of an inference engine because it uses a human expert's understanding of a problem domain captured in the engine and because it focuses on "reason" or judgment to determine outputs.
However, when one looks deeper into the conventional definition of an Inference Engine one quickly identifies some differences and some similarities. Using the Wikipedia definition and an associated section on Inference Engine Architecture, there is a suggestion that an inference engine is based on a collection of symbols upon which a set of firing rules are executed. Broadly interpreted, KEEL could fit this definition: data items, their values, and how they are inter-related control the functionality of the system.

The concept of "firing rules", however, is not used in KEEL. "Rules" are not things that a KEEL designer considers. A KEEL designer is primarily concerned with how information items change in importance and how information items are inter-related. Firing rules expose intermediate states that are observable and may have some value, for example during debugging. Tracing wires and visually observing the importance of information allows a broader system view of the problem domain than watching individual rules fire.

The Wikipedia definition suggests that Inference Engines have a finite state machine with a cycle consisting of three action states: match rules, select rules, and execute rules. In KEEL, the designer is not conscious of "states", as KEEL is a more organic concept. With KEEL, conflicting rules are "balanced collections" of data (as in the concept of balancing alternatives). There is also a suggestion that an Inference Engine has rules defined by a notation called predicate logic, where symbols are valued information items. With KEEL, the information items (aka symbols) in the KEEL Engine do carry value information, but there is certainly no predicate logic notation involved; the KEEL dynamic graphical language provides the notation. Applications where the values of variables change with non-linear functional relationships are a target of KEEL Technology.
If one includes in the definition of an inference engine that it includes three elements: 1) an interpreter, 2) a scheduler, and 3) a consistency enforcer, then KEEL is further differentiated.
With KEEL the "expert knowledge" is a definition of how information items (symbols using the above definition) are valued and inter-related. There are no visible "intermediate states" in the execution of a KEEL Engine. KEEL Engines equate to a single "information fusion formula" (that might be very complex) where all inputs are effectively processed simultaneously (as in an analog system where there is no value in contemplating intermediate states as the signals propagate through the system).
It is likely that one "builds" an inference engine, and each one will likely be different. All KEEL Engines (the processing function), on the other hand, are effectively identical (some optimization may be used). With KEEL, the problem definition is maintained in tables. The table data will be different, but not the code. This is a subtle difference that may be most important to embedded applications. It may also be important for organizations that need "certified code", where the reduced cost of certification may be an advantage.
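The idea of identical engine code driven by differing table data can be sketched as follows. This is illustrative Python only; the table format and the update rule are our assumptions, not the code the KEEL Toolkit actually generates:

```python
# The problem definition lives entirely in data. Swapping models means
# swapping tables; the processing function below never changes.
ENGINE_TABLE = [
    # (node, base_importance, driver_nodes, blocker_nodes) -- hypothetical
    ("throttle", 0.5, ["goal"], ["alarm"]),
]

def run_engine(table, inputs):
    """One generic pass over any table. `inputs` must supply every
    signal referenced by the table's driver/blocker lists."""
    state = dict(inputs)
    for node, base, _, _ in table:
        state.setdefault(node, base)
    for node, base, drivers, blockers in table:
        drive = 0.0
        for d in drivers:                      # diminishing returns
            drive += state[d] * (1.0 - drive)
        block = 0.0
        for b in blockers:                     # blockers take precedence
            block += state[b] * (1.0 - block)
        state[node] = (base + (1.0 - base) * drive) * (1.0 - block)
    return state
```

Since `run_engine` is the same for every model, only the tables need re-validation when a design changes, which is the certification point made above.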
Question: How does KEEL differ from conventional rules engines / rules-based systems?
Answer: In a sense, KEEL could be considered a rules-based system (KEEL Engines compared to rules engines). However, conventional rules-based systems are based on IF | THEN | ELSE discrete logic. These systems work well when dealing with compartmentalized data, because one is dealing with simple comparisons, where the rule either fires or doesn't fire. When one is dealing with more complex relationships (where multiple, non-linear inter-relationships between information items exist), the IF | THEN | ELSE logic may become very complex and hard to manage. Examples include situations where pieces of information are treated differently as the situation changes, or where the interpretation depends on the value of other pieces of information (time and space).
Using conventional rule-based systems, it may be hard to visualize the rule set as a whole, because one is sequentially processing the rules, one at a time.
KEEL, on the other hand, processes all information items "together" in a balancing process. When one creates the model using the "dynamic graphical language", one is interacting with the entire problem set simultaneously. Rules defined with the KEEL dynamic graphical language define how information items are to be interpreted in a dynamic (perhaps non-linear) environment that would be very difficult to define with conventional sequentially processed rules.
With conventional rule-based systems, one may be defining how to solve "a" problem. With KEEL, one is defining how to interpret information in a more abstract way that allows the KEEL-based rules to address dynamic problems and define more "adaptive" rules.
Question: How does KEEL differ from scripted AI languages like CLIPS?
Answer: They are completely different for several reasons. First, CLIPS is a toolset and a language. KEEL is a technology that incorporates a processing engine architecture, as well as a toolset and language.
CLIPS is like a number of "scripting" languages that utilize text-based "rules". It has the same fundamental IF | THEN | ELSE structure of most conventional programming languages. These languages provide a number of services that support what we call "Conventional AI" applications. It appears that one programs CLIPS like any other "conventional" programming language, meaning that one uses an "editor" to write the script. This can then be compiled into an application. At the machine level, one is probably left with (potentially large) methods that are described with the script. Like any scripted computer language, they may suffer from typing errors as well as logic errors. Debugging these applications is often a time-consuming activity.
CLIPS is completely different from KEEL. We don't suggest that there is anything wrong with scripted languages or conventional programming tools. We suggest that these scripted languages map to a human's left-brain activity. KEEL focuses on "judgment and reasoning" that, we suggest, are more image-processing (right-brain) functions than static rule-based functions. KEEL is more like differential calculus, except that it is done graphically rather than with scripted formulas. There is no "text" in KEEL, except for naming data items, and the names are not used at all in the logic. This means you cannot have a "typing" error. You only have to deal with the relationship issues.
With KEEL one is designing systems where the solution is the analog (relative) control of multiple variables. KEEL focuses on solving dynamic, non-linear problems, where the importance of information items change continually. It addresses inter-related problem sets, where each solution impacts other problems within the same problem domain.
With scripted languages the designer is thinking in terms of defining "rules" of how the system is to operate. It is common for a designer to consider a rule from a static situation. With KEEL, the designer is thinking about how information items interact in the solution of a problem. The designer thinks of dynamic, non-linear relationships (curves). The designer is observing the system balance alternatives. The designer then determines if the performance is appropriate.
Question: Why don't you include a database in the KEEL Engine?
Answer: KEEL Engines are small-memory-footprint functions (or class methods, depending on the language in which they are deployed). There are no external libraries required. When targeting small, embedded applications a user may choose to use dedicated memory arrays for data storage. In other cases historical data may be saved in external databases, and in still others no historical records may be needed at all. In software applications the user may already have a database in place. Each kind of application may have different needs that benefit from a different database architecture. Since KEEL Engines do not care where information comes from (sensors, human input, local or distributed databases, etc.), we leave that option open to the system engineers.
Question: Why do you suggest there is a different "mindset" for the developer?
Answer: We suggest that concept development in KEEL is different from other paradigms. Consider, for example, developing a simple computer program. The "programmer" has an idea or is given an idea for a program. He/she may or may not outline the program structure using some kind of graphing technique to show data flow. Then coding begins. The programmer is involved in thinking about how to structure the code to make things happen. The programmer's mindset is focused on symbol names and language instructions and how to use the computer instructions to accomplish the task. Depending on the program, the programmer may build a user interface to test the program. Only then can anyone evaluate how well the programmer accomplished the task.
With KEEL, the "designer" is considering many things when creating the application. The designer is considering:
Using KEEL, all of this is done graphically, so the designer is always involved in the problem (not how to create "code" to express the problem). The designer is constantly "testing" the design as it is created to see how it reacts. When the model is complete, it can be given to the "programmer" to integrate it with the rest of the system.
With KEEL, one is commonly dealing with complex scenarios that many times have non-linear components (the changing importance of information items based on time, space or simply the relationships between other information items). Many times the problems have never before been characterized because of their complex inter-relationships. This means that the designs evolve as they are being developed and the designer clarifies his or her understanding of the problem. The KEEL "dynamic graphical language" greatly accelerates this process. Changes can be made and reviewed immediately. There is no need to translate the idea into "code" where the code has to be debugged before it can be evaluated.
For this reason, we suggest that domain experts (not necessarily mathematicians or software engineers), can create and debug the models for these non-linear systems, greatly reducing lifecycle costs to the user/organization.
Another benefit is the visibility of the solution. If these complex behaviors were encoded using common IF | THEN | ELSE logic, the result would likely be a large monolithic code segment that would be difficult to understand. With KEEL, the model is easily visible: inter-relationships and the instantaneous importance of information can be seen while designing the system. Auditing of decisions and actions is easily available. With KEEL's "animation" capabilities, one can even monitor the reasoning capabilities of systems while they are in operation.
Question: How does KEEL compare to curve fitting approaches?
Answer: A common practice is to measure the real world and attempt to create a "formula" from the data accumulated.
This approach is used when there is little (or no) "understanding" of the problem. By "understanding" we mean that an expert doesn't know enough about the problem to describe it in a formula. Data is used to derive a synthetic understanding. This approach may use some kind of curve fitting (pattern matching) approach. One risk to this type of approach is that the measured data may be subject to external influences that may not have been measured or accumulated in the dataset. This results in "garbage in - garbage out" types of problems. If this happens, one may not even recognize that there is a problem. It is often difficult to trace back to how the data was gathered and accumulated.
With KEEL, one uses a human's understanding (even if it is limited), to model the system. The dynamic nature of the KEEL language helps the human test the model under various scenarios by stimulating inputs and observing the response. This is an iterative process as the human fine-tunes his/her understanding of how the information is to be interpreted under varying situations. One could say that it is even an evolutionary process, since the models are likely to be refined over time to take advantage of new information items and new control variables. KEEL focuses on creating models that will get better and better over time. The human "learns" how to describe his/her understanding as the models are being created and refined. The result is a visually explicit model that allows one to "see" how data items are interpreted at any instant in time. This visually explicit approach allows the models to be challenged. This is all part of the evolutionary process. In some cases, KEEL-based systems can be designed to evolve on their own. These systems can use a variety of techniques to incorporate adaptive behavior.
Policies (for humans or for machines) described in KEEL are explicit, compared to those described with a human (verbal or written) language, where most terms are subject to individual interpretation.
Question: Does KEEL learn?
Answer: The term "learning" is subject to many interpretations in the cognitive domain. We would say that KEEL technology adapts, rather than learns. We would suggest that a system that learns will have to be able to automatically accept new information (inputs) that it has never seen before and automatically determine how that new information relates to and impacts all of its existing information. KEEL technology is an expert system technology that requires a human expert to define the reasoning model. The human expert must define how each input relates to all the other pieces of information and what outputs are to be created. In this way, the human expert defines how the system will adapt to changing inputs. The human expert is in control.
We would suggest that a true learning system will decide, on its own, how to integrate new information sources. A human might be termed a true learning system. At birth, the human knows nothing (from a judgmental reasoning standpoint). The baby is exposed to new information and evolves to adulthood. Some adults turn out good, and do good things; and some turn out bad, and do bad things. We would suggest that this evolutionary process is not what we want in machines that are mass produced. There would be the potential of creating devices that evolve differently and perform differently. This could make them uncontrollable.
Auditable Teachability: We would suggest that KEEL Technology provides services to create "teachable" systems. A number of services and techniques are built into the KEEL Toolkit that provide system engineering tools to support systems that need to be periodically updated and extended.
There may be applications, however, that can benefit from the incorporation of true learning abilities. It may be possible to use KEEL engines as policemen to oversee this type of system. This is an area of potential research.
So our answer to the question of whether KEEL learns is NO. KEEL can adapt, if that is what the expert wants it to do.
Question: Is KEEL scalable?
Answer: KEEL is scalable in multiple ways. First, a single KEEL "engine" (the encapsulation of a KEEL design) uses long integers as identifiers for inputs and outputs, so the number of inputs and outputs is limited by the size of a long integer on the chosen target platform. This is the physical limit on the size of a single KEEL "engine". Realistically, however, we just say that the size of a KEEL "engine" is unlimited, because it would probably not be "practical" to create such an engine. It is more appropriate to break "large" problems into manageable partitions. If these "chunks" are to be processed on a single microprocessor (computer), then we have system integration tools that will automatically wire multiple engines together (so they can be compiled together).
Sometimes the discussion of scalability is related to "performance". Performance is related to complexity, both in 1) the number of inputs and outputs (Positions and Arguments in KEEL terminology) in the KEEL design, and 2) the number of functional relationships between data items (wires in KEEL terminology). The more complex the system design, the longer it takes to process. KEEL Technology supports two processing models. The Normal model is slightly larger, because it contains two additional tables. The Accelerated model is slightly smaller, because it omits those two tables. Which model is faster depends on the type of problem being solved: a problem with many inputs that all (or mostly all) change constantly would benefit from the Accelerated model, while the same problem with only a few changing inputs may operate faster under the Normal model. The Accelerated model will operate with less jitter.
On the very small scale, KEEL "engines" are optimized to the extent that services not required by the cognitive design are left out of the conventional source code. This addresses the need for embedding cognitive solutions in very small devices (e.g. sensor fusion) and may reduce the approximately 3K footprint even further.
We also say that KEEL is architecture independent. Here we are talking about how KEEL "engines" are distributed. Because KEEL engines operate independently (at any instant), they operate just like a collection of humans, each with their own responsibilities. Some kind of network infrastructure can tie them together to allow them to share / exchange information. Each entity (human or KEEL engine) uses the information available at the time the information is processed. While we do have tools to integrate multiple KEEL engines together on a single microprocessor, we do not supply tools to integrate information across a network. There are various commercial and proprietary solutions to handle this infrastructure and KEEL should be able to be integrated into any of them.
Question: Is KEEL suitable for upgrading existing systems, or only for integration into new systems?
Answer: Most certainly KEEL is suitable for integration into existing systems. It has a very simple interface (API) that allows KEEL cognitive engines to be integrated into any system. All that is required is to load normalized inputs into an input table and make a call to the "dodecisions" function (the KEEL Engine). Results are then pulled from an output table. Sample code that packages this entire process into a function is autogenerated along with the engine code itself, to accelerate even this simple step. So, assuming that one is capable of adding a simple function to an existing design, it should be possible to integrate KEEL "Engines" into existing designs.
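As a minimal sketch of that call sequence (the helper names, the input/output table layout, and the 0.0-1.0 normalization range shown here are illustrative assumptions; only the "dodecisions" name comes from the description above):

```python
# Hypothetical sketch of integrating a KEEL Engine into existing code.
# The table layout and normalization range are assumptions, not
# Compsim's actual generated code.

def normalize(raw, lo, hi):
    """Scale a raw reading into the engine's normalized range."""
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

# 1. Load normalized inputs into the input table.
keel_inputs = [0.0] * 3
keel_inputs[0] = normalize(raw=72.0, lo=0.0, hi=100.0)  # e.g. temperature
keel_inputs[1] = normalize(raw=0.4,  lo=0.0, hi=1.0)    # e.g. risk factor
keel_inputs[2] = normalize(raw=15.0, lo=0.0, hi=60.0)   # e.g. time pressure

def dodecisions(inputs):
    # Placeholder for the autogenerated engine body; here it simply
    # averages the inputs so the sketch is runnable.
    return [sum(inputs) / len(inputs)]

# 2. Call the engine.  3. Pull results from the output table.
keel_outputs = dodecisions(keel_inputs)
```

The surrounding application would then map each normalized output back into its engineering units before acting on it.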
For totally new systems one can accelerate the overall development process by allowing domain experts to develop operational code describing "behavioral functionality" and allowing the system engineers to focus on system architecture and infrastructure issues.
Question: How can you model physical systems with KEEL?
Answer: While we know that machines and physical objects don't have a mind in the sense of a human mind, they sometimes exhibit human-like behavior. For example, your car breaks down at the most inopportune time. It is as if the car knew when and how to cause aggravation. A rain storm causes a leak in the roof, but the damp ceiling is visible in a completely different location. It is as if the water had a mind of its own to search for a path to the ground.
In these and other similar cases, elements of the system are changing. There is a balance at any instant in time between all elements. There is a certain wear of mechanical components. There is a certain set of pressures that are exerted at any instant in time. The pressures cause change to the physical characteristics over time. With KEEL, it is easy to model the behavior of these pressures. One can visualize impacts to physical structures. If you have good information on the driving and blocking factors you can model the interactions. This approach can be used for diagnostics and prognostics modeling and adaptive control. Devices can then react to degraded components and they can react to external influences, all on their own.
Question: Why do you call KEEL a "technology" rather than "tool"?
Answer: We define technology as a "way to do things" (as in a technical approach) and a tool as something that "facilitates the implementation of the technology".
We call KEEL a 'technology' because it is more than just a tool. It is a new way to process information. A MIG welding torch is a tool. With it, you can create a wide variety of mechanical structures. The KEEL Toolkit (with the dynamic graphical language) includes a set of tools used to create KEEL 'Engines' which encapsulate the new way to process information. The KEEL technology 'patent portfolio' covers the basic algorithms, the architecture for the KEEL Engine, and the KEEL dynamic graphical language. Licensees of KEEL Technology get access to the toolset and the right to embed the KEEL Engines in their devices and applications. Licensing KEEL Technology would be like licensing the right to use a MIG welding torch to build mechanical structures composed of MIG welded components. NOTE: A MIG (metal inert gas) weld is a specific type of weld. A KEEL Engine processes information in a specific way.
Question: What do you do if you don't think your domain experts "think in curves"?
Answer: While we suggest that when one designs a KEEL system, one "thinks in curves" (because we are commonly focusing on non-linear systems), one almost never starts with this level of complexity. Non-linear relationships between data items are usually an evolutionary enhancement to a design. The process of developing KEEL solutions is to first identify the outputs (or the items that can be controlled). Then the inputs to the system are added (the items that contribute to controlling the outputs). Then the linkages between items are added with wires. These relationships are almost always linear at the beginning. Remember, this is done in seconds, as "positions and arguments and wires are just dropped on the screen". Then (maybe) the importance of different inputs is adjusted. Maybe several inputs need to be combined before controlling another output. KEEL designs are extended. The designer drops functionality into the design and tests to see that it performs.
This is an iterative (design, test, modify) development process. Non-linear relationships are gradually incorporated into the design. So, while we suggest that domain experts "think in curves", this is just a natural progression of the design refinement process. It is also easy to integrate and test these types of relationships. When one goes through KEEL training, the concept of "thinking in curves" is taught so the user gets experience creating these types of systems.
In some cases, linear or state change control is satisfactory for production systems. But in other cases, as the design is refined, it is obvious that some kind of non-linear relationship (curve) might be appropriate. These can be merged into the design from the library of curves supplied with the toolkit, or they can be created by the domain expert and integrated into the design.
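As an illustration of that progression (a hedged sketch: the logistic curve and its parameter values are assumptions, not the curve library supplied with the toolkit), refining a linear relationship into a non-linear one might look like:

```python
# Hypothetical sketch of "thinking in curves": the same normalized
# input wired through a linear relationship vs. a non-linear curve.
import math

def linear(x):
    """Starting point: importance grows proportionally with the input."""
    return x  # x is normalized to 0.0..1.0

def s_curve(x, steepness=10.0, midpoint=0.5):
    """Refinement: importance stays low, then rises sharply near the midpoint."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# As the design is refined, the linear wire is swapped for the curve:
reading = 0.6
print(linear(reading))   # 0.6
print(s_curve(reading))  # ~0.73: past the midpoint, importance climbs fast
```

The design process is the same either way: drop the relationship into the model, stimulate the input, and watch how the output responds.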
Question: If a KEEL-based system is based on a human expert's opinion, what do you do if there is more than one expert and they disagree?
Answer: It is true that the rules developed with the KEEL Toolkit mimic how an expert models the reasoning process. When there is more than one expert, there is the possibility (likelihood) that there will be some level of disagreement. However, a company, organization, or individual still has to take on the responsibility of deciding on a course of action (which expert to trust). The same is true with a KEEL design. Someone still has to take responsibility for choosing which expert is correct, or at least for deciding how to integrate the opinions of both into a final decision or action.
While KEEL Technology will not make that decision for the system designer, the graphical language exposes all of the reasoning. When experts describe their decision-making reasoning using the English language (or another verbal or textual language), one is usually left with only a loose definition of the reasoning that was used. In other words, the English language is not very good at describing dynamic, multi-variable, non-linear, inter-related problem sets. This is especially true of safety-critical systems or manufacturing systems. This is why these types of systems are commonly described with conventional computer languages.
So our response to the question focuses on why it is necessary to describe a human expert's judgmental reasoning with a graphical language. This graphical language is explicit. It can be reviewed, tested, and audited. In this manner, when there are disagreements between experts, the model can be investigated and refined over time.
Question: How can you use KEEL if you don't know what the outputs and inputs are?
Answer: In complex problem domains it is common that you don't know all of the inputs and outputs up front. It is likely, however, that you can identify at least some of the outputs. These will be the control variables that you will use to respond to your problem domain. Given the output variables you know, you should be able to identify some potential inputs. A key attribute of KEEL Technology is that it is so easy to add inputs and outputs to the model using the KEEL "dynamic graphical language". This "ease of use" characteristic should help you "think" about how the inputs and outputs collectively inter-operate in your problem domain.
Another characteristic of "solving" complex problems is that the solution may evolve. You may add new sensors, whose data can participate in the solution. You may identify a new "symptom" that impacts the overall goals of the system and needs, either to be controlled or taken into account.
The KEEL dynamic graphical language helps one think about solving the problem and gives you the opportunity to stimulate inputs and visualize the results. Addressing these kinds of problems with KEEL Technology (language and execution environment) is much simpler than attempting to describe hypothetical problems to a mathematician and getting his/her solution implemented using conventional software techniques, where it can only then be tested.
In summary: When you are not sure of the inputs and outputs, KEEL Technology makes it easy to hypothesize about what they might be and how they might inter-operate. When you are comfortable with the results, the model can easily be deployed in more advanced simulations and emulations and finally delivered in a product or service. When you have defined and documented your model in KEEL, it provides an explicit explanation that can always be reviewed and extended with ease.
Question: How are the concepts of "surprise" and "missing information" handled by KEEL Technology?
Answer: One of the problems with ANN-based (Artificial Neural Net) solutions is that they do not react well to surprise. ANN-based systems are pattern matching systems that are taught. If they have not been taught a particular pattern of inputs, they will just interpolate between what they have been taught and create an answer. With KEEL, one describes how to interpret information, not patterns. There is no interpolation between taught points. With KEEL, one can create systems that decide how to respond to collections of abstract information points. There is no interpolation or undetermined action.
The concept of "surprise" can be decomposed at multiple levels. First, one can encounter scenarios with a known set of inputs combined in unexpected ways. With KEEL, one can examine the system and observe how it was interpreted. If changes are needed they can easily be made. With ANN there is no way to know "how" an answer was derived, so it can only be addressed with increased training.
Another way of decomposing "surprise" is for a system to encounter new variables that impact the problem domain and were never before considered. This would be like a "person" walking down the street and a Martian appearing from behind a bush. An ANN-based system will respond, although it is not clear how. We have some ideas of how one might build a KEEL-based system that would know how to handle this kind of surprise and would like to discuss this with selected organizations. The abstract "Considerations for Autonomous Goal Seeking" identifies the project.
"Missing information" can be handled in several ways. A piece of information may be necessary to make a certain decision (information that supports a decision or action). If it is missing the decision could be blocked. This would be a forward chaining approach. For example: A decision must be supported by A, B, and C. Alternatively, a piece of information could be used to reject a decision or action. In this case, the designer could create a model where missing blocking information would not block the decision or action. This second option would provide a reverse chaining approach. Complex subjective decisions are commonly a fusion of supporting information and potential blocking information. For example: an analysis of an image could identify an object with numerous features; among them aircraft wings and an aircraft tail. The interpretation "could" demand that both wings and tail are required to know it is an airplane, or it could justify the decision if either aircraft wings or aircraft tail are observed. The interpretation "could" exclude a decision that it was a car if the object had either wings or a tail or other features not belonging to a car.
Question: Isn't the KEEL graphical language just another way to create a formula?
Answer: It is true that the KEEL graphical language is translated into conventional computer languages for processing, which could possibly be expressed as a textual/numeric formula. However, when the problem is a multi-variable, multi-output, inter-related, non-linear, dynamic system, the development and testing of such a numeric formula (using the common definition of a mathematical formula) would be difficult (very difficult)! Using the KEEL graphical language and the dynamic support built into the KEEL Toolkit, these systems are relatively easy to construct and test. Also, because the KEEL language can be deployed into several different conventional languages, it is easy to build test systems (simulators or emulators) to perform extensive system tests before deploying them in production environments. So the answer is "yes" (formulas are created), but because of the complexity of the formulas, they can best be visualized in the KEEL graphical language.
Because another benefit of using KEEL Technology is the ability to audit complex systems, the technology supports reverse engineering of actual actions. By importing snapshots of real-time input data into the design environment, one can "see" the reasoning performed by production applications. This is accomplished by viewing the importance of individual data items and tracing the wires to view inter-relationships.
One additional advantage in the use of KEEL Technology and the KEEL graphical language is built into the KEEL Toolkit: it constantly monitors for unstable situations and warns the designer if such a model would be created. Because KEEL Technology balances information, it would be possible to create a system that would never stabilize (go left - go right - go left - go right....). The design environment watches to ensure the designer does not create this kind of system. Should one attempt to develop this type of system manually (hard-coded formulas), additional tools may be necessary to ensure that an unstable design is not created.
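The kind of never-settling behavior described above can be illustrated with a small sketch. This is a hypothetical stand-in: the update rules and the convergence test are assumptions, not the Toolkit's actual monitor.

```python
# Illustrative sketch: iterate a feedback relationship and flag a
# design that never settles (go left - go right - go left ...).

def settles(update, x0=0.0, max_cycles=100, tol=1e-6):
    x = x0
    for _ in range(max_cycles):
        nxt = update(x)
        if abs(nxt - x) < tol:
            return True        # the balance converged
        x = nxt
    return False               # still moving after many cycles: unstable

stable_rule = lambda x: 0.5 * x + 0.25   # damped: converges toward 0.5
unstable_rule = lambda x: 1.0 - x        # flips every cycle: oscillates forever

print(settles(stable_rule))    # True
print(settles(unstable_rule))  # False
```

Catching the oscillating case at design time, rather than after deployment, is the point of the built-in monitoring.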
Question: How does the KEEL graphical language differ from other "graphical languages"?
Answer: Most other graphical languages use wires or arrows to show data flow or logical processing flow between graphical components. These are sometimes categorized as "directed graphs". The graphical components themselves commonly encapsulate functionality. This is completely different from the KEEL graphical language. With KEEL Technology, the functionality is defined by the wires between connection points on the graphical icons. The functionality defined in this way is explicit. Within KEEL designs, there is no "hidden" processing within boxes. While there is a functional ordering from source connection points to sink connection points (information provider to information user), this is a functional definition rather than a data-flow process. The wires define how one data item impacts others. During the "cognitive cycle" all relationships are evaluated. This is similar to processing a formula, where one is not interested in what happens part way through processing the formula; one is only interested in the results. Within the KEEL graphical language the bars represent values, not functions. This is somewhat similar to "microcode" that would be processed behind the scenes in a microprocessor during an instruction cycle.
One can also look at most other graphical languages as a decomposition of a design into pieces that can be viewed as separate functions that are processed sequentially. Information "flow" is critical to their understanding. With KEEL the concept is that all items are processed during the same time frame (the cognitive cycle). Order is not important to the outcome and intermediate states are not important as they are not visible to the outside world.
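A sketch of this evaluation model (hypothetical: the snapshot/wire representation is an illustrative assumption, not the engine's internal code) might be:

```python
# Illustrative sketch of a "cognitive cycle": every relationship is
# evaluated from the same input snapshot, so the order of evaluation
# cannot change the outcome, and only the final output table is visible.

def cognitive_cycle(inputs, wires):
    snapshot = dict(inputs)             # freeze inputs for this cycle
    outputs = {}
    for name, fn in wires.items():      # iteration order is irrelevant:
        outputs[name] = fn(snapshot)    # every wire reads the same snapshot
    return outputs                      # only final values are published

wires = {
    "throttle": lambda s: s["demand"] * (1.0 - s["risk"]),
    "alarm":    lambda s: max(s["risk"], s["fault"]),
}
print(cognitive_cycle({"demand": 0.8, "risk": 0.5, "fault": 0.1}, wires))
# prints {'throttle': 0.4, 'alarm': 0.5}
```

Because every wire reads the frozen snapshot, intermediate states never leak to the outside world, which mirrors the formula-processing analogy above.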
Another view of most other graphical languages is that they can perform (or encapsulate) conventional computer code that has the capability to perform mathematical functions, like 1 + 1 = 2. The KEEL language has no mathematical functions. It has no conditional branch instructions. It can, however, identify the most important value. It does provide ways for one item to influence other items in non-linear ways. Conventional logic (external to the KEEL Engines) is still used to perform mathematical functions. This is similar to left and right brain functionality. Conventional logic is used for left brain functions and KEEL is used to perform (subjective, image processing) right brain functions.
Simulink with Matlab (from Mathworks) is an example of a graphical language with Matlab-encoded functionality inside of Simulink graphical objects. IEC 1131 (IEC 61131-3) Grafcet is another similar system. The KEEL Toolkit can be used to create designs that can be translated to Octave (the open-source counterpart of Matlab) source code. The KEEL Toolkit can also be used to create "PLC Structured Text" that can be integrated into IEC 1131 function blocks. KEEL focuses on creating models that would be deployed inside the Simulink graphical objects or IEC 1131 function blocks.
We have been asked whether one could create the KEEL "dynamic graphical language" with Matlab / Simulink. The answer is potentially yes, as the KEEL Toolkit, with its dynamic graphical language objects could be created with any software tools capable of creating graphical objects, drawing wires that tie the objects together and constructing the KEEL functionality behind the graphics. However, the KEEL "dynamic graphical language", the way that KEEL Engines are structured and how they process information are all covered by granted US patents. So to create the KEEL Toolkit (KEEL dynamic graphical language) would require a license from Compsim to do so.
Question: How does your "tool" compare to "LabVIEW" (National Instruments) or other similar HMI (Human Machine Interface) tools?
Answer: The person asking this question might be focusing on the Look and Feel of a graphical tool rather than the target implementation.
For example: The purpose of LabVIEW is to create an interactive view of a process. The primary purpose of the KEEL Toolkit is to create an executable cognitive function that can be embedded in an application, with the added benefit of allowing the application to show how all of the information in the application is actually being processed. LabVIEW is intended to expose only certain values from a process that some engineer decided were of interest. It would be hypothetically possible to duplicate the rendering of the KEEL language in LabVIEW, but this would significantly increase the cost of managing the entire project, since with KEEL no separate HMI development is necessary. On the other hand, LabVIEW (or any other HMI display) could be driven by an embedded KEEL Engine, if that was desired.
One response is to ask whether the solution they need targets an embedded platform or architecture. Example: a KEEL Engine (created with the KEEL Toolkit) could be targeted for deployment in an Arduino microcontroller for a micro-UAV. A micro-UAV may have multiple KEEL Engines (or small functions for adaptive command and control). Would you consider using LabVIEW (or any of the other HMI tools) to create embedded, adaptive command and control functionality? We suggest the answer is "no". On the other hand, there may be some value in embedding KEEL functionality inside of a LabVIEW application, as KEEL's non-linear, adaptive functionality is more cost-effective to develop. Using the KEEL Toolkit, a KEEL Engine design could be saved as a C function and integrated per National Instruments' sample: https://decibel.ni.com/content/docs/DOC-1690 Similarly, we have an Application Note that describes the process of inserting KEEL Engines into GE's PACSystems through their RLL programming environment: http://www.compsim.com/keel_app_notes_available.html
Another response is to suggest that LabVIEW provides a "solution", or a complete deliverable package. The KEEL Toolkit is used to create a function that provides only a component of a solution (a cognitive function to think about a set of inputs and determine what should be done about them). This equates to a formula (just one formula that would be very difficult to develop without the KEEL Toolkit). Applications could contain one or many KEEL Engine "components". KEEL functions (KEEL Engines) would be "called" by the broader application software when the KEEL information fusion functionality was required.
It is important to remember that with the KEEL Toolkit we are attempting to provide a way for a domain expert to explore and test a desired behavior before committing to a delivered solution. Much of the focus will be on establishing the "value" of pieces of information in a dynamic adaptive setting. This is accomplished with the KEEL "dynamic graphical language". It is common for this work to evolve as the model is developed and the interactive "visual value system" is refined. The domain expert will likely not know the values in advance.
It may be more appropriate to compare the KEEL Toolkit (tool) to MATLAB / Simulink where the target is a "function" that could be integrated in a target application. See "How does the KEEL graphical language differ from other 'graphical languages?'".
Question: What types of problems are best suited for a KEEL solution?
Answer: KEEL can be used to solve problems
- Where human experts are required to interpret information to make the best decisions or take the most appropriate actions
- Where devices must operate autonomously and make judgmental decisions on their own
- Where devices are required to make control adjustments / decisions when human operators are not present
- When repetitive human judgmental decisions are prone to error
- Where trained operators are potentially tricked into overlooking critical attributes
- Where human experts take too long to make judgmental decisions
- When the judgmental decisions of the expert system must be explained (when it is important to know why actions were performed.)
- When it is not economical to develop and maintain straight line code (IF, THEN, ELSE) because the problems are complex (non-linear systems)
- Situations where the environment is dynamic, and the importance of information changes, and the system must react to change
- Where there is an advantage to be able to create one design and execute it on multiple platforms: device, software, web
- Where the small memory footprint of a KEEL solution is an advantage
- Where architectural issues may prohibit other solutions (KEEL technology is architecture independent)
- Where there are many complex models to be created and "ease of use" / "rapid development cycles" are required.
Types of management decisions:
Some example decisions and actions commonly allocated to management include:
- Prioritize / Re-allocate / Re-direct
- Do / Don't Do (choose between separate options)
- Expand / Downsize (Add / Remove)
- Reward / Punish
These types of decisions and actions require that management has the capability to understand and measure pieces of information in order to respond.
Question: What are some examples of "behavior", when you say that KEEL can be used to model "behavior"?
Answer: A simulated enemy might move about its environment in a normal manner when it thinks it is safe. As the environment changes OR as its goal changes, it may move more carefully (how carefully may be dependent on its situation). If it feels threatened, it may attempt to hide if there are available hiding places, or it may attempt to merge into a crowd (it will observe all opportunities and choose the most appropriate activity and how to perform it). If it is approached while hiding, it may continue to hide; it may attack, run, surrender, or blow itself up. What it does is dependent on its character and what it feels is most appropriate based on its history, its beliefs, or its perception of its own future (all of which can evolve over time). This decision will change as its threat moves closer.
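The enemy behavior described above can be sketched, very loosely, as a scoring function over normalized situation inputs. This is a hypothetical illustration only (the function and input names are our own, not KEEL constructs); a real KEEL Engine would express these relationships graphically and far more richly:

```python
# Hypothetical sketch (not a KEEL Engine): the agent scores each candidate
# action against normalized situation inputs (0-100) and picks the
# highest-valued one, mirroring "choose the most appropriate activity".

def choose_action(threat, cover_available, crowd_nearby):
    """All inputs are normalized 0-100, as in a KEEL design."""
    scores = {
        # Hiding is attractive only when cover exists and threat is high.
        "hide": threat * (cover_available / 100.0),
        # Blending in depends on a crowd being present.
        "blend_in": threat * (crowd_nearby / 100.0),
        # Moving normally dominates when the perceived threat is low.
        "move_normally": 100.0 - threat,
    }
    # Highest score wins; ties resolve to the first-listed option,
    # analogous to KEEL selecting the first highest-rated answer.
    return max(scores, key=scores.get)

print(choose_action(threat=80, cover_available=90, crowd_nearby=10))  # hide
print(choose_action(threat=10, cover_available=90, crowd_nearby=10))  # move_normally
```

As the threat input changes over time, the winning action changes with it, which is the adaptive quality the paragraph above describes.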
A trainee (reacting to the enemy) will react at different times, causing the simulated enemy to perform more realistically (and differently, as the enemy will be reacting to a new and different scenario).
The behavior of mechanical systems is not just binary (working / not working). Many system components degrade over time. They also degrade at different rates. Sometimes they degrade in a non-linear fashion over their life-span. This leads to different interactions between the system components. When a service technician responds to the system at different instants, different situations will be encountered and different service approaches will be required. Sometimes partial repairs will leave degraded components in the system operating at diminished capacity.
Training maintenance personnel on systems with diminished capacity creates a more realistic training situation.
Question: How might KEEL be used to represent or model emergent behavior?
Answer: "In philosophy, systems theory and science, emergence is the way complex systems and patterns arise out of a multiplicity of relatively simple interactions. Emergence is central to the theories of integrative levels and of complex systems."(From Wikipedia, the free encyclopedia)
Elements of nature exhibit emergent behavior as they are exposed to pieces of information in new circumstances. The pieces of information may have been there all along, but they are just treated differently, because of their relationships with other pieces of information. In some cases, totally new pieces of information are added into the mix. Time and space may impact how the pieces of information are interpreted.
KEEL has several characteristics that make it valuable for modeling emergent behavior. First, unlike scripted models, one does not create models to address specific answers. This would assume that the modeler would always know the outcome and was capable of creating the appropriate mathematical model to get the correct answer for all combinations of inputs and inter-related outputs (which would contradict the definition of emergent behavior). With KEEL, the modeler defines policies that describe how pieces of information are to be interpreted and how pieces of information are functionally related. The modeler can then test the models to see how they perform. It is likely that emergent behavior will be generated as the models encounter new situations. A key attribute of KEEL is that the emergent behavior can easily be reverse engineered to understand why the model performed the way it did. It can easily be corrected by integrating new pieces of information into the model, defining new functional relationships and weighting certain items differently. So one can define functional relationships between data items (policies) without needing to understand exactly how the agent processing the policies will use them in practice. By observing the decisions and actions of the agent in the real world, one can begin to understand why things work the way they do. It is likely that KEEL-based models will evolve (under the control of the human modeler) as one learns more about how the model performs. KEEL is especially valuable if the models are involved in critical decisions, where there is a need to have humans in ultimate control.
Emergent behavior can be modeled with pattern-matching techniques that incorporate genetic algorithms. Entities endowed with these capabilities may be able to generate complex behaviors, but the results are, for the most part, unexplainable, as the entities learn on their own how to establish functional relationships between data items and how to weight the parameters. Since they cannot be explained, they cannot be easily modified. These pattern-matching / genetic-algorithm-based systems may also find their own ways to evolve that may not be acceptable.
With KEEL, the emergent behavior is auditable and explainable, making it suitable for continued enhancement (and correction if necessary). In this way KEEL can be used to create entities that exhibit auditable emergent behavior.
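A minimal way to picture "auditable" behavior is a policy whose output can be decomposed into per-input contributions. The sketch below is our own simplification (KEEL's actual engines are table-driven and handle dynamic, non-linear inter-relationships); it only illustrates why explicit weighted policies can be reverse engineered while a trained black box cannot:

```python
# Illustrative sketch of auditability: because the policy is an explicit
# set of weighted inputs, the result can be decomposed to show exactly
# which factor contributed what to the decision.

def evaluate(policy, inputs):
    """policy: {name: weight 0-100}; inputs: {name: value 0-100}."""
    contributions = {name: policy[name] * inputs[name] / 100
                     for name in policy}
    # Returning the breakdown alongside the score is the "audit trail".
    return sum(contributions.values()), contributions

score, why = evaluate({"proximity": 60, "speed": 40},
                      {"proximity": 50, "speed": 100})
print(score, why)  # 70.0 {'proximity': 30.0, 'speed': 40.0}
```

If the observed behavior is wrong, the breakdown shows which relationship to reweight, which is the correction loop described above.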
Question: How might KEEL be used to provide ethical behavior to autonomous systems (robotic ethics)?
Answer: The topic of robotic ethics has been discussed for some time.
Autonomous systems can respond better than humans if they are appropriately controlled. This "requires" that 1. humans determine what ethical behavior is, and 2. humans bear the ultimate responsibility for that behavior. This precludes allowing robots to generate their own decisions about right and wrong. A key point is that robotic behavior must be "easily" audited. It cannot require the review of thousands of lines of computer code or complex mathematical formulas to determine why the robot did what it did. Humans are limited in describing their behavior, because the only way a human can explain why they "did what they did" is with a verbal or written language (ex. English), where every word is open to subjective interpretation. There is no effective way to describe the state of all the connections between all the neurons in the brain at the instant a decision is made. A robot, whose every movement and action is driven by some "formula", should always be auditable. This should be possible as long as it is not being driven by some kind of artificial neural net with genetic algorithms that would just be matching some kind of pattern. We don't want robots to determine what is right and wrong completely on their own, unless it is a purely academic exercise.
KEEL Technology provides a way to capture, test, package, deploy, audit, and explain human-like reasoning. KEEL "engines" can be deployed in devices (like autonomous weapons) so they can make the judgmental decisions. KEEL "engines" package the reasoning skills of a human. They do not learn on their own. How they decide what to do, and what they do, can always be audited. They are adaptive (as defined by a human). They interpret information (as defined by a human). They are 100% traceable, auditable, and explainable. They are completely explainable, such that they can always be audited in a court of law, if the inputs observed by the systems have been recorded (example: a black box in an aircraft). When these devices are making life and death decisions, they must be 100% auditable. The humans responsible for packaging the policies in these devices will be responsible for the actions of the devices. It is our understanding that in the US, military policy makers (not commercial product suppliers) must have the responsibility for defining the policies for military actions. With KEEL, one can do this without resorting to complex mathematics. I.e. it is relatively easy.
While one may suggest that these systems must perform as well as a human, we would suggest they would have to perform far beyond the capabilities of a human. We have heard it suggested that the requirements for unmanned commercial aircraft need to be something like 500 times more stringent than for human pilots. They will be mass produced. While an individual human may make the wrong decision, it will not be acceptable to mass produce a device with incorrect reasoning skills. This makes the development of adequate sensor systems equally important.
KEEL technology is not a military focused technology. It can be easily deployed to make auditable medical diagnoses, auditable investment decisions, auditable political decisions, and even auditable legal decisions. All it takes is human expertise to create the models and package them into devices or software applications. We suggest that in the future, the best models will make the most money. The models will continue to be upgraded to remain competitive.
Question: How do you handle the situation when one piece of information comes in slightly before another piece of information (needed to make a decision or control an action) and if the relationship between two (or more) actions is changing during the processing?
Answer: The system engineer is responsible for collecting information (and normalizing that information) before calling the KEEL "cognitive" Engine for processing. This way the information can be processed "collectively". There will likely be some information that ages slowly (its value could degrade over time). In this case, one might provide both the information AND its age as two pieces of information to the KEEL Engine. In other cases, one piece of information may need to trigger a decision or action immediately. Some of the information may not be available. In this case the KEEL policy will dictate how the decisions or actions handle partial data. In these and other cases, the policy designer would define the behavior to operate just like a human that is exposed to different pieces of information at different times. The output of the KEEL Engine should perform just like the human with the added benefit that the KEEL Engine can explain exactly why.
With KEEL Engines it is important to remember that all information is processed collectively. This doesn't mean parallel processing; it means collective processing, during what we call a "cognitive balancing act". We are modeling how all of the influencing factors are working together to make decisions and control actions. It is also important to note that the KEEL Engine is not limited to making just one decision at a time. Some outputs could be yes or no (to start / stop things), some could be to select from mutually exclusive options, and others could be to allocate resources. So, if you want your system to have influencing factors that impact different parts of the solution space in different ways, it is easy to create the models using the KEEL Language. The SME defines the dominant activities (if there are any) during the development process.
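The idea of one evaluation producing several kinds of outputs at once can be sketched as follows. This is a hand-written stand-in, not KEEL's processing model; all names (`threat`, `fuel`, `targets`) are illustrative assumptions:

```python
# Sketch of one "cognitive cycle" yielding multiple output types from a
# single collective evaluation: a yes/no output, a mutually exclusive
# selection, and a resource allocation.

def cognitive_cycle(threat, fuel, targets):
    """threat, fuel: 0-100; targets: {name: priority 0-100}."""
    outputs = {}
    # Yes/no decision: abort if threat exceeds the remaining fuel margin.
    outputs["abort"] = threat > fuel
    # Mutually exclusive selection: first target with the highest priority.
    outputs["engage"] = max(targets, key=targets.get)
    # Resource allocation: split capacity in proportion to priority.
    total = sum(targets.values()) or 1
    outputs["allocation"] = {t: round(fuel * p / total, 1)
                             for t, p in targets.items()}
    return outputs

out = cognitive_cycle(threat=30, fuel=80, targets={"alpha": 60, "bravo": 20})
print(out["abort"], out["engage"])  # False alpha
```

All three outputs come from the same set of inputs evaluated together, which is the "collective processing" point made above.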
Question: How does KEEL work in a collaborative environment?
Answer: One View of Collaboration: We have had several questions about using KEEL in a collaborative environment. Potentially there are systems of autonomous robots that need to share information and coordinate tactics and strategies to pursue goals.
KEEL Technology does not handle any of the infrastructure duties that move information items between devices. KEEL Engines can be thought of as individual humans with specific domain expertise. Each one operates independently, and at any "instant in time" with whatever information they have at their disposal. Their design (created with the KEEL dynamic graphical language) interprets each of the inputs relative to some set of problems. They can be configured to handle missing or old information, just like humans. If the exchange of information is necessary to pursue group goals, the KEEL Engines should perform just like human equivalents: If new information is provided, that information is used. The new information can be augmented with a trust factor if that is appropriate. The information value can degrade over time if that is necessary. If expected information is not provided, policies can be used to describe what to do (use historic information, use synthetic information, ...). The system designer has the responsibility to manage the sharing of information items by whatever means is appropriate. The KEEL designer (policy maker) creates the policies that handle the information (or lack of information) that needs to be interpreted and acted upon.
Another View of Collaboration: In other cases, the collaboration is between humans. In this case, the value of KEEL is two-fold: First, the English language (written or verbal) is almost always subject to individual human interpretation. It is not explicit. With KEEL, the policies / practices / reasoning approaches can be explicitly described with the KEEL dynamic graphical language and can be reviewed and audited when necessary. Decisions and actions created by KEEL Engines can easily be reverse engineered. KEEL provides a means of explicitly sharing "how information is to be interpreted", even in complex, dynamic, non-linear, inter-related scenarios. Second, there are many times that collaboration is involved in the interpretation of specific scenarios. The issue is weighting individual information items rather than describing how information items are inter-related. In this case, policies can be packaged as KEEL engines that accept input to control the weighting of individual data items. This weighting can then be exposed to show its impact on the overall system of inter-related problems.
Question: Why do you call KEEL a new form of mathematics?
Answer: "In mathematical logic, predicate logic is the generic term for symbolic formal systems like first-order logic, second-order logic, many-sorted logic or infinitary logic. This formal system is distinguished from other systems in that its formulas contain variables which can be quantified." (Wikipedia) The KEEL "dynamic graphical language" satisfies this definition. KEEL provides a way to define functional relationships between information items where data items are quantified by their dynamically changing importance. KEEL further provides a new way to process this information with a small memory footprint solution suitable for deployment in embedded devices. See sample relationships that can be created with the KEEL dynamic graphical language. KEEL allows explicit complex functional relationships to be defined without resorting to complex mathematical transforms.
The dynamic, interactive nature of the language layers on top of a "KEEL Engine" that is automatically created as the model is being created on a computer. This allows supporting software to validate the model as it is being created. KEEL can be considered a "system" modeling language, because the designer can model at the system level, considering and interacting with system level objects, without being required to translate concepts to written formulas, translate the formulas to computer code, debug the code, debug the formula, and so on. Complex models can be created in an iterative manner where testing can be initiated seconds after starting a project.
The "DARPA Mathematical Challenges" Broad Agency Announcement (DARPA-BAA 08-65) was looking for "major mathematical breakthroughs" in several areas, one of which was identified as "Mathematical Challenge One: The Mathematics of the Brain". DARPA asked to "Develop a mathematical theory to build a functional model of the brain that is mathematically consistent and predictive rather than merely biologically inspired." KEEL responds to this request.
Question: You say a KEEL system is made of inter-related curves. How does one tie all the curves together?
Answer: The design process for developing KEEL systems is conceptually different from "conventional programming", which is commonly accomplished by sequentially defining instructions that are processed in the order they exist in the code (IF | THEN | ELSE). With KEEL, the designer thinks about how data items are related / inter-related. For example, one thinks about how the 'distance from an obstacle' impacts how much you want to adjust the current heading to avoid a collision. You might think of a hockey stick curve (the closer the object, the more drastic the correction). You might also want to consider how speed impacts the shape of the collision avoidance curve. And you may want to consider the stress on your vehicle. These relationships may be linear or nonlinear. In developing the solution with KEEL, the designer is constantly testing the model. The designer has the ability to generate 3-dimensional graphs and manually stimulate the system to visualize the relationships without writing formulas. Both of these techniques are helpful in observing how the system will perform. What the designer needs to think about is that KEEL is a system of functional relationships that can be created and tested in a very short time. While the system is made up of inter-related curves, the complexity of the mathematics is hidden from the user. The user just thinks about how information items are related to each other and "observes the system" operate. Seeing the relationships graphed (the curves) is part of the validation process. Think curve / see curve / see inter-related curves. In the KEEL "thought process / terminology" we use concepts like "factor 1" controls the "importance" (or significance) of another term, OR the "resolution of factor 1" contributes to how "factor 2" is resolved. These relationships are created simply by dragging a wire from one factor to another. The system can immediately be tested and graphed.
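The "hockey stick" relationship above can be sketched numerically. The quadratic shape and the way speed steepens it are illustrative assumptions on our part; in KEEL the curve would be drawn and tuned graphically rather than hand-coded:

```python
# Hedged sketch of the collision-avoidance example: the closer the
# obstacle, the more drastic the heading correction (a "hockey stick"
# curve), with speed sharpening the curve's shape.

def heading_correction(distance, speed):
    """distance, speed normalized 0-100; returns a correction 0-100."""
    closeness = (100 - distance) / 100.0     # 0.0 (far) .. 1.0 (touching)
    steepness = 1.0 + speed / 100.0          # higher speed => steeper curve
    # Quadratic closeness gives the gentle-then-drastic hockey stick.
    return min(100.0, 100.0 * closeness ** 2 * steepness)

print(round(heading_correction(distance=90, speed=0), 1))   # 1.0 (far: tiny nudge)
print(heading_correction(distance=10, speed=100))           # 100.0 (close and fast)
```

Graphing this function over distance for a few fixed speeds is exactly the "think curve / see curve / see inter-related curves" validation step described above.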
Question: Can KEEL be used in "planning"?
Answer: While it isn't the "original" objective of KEEL (which is to provide embedded real-time reasoning and judgmental decisions for devices and software applications), the KEEL "Dynamic Graphical Language" provides a unique environment to develop plans or policies that can be visualized during the development process. The process of developing plans requires the identification of objectives that are decomposed into subtasks (pieces of information that can support or block the success of the plan). Obstacles that can or will impact the plan need to be included in this process. Expectations must also be accounted for during the implementation of the plan. KEEL's dynamic language will allow you to see the importance of different pieces of information change due to their relevance. All of these terms are consistent with the techniques integrated into the KEEL "Dynamic Graphical Language", thus making it an ideal tool to create (100% auditable) complex plans. In this vein, the KEEL language is a tool "that helps the analyst/planner think".
Developing plans is commonly an iterative task where one decomposes the task into measurable segments, each with its contributing factors. One might start off with the objective and ask what factors will contribute to achieving the objective (pros and cons). Then those factors might be decomposed. This is exactly how one creates a KEEL model. In this case the KEEL "design" is much more specific than plans documented in a textual language (English, for example). Additionally, since the importance of various factors changes throughout the implementation of the plan, it is easy to visualize how each factor contributes to the overall plan.
When using KEEL to develop policies / plans, one may not be "thinking" in terms of non-linear impacts of the various contributing factors. With this in mind, one might start the planning process without considering the non-linear aspects. As the plan is refined, however, the non-linear aspects may become more important. This may be especially true as one creates a plan that will be deployed and reused many times. In these cases, one might expect plans to be under constant review and refinement.
Another view of a plan is a scripted sequence of deliverables that must be accomplished in a specific order to accomplish a task. A plan created in KEEL can be defined as an active plan that adapts automatically to change. Consistent with this concept: the KEEL Toolkit supports the integration of real world data "while" the planning model is being created or extended. This visualization concept can accelerate the planning process and make it more relevant.
Question: Is KEEL deterministic?
Answer: The answer is "yes", with some qualifications. First, KEEL Engines (cognitive functions) process all information in what we call a "cognitive cycle". This is accomplished in either of two ways: one way is to iteratively process all inputs until a stable set of all outputs is detected. We tell you the maximum number of cycles this will be for any design. The second way (somewhat faster, at the cost of somewhat more memory) is for the KEEL Engine to process the data in an optimal way. Again, there will be a maximum number of lines of code processed.
In the first way, the information is processed faster when some or all of the inputs DO NOT change.
Thus for any KEEL design, there will be a maximum number of instructions processed. There may be some 'jitter' when fewer than the maximum number of instructions is processed.
There is one way, however, in which KEEL would not be deterministic. The development environment continues to monitor a user's design for possible unstable designs. Under normal circumstances the user will want these unstable situations to be detected and rejected automatically. There is a switch that disables this automatic rejection (although the user is still warned of the bad design). If the user chooses to allow these "circular references", then the design would not be deterministic. For example: B is a function of A; C is a function of B; and A is a function of C. Another example: a person has two bosses of equal rank. One boss tells the employee to go left and the other tells the employee to go right. In this second example, one iteration through the KEEL engine would calculate 'left'; the next would calculate 'right', the next 'left', the next 'right'..... (never stabilizing). We would suggest that such unstable designs never be used. However, the toolkit has an option to allow this instability if you choose, because in some cases it is educational to 'see' an unstable situation in operation.
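The "two bosses" circular reference can be shown in a few lines. This toy loop is our own illustration of the oscillation, not KEEL's iteration mechanism:

```python
# Toy illustration of the "two bosses" circular reference: each
# evaluation flips the employee's direction, so the outputs never
# stabilize. This is the class of design the toolkit detects and,
# by default, rejects.

def iterate(direction, cycles):
    history = []
    for _ in range(cycles):
        # Each boss overrides the other on alternate evaluations.
        direction = "right" if direction == "left" else "left"
        history.append(direction)
    return history

print(iterate("left", 4))  # ['right', 'left', 'right', 'left'] -- never settles
```

Because no fixed point exists, no maximum cycle count can guarantee a stable output, which is why such designs break determinism.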
Question: Do you / How do you handle temporal data?
Answer: Temporal (time-related) pieces of information and distance-related pieces of information can be used both directly and indirectly. There is a subtle difference. When used directly, they are just like any other data item. Our UAVs executing collision avoidance use distance information directly to determine how to react. They are commonly used indirectly, to control how other pieces of information are interpreted or valued.
Here is an example: Time is commonly used to establish the importance of another piece of information. For example a weather pattern might have very little impact on a decision or action that needs to be made now if the weather pattern won't get here for several days. On the other hand, it may have a significant impact if one is creating plans for an event that will take place when the weather pattern will coincide with the event. Temporal data can be used to create plans or to establish expectations over time.
Time might be used as a qualifying factor for aging data. The older it is the less value it might have. Similarly the impact of a threat may vary with its distance: the farther away the less threat. This impact is probably not linear. Time and distance impacts on decisions and actions are excellent examples of the types of problems KEEL can be used to model. Their impacts are commonly changing as the time and distance changes and decisions or actions need to be adjusted.
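The aging idea above can be sketched with a simple decay curve. The exponential half-life form is an assumption chosen for illustration; in KEEL the decay shape would be drawn in the dynamic graphical language, not coded:

```python
# Illustrative (non-KEEL) decay curve for aging data: a fresh report
# carries nearly full weight, while a stale one contributes little,
# and the falloff is non-linear.

def aged_value(value, age, half_life):
    """value: 0-100; age and half_life in the same time units."""
    return value * 0.5 ** (age / half_life)

print(aged_value(100, 0, 10))   # 100.0 (fresh: full value)
print(aged_value(100, 10, 10))  # 50.0  (one half-life old)
print(aged_value(100, 30, 10))  # 12.5  (three half-lives: nearly stale)
```

The same shape, with distance substituted for age, captures the "farther away, the less threat" example in the paragraph above.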
Time and distance information components can be used differently in different parts of the same problem domain, just like any other piece of input information.
One needs to remember that with KEEL, we are integrating all pieces of information together at an instant to make decisions and control actions. Some of the decisions or actions can be tactical (immediate) and others can be strategic (planning / longer term related). One designs the system to model how information items (including temporal data items) are integrated for that instant to address both tactical and strategic problems.
Question: Does KEEL require assigning weights to input variables?
Answer: A key component of judgmental decisions is an evaluation of the importance of information. There are several ways to assign weights. First, all of our inputs and outputs are normalized to values between 0 and 100 (floating point or integer), depending on your target application.
These weights can be defined externally to the KEEL engine. They can also be scaled internal to the KEEL engine. When combining data items inside of a KEEL engine, a single piece of information may impact different parts of the problem space with different levels of intensity. KEEL handles this as well. Another key point in a KEEL solution is that information can change in importance (to different parts of the problem) dynamically. The concept of importance is critical to many KEEL applications. The importance of data can be set at design time (fixed) and/or it can be changed dynamically based on other influencing inputs.
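The notion of dynamically changing importance can be sketched as one input scaling the weight given to another. The sensor names and the linear scaling are our illustrative assumptions, not part of KEEL itself:

```python
# Sketch of "dynamic importance": one input (visibility) scales the
# weight given to another (a camera reading), so the same reading
# matters less in fog than in clear air.

def weighted_reading(camera, visibility, base_weight=1.0):
    """camera, visibility normalized 0-100."""
    effective_weight = base_weight * visibility / 100.0
    return camera * effective_weight

print(weighted_reading(camera=80, visibility=100))  # 80.0 (clear: full weight)
print(weighted_reading(camera=80, visibility=25))   # 20.0 (fog: heavily discounted)
```

The reading itself never changed; only its importance did, which is the distinction this answer draws between a value and its weight.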
Question: How does KEEL address probabilistic information and fuzzy problems?
Answer: This is a two-part question. First, with probabilistic information, one assumes that statistics and probabilities are available to be used to interpret the information. Formal statistics are commonly used as inputs to a KEEL Engine when they are available. The following example shows a system where two options are considered. In this case the probability is input through input 0 and the two options are input through inputs 1 and 2. If the "probability" is set to 50 (meaning equal, or 50/50), and options 1 and 2 are equal, then there is an equal probability and either option is a viable choice. With KEEL, there is never confusion about which one to choose: it selects the first option with the highest-rated answer. In this case option 1 is indicated with the icon. Manipulating the probability input (0) and/or the Option 1 or 2 inputs (which would correspond to a confidence value or quality-of-information value) will allow the system to re-evaluate the information.
The second part of the question (fuzzy problems) is addressed with KEEL's support for non-linear interpretation of data. We suggest that the KEEL language (or another dynamic graphical language) is the most effective way to address soft (or fuzzy) problems. These types of problems are more image processing functions than they are text or numeric formula based problems. They are analog problems, and KEEL is essentially an analog system.
Some may feel neural nets may be most appropriate for these types of problems. However, we would argue that these may not foster the development of any real understanding of the problem domain. We would suggest that it is more appropriate to develop an understandable model and refine it over time in order to improve a process. This is especially true in decisions or actions where human safety and significant risk is involved. In these cases, the process must be auditable and it must be correctable if necessary. KEEL provides a system that can be audited, and can be corrected or extended with relative ease.
Question: When you say small footprint, exactly what do you mean? Small enough for a PDA? Small enough for a microcontroller based device?
Answer: We have compiled KEEL applications for an H8 microprocessor. Certainly it would work with other 8-bit micros. So it could go in your cell phone and other low-end devices. The code created with the KEEL Toolkit is the same, no matter how large the application. When we say code, we are referring to the instructions that process the inputs and outputs. KEEL is table driven, so the more inputs and outputs, and the more relationships you have, the larger the tables. So you wouldn't put the knowledge and understanding of the universe in an 8-bit micro, but if you had some kind of memory manager and enough time to analyze the data, you could do it. Time is relative. In our case, the larger the system, the longer it takes to process. We have a memory calculator that will calculate the memory needed for an application, but you have to approximate the design to use it.
Question: What processors can you target - Intel x86 I presume, but what about the processors typically used in WinCE or Palm devices? Is there an OS requirement?
Question: What language was used to create the KEEL tools? I think I can create KEEL-like tools using a variety of COTS tools. Why would I license KEEL Technology from Compsim?
Answer: The language used to create the KEEL Toolkit, Compsim Management Tools, the KEEL Function Block Development Tools, etc. is unimportant to the user. Compsim has developed a portfolio of granted patents that covers the KEEL technology space, not the specific implementation of the tools. So creating KEEL technology in any language would be an infringement of KEEL patents. Compsim does not "sell" KEEL "tools", it licenses KEEL technology that includes a set of tools that can be used "as is", or that can be used as prototypes for a set of tools that would be productized by the licensee. Compsim's patent portfolio includes the dynamic graphical language for capturing and testing human-like judgmental reasoning, a model for integrating subjective information to make judgmental decisions, an architecture for integrating judgmental reasoning in devices and software applications (microprocessor based), and an architecture for implementing judgmental reasoning in an analog circuit.
Question: How easy is it to integrate KEEL Technology into an existing application?
Answer: Very simple. KEEL models are created with the KEEL Toolkit. Preliminary testing is performed there to validate the design. Design documentation can be generated in the form of reports if desired.
When the designer (domain expert) is ready to integrate the model into the target application, they select the source code language of choice from a menu item. This creates a text file for the "KEEL Engine" with the design packaged as a function or class with methods (depending on the language). This code is then pasted into the application code area within the user's application development environment of choice.
To call the function, one loads the data items that will drive it into tables as normalized values (input values are normalized to between 0 and 100). The function is then called like any other function. The results are returned as normalized values in another table, which you use to drive the rest of your system.
You, the system designer, manage when and how often the KEEL Engine is called: change of state, periodic, polled, continuous. It is just a function call.
The complexity of the KEEL Engine is completely separated from the integration process.
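The call pattern described above can be sketched as follows. This is an illustrative stand-in, not code generated by the KEEL Toolkit: the function name `keel_engine`, the table shapes, and the averaging logic are assumptions made only to keep the sketch runnable.

```python
def keel_engine(inputs):
    """Stand-in for an auto-generated KEEL Engine: takes a table of
    normalized inputs (0..100) and returns a table of normalized outputs.
    (A real engine balances interconnected weighted values; here we just
    average the inputs so the sketch executes.)"""
    avg = sum(inputs) / len(inputs)
    return [avg, 100 - avg]

# The surrounding application normalizes raw data, calls the engine,
# and uses the normalized outputs to drive the rest of the system.
inputs = [75, 20, 50]          # already normalized to 0..100
outputs = keel_engine(inputs)  # it is just a function call
```

The point of the sketch is the shape of the integration, not the logic inside: normalized values in, a plain function call, normalized values out.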
Question: Why is licensing KEEL different than licensing a software tool like Microsoft Excel?
Answer: There are several parts to this answer. Microsoft is a high-volume "tool supplier," while Compsim is a "technology provider." Microsoft Excel is copyrighted and may contain patented methodology, but one might assume Microsoft is really trying to protect Excel from being copied and distributed without a purchased license. If you wanted to embed Excel in a device without paying a per-unit royalty, Microsoft would likely charge for that right. On the other hand, if you simply wanted spreadsheet capabilities in your device, you might be able to purchase them from any other supplier offering similar capabilities.
In this vein, Compsim's KEEL Toolkit is a copyrighted and patented software tool. Software tools like Excel manipulate user data and create reports, while Compsim's KEEL "tools" create components called "KEEL Engines". These engines process information in a unique way and are covered by Compsim patents. So if you want to include components that process information the way Compsim's KEEL Engines process information, that capability can only be licensed from Compsim.
Note: This doesn't mean that Compsim would own a KEEL licensee's design. With KEEL Technology, a user's design is stored as integer and floating-point table data (arrays). They are simply arrays of numbers, meaningless without the KEEL Toolkit (the dynamic graphical language) and the KEEL Engines to process them.
While software tool companies (like Microsoft) sell "shrink-wrapped software" and don't care how the software is used, Compsim licenses KEEL technology (the patent rights) for a defined application scope that is negotiated with Compsim. The software tools are provided to the licensee as part of the license.
Question: Have you studied Tversky's papers, Gestalt psychology, Plato and all the others that have written extensively on decision-making in order to validate the KEEL decision-making model?
Answer: When this question is asked, it is clear that we have not successfully explained our objective for KEEL Technology. When we suggest KEEL provides "human-like reasoning", what we mean is that we want to build a "machine" that can address the same problems addressed by humans, where humans use judgment and reasoning to address those problems. Humans have other (and different) drivers than machines: survival, evolution, procreation. Machines are built (by humans) to perform a function or do a job. They do not have (and we don't necessarily want them to have) all the baggage (and randomness) that humans have.
It is more appropriate to look at KEEL as a new form of mathematics. In other words, KEEL can describe functional relationships in an explicit manner. This means that KEEL is an alternative to predicate calculus, which is often difficult to master. [Wikipedia: "In mathematical logic, predicate logic is the generic term for symbolic formal systems like first-order logic, second-order logic, many-sorted logic, or infinitary logic. This formal system is distinguished from other systems in that its formulae contain variables which can be quantified."]
Since KEEL (as a technology) is not attempting to mimic "true" human reasoning out of the box, we suggest it can be used to model any kind of human behavior. We have demonstrated this with the Yerkes-Dodson / Eysenck model (in the "Demos" area of the Compsim website), where personality traits bend normal curves. We suggest the KEEL dynamic graphical language is helpful in providing a tool for "experts" to describe "how they think", because it allows them to visualize the weighted variables and the impacts of functional relationships in a dynamic environment (the display screen). With KEEL this is done without the need to write formulas (in the conventional textual format), translate the formulas to code, debug the code and then package it in some form of simulation - all before a concept can be tested. With KEEL, the concept is tested continually during development.
So, in summary, KEEL targets problems that humans address using judgment and reasoning. KEEL "cognitive engines" can then be deployed in software applications and embedded in real-time devices (for adaptive command and control) in an "explicit" (explainable / auditable) manner, as if they were defined by conventional mathematical formulas.
Question: Why do you use the Left-Brain / Right-Brain paradigm to explain KEEL when this particular comparison has been called a "psychology myth"? Some people feel that while it is partially true, it has been dramatically distorted and exaggerated.
Answer: Using this comparison helps us differentiate conventional IF THEN ELSE logic that is prevalent in today's digital computing environment from the way that KEEL processes information.
Roger W. Sperry was awarded the Nobel Prize in 1981 for his split-brain research on patients whose corpus callosum (the structure that connects the two hemispheres of the brain) had been cut to reduce or eliminate seizures. His studies suggested that language is controlled by the left side of the brain. Later research has shown that the brain is not nearly as dichotomous as once thought.
Digital computers process information by executing instructions sequentially. There are some suggestions that the left hemisphere also processes information sequentially. KEEL processes information on a digital computer collectively, during what we call a "cognitive cycle". This allows the appearance of information being processed in parallel, yet still allows the processing to take place on today's digital computers and microprocessors. Processing information in parallel allows information to be balanced across a set of inter-related problems. This is important when the solution for any single problem may not be optimal.
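As a conceptual sketch of the difference, consider the contrast between a chain of IF-THEN-ELSE branches and a single "cognitive cycle" in which every input contributes to every output within one pass. This is not Compsim's actual algorithm; the weight matrix and the weighted-average rule are illustrative assumptions used only to show the collective, balanced style of processing.

```python
def cognitive_cycle(values, weights):
    """Illustrative 'collective' processing: all normalized inputs (0..100)
    contribute to every output within one pass, rather than a sequential
    chain of IF-THEN branches selecting a single path.

    weights[i][j] is the (assumed, positive) influence of input j on
    output i. A weighted average of normalized inputs stays in 0..100."""
    return [sum(w * v for w, v in zip(row, values)) / sum(row)
            for row in weights]
```

Because each output is a balance of all the inputs, changing one input shifts every output at once, which is the sense in which the information is "balanced across a set of inter-related problems."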
Question: Who are your competitors?
Answer: One could say our competitors are companies that sell development tools for other technologies: tools to develop and train artificial neural nets, fuzzy logic tools, or even companies like Mathworks that sell math "tools" (MATLAB). More appropriately, though, KEEL competes with technical approaches, because KEEL provides a unique approach to processing information. So rather than competing with particular companies, we are competing at the technology level. Since KEEL is a unique, patented solution, no other company provides the same capability. For a technology comparison, see the Technology page.
Question: What do I need to know to create KEEL-based solutions (to prepare for KEEL)?
Answer: You need to understand that KEEL is a “component technology” and will not define a complete solution. Your “system” will include conventional code that will provide the executive functionality (scheduling of tasks: a main loop of some sort). Your system will have to handle the physics of its world (if it is measuring physical entities). If the system is modeling human behavior, some means of measuring human influencing factors will need to be provided. Your system will have to handle triggered events, if necessary, using your computer’s interrupt services. Your system will also have to do what computers have been doing for years: obtaining inputs, processing physical data (add, subtract, multiply and divide...), and presenting outputs as needed.
The KEEL (cognitive) Engines will be called when there is a need to apply judgment and reasoning to the system (What does "it all mean"? What "best" can be done about the situation? How should resources most appropriately be applied to obtain the best results under these circumstances? How should the system "adapt"?). These functions add to the existing "measurements". It is also important to recognize that KEEL Engines process normalized data: information items range from providing no value (0) to maximum value (100). It is therefore the responsibility of the system designer to normalize the physical information collected by the system (for example, into a 0-to-100 range) before it is provided to the KEEL Engine.
An example of this might be distance. A threat of some sort might be detected a mile away. This might be assigned a value of 0, meaning it is too far away to impact a decision or action. If the threat was detected at a half mile away, it could still be valued at 0 as it still doesn’t impact the decision or action. But at 100 yards away, it might start to have some impact and at 1 yard or less it could have maximum impact. It is the responsibility of the external software to normalize the input data, so in this case any distance 100 yards and beyond would be assigned a value of 0 and any distance 1 yard or less would be assigned a value of 100. The distances between 1 yard and 100 yards would be linearly distributed between 0 and 100.
NOTE: This means that the input is linearly distributed between 0 and 100, but not that it is interpreted linearly. Non-linear interpretation of information is a key attribute of KEEL Technology.
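The distance example above can be sketched in code. The function name and parameters are illustrative; the clamping at the 1-yard and 100-yard thresholds and the linear interpolation between them follow the text.

```python
def normalize_distance(distance_yards, near=1.0, far=100.0):
    """Map a raw distance to a normalized 0..100 KEEL input value.

    Per the example in the text: at or beyond `far` (100 yards) the
    threat has no impact (0); at or inside `near` (1 yard) it has
    maximum impact (100); in between, the value is linearly distributed.
    (The non-linear *interpretation* of this value happens inside the
    KEEL model, not in this normalization step.)"""
    if distance_yards <= near:
        return 100.0
    if distance_yards >= far:
        return 0.0
    # linear interpolation: far -> 0, near -> 100
    return 100.0 * (far - distance_yards) / (far - near)
```

Any distance of 100 yards or more maps to 0, any distance of 1 yard or less maps to 100, and the midpoint of the range maps to roughly 50.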
Similarly, the outputs from the KEEL Engine are provided as normalized values. For a vehicle steering function, you might interpret a value of 50 as steering straight ahead, a value of 0 as full left, and a value of 100 as full right. Or in the allocation of a resource, a value of 0 might mean to apply no resource and a value of 100 might mean to apply maximum resource.
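The steering interpretation can be sketched the same way. The function name and the maximum steering angle are assumptions made for the sketch; the 0 = full left, 50 = straight ahead, 100 = full right convention follows the text.

```python
def steering_angle(keel_output, max_angle_deg=30.0):
    """Interpret a normalized KEEL output (0..100) as a steering command:
    0 -> full left (-max), 50 -> straight ahead (0), 100 -> full right (+max).
    max_angle_deg is an assumed vehicle-specific limit."""
    return (keel_output - 50.0) / 50.0 * max_angle_deg
```

As with the inputs, it is the external software's responsibility to map the normalized value onto whatever units the actuator or resource allocator actually uses.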
So to create the KEEL Engines, one also has to provide the expertise. This means the expert must provide the judgment and reasoning skills to define the “operational policy” for how to interpret the relative valued information (information of no value, up to information of maximum value). The expert will also develop the models of how information is inter-related. The KEEL dynamic graphical language is helpful in these activities because one can quickly see and modify the relationships within the model. And, of special value, it does not require higher level mathematics to define the functional relationships. When dealing with complex judgmental issues the expert will iterate the design again and again. The designer may also add externally configurable controls to adjust importance factors or to bend curves so the policy can be adjusted, even when the KEEL Engines have been integrated into a production solution. If there is a need to be able to audit the behavior of a production solution, a logging function can be included. Example code for this capability can be included in the auto-generated code by the KEEL Toolkit.