Simulation and Modelling in a Knowledge Context
Simulation and Modelling
Simulation and modelling are increasingly used by both private and public organisations to help make decisions and formulate strategies. The popularity of the simulation approach is perhaps due to its ability to model different scenarios, which may feed back into decision makers' understanding of the problem. In this sense, these tools may be used in multiple phases of classical decision making. However, there are dangers inherent in using models and simulations as a basis for decision making. Such models may be useful for gaining an understanding of the general properties of some types of systems but, as mentioned in earlier lessons, specific situations can make a nonsense of general rules. One must therefore be aware of the limitations of models: not all the complexities of reality can be included. How significant the effects of leaving information and processes out of a model are will depend on the assumptions of the model, the level of understanding of the system, and the complexity of the system being modelled. In this topic we will look at simple modelling and simulation tools that can be, and are, used in small to large businesses every day, through to larger, more sophisticated tools used by social and business planners.
Limitations of Models
Again it needs to be emphasised that simulations are typically nothing more than a model of the system. Pidd (2009) defines a model as a "representation of reality intended for some definite purpose". He includes the phrase "definite purpose" because inevitably some aspects of reality are left out, and those aspects may make the model entirely unsuitable for some purposes. The model most likely leaves out even some aspects relevant to its intended definite purpose. Other issues arise as well: Turban et al. (2007) point out that simulation models can be expensive to construct, may require specialised expertise (which may be difficult to obtain) and, in the end, may not provide an optimal solution.
Mathematical Models
While physical models are possible, most modelling today is conducted using computers. These models are typically based on mathematical principles. Here we will briefly examine those principles, along with common modelling techniques, to gain an understanding of the underlying problems addressed by some mathematical methods and the difficulties in addressing them. To begin with, any model of reality must represent some of the concepts that exist in reality. In the next section we look at some ways in which concepts can be represented abstractly. Note that this is not a definitive review; some methods, such as linear equations and genetic algorithms, are not discussed here.
Describing Concepts
This material is based on Hurley (1991). Definitions can be extensional (logical denotation) or intensional (logical connotation).
When using extension, we describe (define) a concept based on the members of the class. When using intension, we describe a concept using properties or qualities of the class.
Types of extensional definition are:
- Demonstrative (ostensive) - pointing to members of the class. Limitations: examples must be present, and shared properties of the available examples may cause confusion (e.g. if the indicated chairs are all wooden, the concept "chair" may be misunderstood to mean "made of wood").
- Enumerative - listing or naming members of the class (the list may be partial or complete). Limitations: some sets cannot be listed exhaustively (e.g. the set of all stars), and some members may not have names (e.g. individual insects).
- Subclass - name subclasses of the class (may be a partial or complete naming).
Types of intensional definition are:
- Synonymous - this can be used if there is an exact equivalent (e.g. "intentional" means "willful"). Limitation: there is not always such a synonym.
- Operational - specifies operational procedures which can be conducted to determine whether something meets the concept (e.g. one thing is harder than another if the first can scratch the second when they are rubbed together). Intended to tie abstract concepts down to reality. Limitations: not always possible (try defining an operation to demonstrate the concept of "love"), and it captures only part of the meaning of the concept.
- Genus and difference - in logic "genus" means "class" and "species" means "subclass". The difference (or specific difference) distinguishes members of the subclass from the superclass. (eg. Species: Husband, Genus: Man, difference: married).
- Stipulative - assigns a meaning to a word or symbol, e.g. 10^3 means 10 x 10 x 10. This is an arbitrary assignment and, as such, others may use the word or symbol in other ways (for example, times (x) has a different meaning when used for Cartesian products than it does for common multiplication).
- Lexical - reports a meaning a word already has in language. Such meanings may be vague or ambiguous (what is the difference?).
- Precising definition - reduces the vagueness of a word or concept, e.g. "poor", "moment of death".
- Theoretical definition - provides a way of viewing or characterising the entity or entities involved. eg. "heat means the energy associated with the random motion of the molecules of a substance". These often entail deductive consequences and possibly sets of experiments. Other terms which have been given different theoretical definitions reflecting an individual's view are: "God", "mind", "cause" and "intelligence".
- Persuasive - assigns an emotive description to a word, indicating it has, or should have, that meaning in everyday use, e.g. "Abortion means the ruthless murdering of innocent human beings".
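The extension/intension distinction maps naturally onto data structures: an extensional definition enumerates members, while an intensional definition states a membership test. The following Python sketch (not from the source; all names and data are purely illustrative) makes the contrast concrete.

```python
# Sketch: the extension/intension distinction expressed as Python data.
# The planet example and all identifiers here are illustrative only.

# Extensional (enumerative) definition: the concept is given by listing its members.
planets_extensional = {"Mercury", "Venus", "Earth", "Mars",
                       "Jupiter", "Saturn", "Uranus", "Neptune"}

# Intensional definition: the concept is given by properties its members must satisfy
# (a crude genus-and-difference style test).
def is_planet(body: dict) -> bool:
    return body["orbits"] == "Sun" and body["cleared_neighbourhood"]

pluto = {"name": "Pluto", "orbits": "Sun", "cleared_neighbourhood": False}

print("Pluto" in planets_extensional)   # False - excluded by the enumeration
print(is_planet(pluto))                  # False - excluded by the defining properties
```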
Situation Calculus
The following sections are based primarily on Chapter 5 of Klein and Methlie (1995); other sources are linked as necessary.
Types of reasoning
There are different types of reasoning:
- Deductive: if A is true then B is true. This is modus ponens: A is a sufficient, but not necessary, condition for B, and B is a necessary, but not sufficient, condition for A. In English: if we accept that the premises are true, then we accept that the conclusion is true. The companion rule is modus tollens: if we accept that B is false, then we must accept that A is false. For example: at 11.00 am it is daytime. In this case, if it is not daytime, then it cannot be 11.00 am. The statement could be naively interpreted as premise: it is daytime, conclusion: it is 11.00 am, in which case it is clearly incorrect, since it implies that if it is not 11.00 am then it is not daytime.
- Abductive: a process similar to modus tollens but which allows illegal (i.e. incorrect) inferences: from the truth of the conclusion B we infer a premise A, even though A is not the only possible explanation of B. Abduction therefore allows multiple possible explanations for something. In abduction we may entertain incorrect inferences such as: fact (conclusion): it is daytime; inference: it is 11.00 am.
Of course it could be any other time of day also, but it is plausible (at least) that it is 11.00. In abduction, we take as a conclusion something which allows many possible premises (some of which are incorrect):
premise * ------- 1 conclusion (many possible premises for a single conclusion)
Another example: all men are mortal and Socrates is mortal. The conclusion here is that Socrates is mortal, from which we incorrectly infer the premise that Socrates is a man (Socrates is in fact a horse, although he could be any other mortal creature; hence the multiple possible answers, in this case premises). Abductive reasoning is commonly used by people to generate plausible hypotheses for things, e.g. fact (conclusion): Holden's sales are bad; invented premise: they must make poor cars.
- Inductive: a process similar to modus ponens but which allows invalid premises. In induction, A may or may not be a sufficient condition for B, e.g. premise: a vehicle has four wheels; conclusion: it is a car. The vehicle may also be a truck. As with abduction, induction allows multiple possible answers; this time we accept as a premise something which is not sufficient for its conclusion, so we allow multiple possible conclusions (some of which are incorrect):
premise 1 ------- * conclusion (one premise, many possible conclusions)
This is the Sherlock Holmes approach (he does induction, not deduction as he claims): "Ah, the man was wearing glasses, it must have been Mr Smith!". Inductive arguments involve probabilistic reasoning (Hurley 1991).
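The deductive rules above are mechanical enough to express directly in code. The sketch below (illustrative only; the helper functions are not from the source) encodes modus ponens and modus tollens for the 11.00 am/daytime example.

```python
# Sketch (hypothetical helper names): modus ponens and modus tollens over a single
# implication "if it is 11.00 am (A) then it is daytime (B)".

def modus_ponens(a_implies_b: bool, a: bool):
    """If A -> B holds and A is true, conclude B is true; otherwise conclude nothing."""
    if a_implies_b and a:
        return True      # B must be true
    return None          # no conclusion about B

def modus_tollens(a_implies_b: bool, b: bool):
    """If A -> B holds and B is false, conclude A is false; otherwise conclude nothing."""
    if a_implies_b and not b:
        return False     # A must be false
    return None          # no conclusion about A

# A -> B is accepted as true.
print(modus_ponens(True, a=True))    # True: it is daytime
print(modus_tollens(True, b=False))  # False: it cannot be 11.00 am
print(modus_tollens(True, b=True))   # None: daytime alone tells us nothing about the time
```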
Reasoning in Expert Systems
Expert systems take in data from the user through a series of questions to which the user provides answers (the system controls this process). The system attempts to provide expert knowledge about that data and to explain its reasoning. The typical example is medical diagnosis, where the system takes in symptom information and provides a diagnosis along with an explanation of that diagnosis. Expert systems are different from classical decision support systems (DSSs), in which the user is in control and poses questions to the system (like our spreadsheet exercises in the previous lesson). DSSs can therefore be less structured. In a DSS the user is the expert, so they provide their knowledge; in an expert system the system is the expert, so the user is more passive.
Expert systems are built using modus ponens (but typically cannot reach conclusions using modus tollens). Using this rule, rules are chained together, with the conclusions of one rule serving as the premises of another. This forms a line of reasoning from the premises to the conclusion of the system. It is a largely mechanical process, as we shall see when we return to expert systems in a later section. This process is referred to as drawing inferences and is typically conducted by an inference engine.
However, as we have mentioned, if our rules are not quite correct (or are too simple) we may have multiple possible answers. In these cases it is necessary to somehow pick one answer over the others. This is called conflict resolution. Resolving conflicts is one requirement of an expert system; the others are covered in the next section.
Expert System Architectures
Expert systems consist of:
- a knowledge base
- an inference engine
- a user interface or shell (for both system developer and end-user)
- a user.
The Knowledge-base
One of the hardest problems in constructing expert systems is extracting the knowledge from the experts: they are often unaware of their own reasoning processes, which occur sub-cognitively. However, assuming a good knowledge engineer can extract the expert's knowledge, there remains the problem of how to represent it. Three possible knowledge representations are:
- If-then rule-based systems;
- Propositional systems (semantic nets); and
- Frame-based (object-oriented) systems.
If-then rule based systems
One of the most common types of expert system representation is if-then rules. These are quite simple, and can be used for a variety of purposes:
- Inferential knowledge: if premises, then conclusion;
- Procedural knowledge: if situation then action; and
- Declarative knowledge: if antecedent then consequent.
Long-term memory in such a system can be organised as lists of attributes and values. To represent a particular piece of data you specify the object-id, the attribute, and the value of the attribute. This is similar to object-based programming languages (without inheritance).
Rules can therefore be applied to sets of objects (classes of objects) as follows:
IF (object-id, loan-size < MIN_LOAN)
THEN (object-id, loan-approved = false).
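As a concrete sketch (purely illustrative, not from the source), facts can be held as (object-id, attribute, value) triples and the rule above expressed as a test over any matching object. MIN_LOAN and the loan objects below are assumed values.

```python
# Sketch: long-term memory as (object-id, attribute, value) triples, with the loan
# rule applied across all matching objects. All identifiers and numbers are assumed.
MIN_LOAN = 5000  # assumed threshold, not given in the source

facts = [
    ("loan-17", "loan-size", 2500),
    ("loan-18", "loan-size", 12000),
]

def apply_loan_rule(facts):
    """IF (object-id, loan-size < MIN_LOAN) THEN (object-id, loan-approved = false)."""
    new_facts = []
    for obj, attr, value in facts:
        if attr == "loan-size" and value < MIN_LOAN:
            new_facts.append((obj, "loan-approved", False))
    return new_facts

print(apply_loan_rule(facts))   # [('loan-17', 'loan-approved', False)]
```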
This type of process, where an initial set of facts is used to generate further facts through the application of rules, is called a major control cycle.
The major cycle involves using the available data to match the conditions of rules; of all the matched rules, some may then be selected to execute (have the facts in their conclusions added to the work-space).
Rule-based systems (in some cases called production systems) therefore repeat the following three steps in each major cycle:
- match - find which rules' conditions are matched by data in the work-space;
- select - of all the matched rules, select those which appear most useful (this may involve some conflict resolution, since rules may contradict each other); and
- execute - update the work-space, adding the conclusions of the executed rules as new facts.
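A minimal sketch of this match-select-execute loop follows. The rules and facts are invented for illustration, and conflict resolution is reduced to the trivial strategy of firing every matched rule.

```python
# Sketch of the match-select-execute cycle (all rule contents are illustrative).
# A rule is a pair (conditions, conclusions); the work-space is a set of facts.

rules = [
    ({"has-four-wheels", "carries-passengers"}, {"is-car"}),
    ({"is-car"}, {"needs-registration"}),
]
workspace = {"has-four-wheels", "carries-passengers"}

changed = True
while changed:                                   # one iteration = one major control cycle
    changed = False
    # match: rules whose conditions are all present in the work-space
    matched = [r for r in rules if r[0] <= workspace]
    # select: trivial conflict resolution - fire every matched rule
    for conditions, conclusions in matched:
        # execute: add the conclusions as new facts
        if not conclusions <= workspace:
            workspace |= conclusions
            changed = True

print(workspace)  # now includes 'is-car' and 'needs-registration'
```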
Note that there may be facts stored in the system, and that new facts can be generated. These new facts form the short-term memory of the system and comprise knowledge that has been inferred by applying rules to stored facts. In some systems this inferred knowledge can be added to the long-term memory of the rule base to save having to repeat those inferences for later queries. This idea can be extended further to allow systems to construct general rules that explain particular examples, in a process called Explanation Based Learning (EBL).
Uncertainty in Rule-based systems
Facts may not be certain, so they may have a certainty factor associated with them (in MYCIN this ranges from -1 to 1, for degrees of belief in falsity and truth; in other systems from 0 to 1); so may the conclusions of rules. Rule-based systems are claimed to be modular, with each rule being a module, and because the chain of reasoning can be followed, they allow an easy explanation of their results. However, as the number of rules increases these systems become increasingly inefficient unless the rules can be organised into more efficient structures, which in turn increases the complexity of maintaining the system. Finally, static knowledge about objects cannot easily be represented explicitly.
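When several rules bear on the same conclusion, their certainty factors must be combined. The sketch below uses the MYCIN-style combination formula as it is commonly described in the literature; the source does not give the formula, so treat the details as an assumption.

```python
# Sketch of MYCIN-style certainty-factor combination (as commonly cited, not taken
# from the source). Certainty factors (CFs) range from -1 (false) to 1 (true).

def combine_cf(cf1: float, cf2: float) -> float:
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two rules independently support the same conclusion with CF 0.6 and 0.4:
print(combine_cf(0.6, 0.4))   # 0.76 - belief strengthens but stays below 1
# Conflicting evidence (0.6 for, 0.5 against):
print(combine_cf(0.6, -0.5))  # 0.2
```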
The Inference Engine
The inference engine operates on the two types of memory: the long-term memory (the rules and any stored facts) and the short-term memory (the work-space). As mentioned earlier, inferences are performed during a major control cycle. In each cycle, some rules are matched based on the current contents of the work-space, one or more of the matched rules are selected using some method of conflict resolution, and the conclusions of the selected rule(s) are added to the work-space. Using the results of rules to match the conditions of other rules in a subsequent cycle is a process of rule chaining. Rule chaining allows the system to move from simple information (facts) to useful conclusions based on the system's rules and inference procedure.
Inference processes
Two basic inference processes are:
- forward chaining (data-driven); and
- backward chaining (goal-driven).
Forward chaining starts from the available data and applies rules to generate new facts until a conclusion of interest is produced. Backward chaining takes a hypothesis and attempts to see whether it is supported by the facts; this often involves finding support for sub-goals. For example, consider the following rules:
R1: IF A and B THEN D
R2: IF B THEN C
R3: IF C and D THEN E
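To make the goal-driven process concrete, here is a minimal backward-chaining sketch over rules R1-R3. It is written for illustration only; A and B are assumed to be the known facts.

```python
# Sketch: backward chaining over rules R1-R3 above, with A and B as known facts.
rules = {
    "D": [["A", "B"]],   # R1: IF A and B THEN D
    "C": [["B"]],        # R2: IF B THEN C
    "E": [["C", "D"]],   # R3: IF C and D THEN E
}
facts = {"A", "B"}

def prove(goal: str) -> bool:
    """Try to establish a goal from the facts, recursively proving sub-goals."""
    if goal in facts:
        return True
    for conditions in rules.get(goal, []):
        if all(prove(sub_goal) for sub_goal in conditions):
            return True
    return False

print(prove("E"))   # True: E via C (from B) and D (from A and B)
```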
Search strategies
Note that in both cases either a breadth-first or a depth-first approach can be used. In the case of forward chaining, depth-first will result in only one rule being selected at each step, while for backward chaining, the sub-goals of one sub-goal will be resolved before other sub-goals.
Selecting backward or forward chaining
Backward chaining is preferable when there is a large number of possible inputs relative to the number of goals. Forward chaining is preferable when there is a large number of goals and a relatively small number of possible inputs.
Alternative Systems
Model-based reasoning is based on a deep understanding of the problem domain, rather than the heuristics (rules of thumb) gained from experience that rule-based systems rely on. These systems represent the full complexity of causes and effects rather than using rules which short-cut reasoning processes. Case-based reasoning reuses solutions to similar past problems on new problems. It involves having a case library of past solutions and then determining which of these is most analogous to the current problem; the selected case(s) can then be adapted to deal with the new situation (a minimal retrieval sketch is given after the next paragraph).
Blackboard architectures allow multiple experts to contribute and use knowledge by providing a common area for the sharing of data.
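The sketch below illustrates only the retrieval step of case-based reasoning; the case library, the symptom sets and the similarity measure (a simple count of shared symptoms) are all invented for illustration.

```python
# Sketch (hypothetical case library): case-based reasoning as retrieval of the most
# similar past case, here judged by the number of matching symptoms.

case_library = [
    {"symptoms": {"fever", "cough"}, "diagnosis": "flu"},
    {"symptoms": {"sneezing", "itchy-eyes"}, "diagnosis": "hay fever"},
]

def most_similar_case(new_symptoms: set):
    """Retrieve the past case sharing the most symptoms with the new problem."""
    return max(case_library,
               key=lambda case: len(case["symptoms"] & new_symptoms))

new_problem = {"fever", "cough", "headache"}
print(most_similar_case(new_problem)["diagnosis"])   # 'flu' - then adapted as needed
```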
The User Interface
Typically, the interfaces to expert systems must support not only the user of the system, but also the knowledge engineer and the expert whose knowledge is being represented. This means the system must have a way for new rules and facts to be entered. Unfortunately, it is difficult to ensure that such rules and facts are consistent, and the maintenance of large expert systems over time can be complicated. It is also desirable to have an explanation facility, both for users and for the domain expert: for users so they can accept the recommendations of the system, and for experts so they can verify the answers provided by the system.
Applications of Knowledge-based systems
Uses of knowledge-based systems and the technologies applied:
- Model-based reasoning:
  - financial diagnostics
  - equipment failure diagnosis
- Case-based reasoning:
  - medical diagnosis
  - auditing
  - claims settlement
- Blackboard architectures:
  - speech recognition (Hearsay III)
  - tax planning
- Expert systems:
  - medical diagnosis (MYCIN)
  - hardware configuration for Digital Equipment Corporation (R1/XCON).
Dynamic Programming
The following material is sourced from Cooper and Cooper (1981). Dynamic programming is based on the principle that every optimal policy consists only of optimal sub-policies. Unlike back-propagation neural networks, dynamic programming approaches are not subject to local minima or maxima: global solutions are found.
Dynamic programming approaches are particularly interesting because in solving a problem, they actually solve a whole class of problems. Therefore the calculated solution has the potential to be reused to answer other questions.
Here is an example of a problem which can be elegantly solved using a dynamic programming approach (Cooper and Cooper 1981):
The problem involves a traveller who must journey from his home city (city 1) to a destination (city 10). There are a variety of paths he may take through various cities, and the cost of each leg is indicated by the numbers on the lines connecting the cities (these costs appear in the calculations below). Notice that his trip is divided into five stages, and at each stage he has a choice of which city to go to next; however, he cannot (for example) go directly from a city at stage 1 to a city at stage 3.
He now wants to find the route with the lowest cost. One possible approach is to list all the possible routes and calculate their costs; this would require calculating the costs of 18 different routes of length 4 (3 × 3 × 2 choices across the intermediate stages). An alternative is to use dynamic programming.
Dynamic programming begins by moving backwards from the destination. It first finds the minimum-cost path from the cities in the last intermediate stage (cities 8 and 9) to city 10:
City | Minimum Cost | Path
--- | --- | ---
8 | 380 | 8-10
9 | 280 | 9-10
It then considers the cities at the previous stage (cities 5, 6 and 7), costing each possible continuation using the values just found:
- City 5: (210 + 380) = 590 or (230 + 280) = 510
- City 6: (350 + 380) = 730 or (380 + 280) = 660
- City 7: (290 + 380) = 670 or (400 + 280) = 680
Taking the minimum in each case gives:
City | Minimum Cost | Path
--- | --- | ---
5 | 510 | 5-9-10
6 | 660 | 6-9-10
7 | 670 | 7-8-10
Moving back another stage, the same calculation is performed for cities 2, 3 and 4:
- City 2: (320 + 510) = 830 or (350 + 660) = 1010 or (400 + 670) = 1070
- City 3: (350 + 510) = 860 or (280 + 660) = 940 or (410 + 670) = 1080
- City 4: (300 + 510) = 810 or (250 + 660) = 910 or (200 + 670) = 870
This leads to the table below:
City | Minimum Cost | Path
--- | --- | ---
2 | 830 | 2-5-9-10
3 | 860 | 3-5-9-10
4 | 810 | 4-5-9-10
Finally, from the home city (city 1), the possible costs are (300 + 830) = 1130, (200 + 860) = 1060 or (350 + 810) = 1160, so the minimum-cost route is 1-3-5-9-10 with a total cost of 1060.
Note that as we move back through the stages, each stage uses the results calculated for the previous stage. Note also that, having solved this problem using the dynamic programming approach, no matter which city we are currently in (i.e. no matter where we start), we can find the minimum-cost path to city 10.
Dynamic programming in this case has associated a value with each state (i.e. it has learned a value function).
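A short Python sketch of this backward recursion is given below. The edge costs are read off the worked example above, and the pairing of particular costs with particular cities is an assumption based on the order in which the calculations are presented.

```python
# Sketch reproducing the tables above. The stage structure
# 1 -> {2,3,4} -> {5,6,7} -> {8,9} -> 10 and the edge costs are taken from the example.
costs = {
    1: {2: 300, 3: 200, 4: 350},
    2: {5: 320, 6: 350, 7: 400},
    3: {5: 350, 6: 280, 7: 410},
    4: {5: 300, 6: 250, 7: 200},
    5: {8: 210, 9: 230},
    6: {8: 350, 9: 380},
    7: {8: 290, 9: 400},
    8: {10: 380},
    9: {10: 280},
}

best = {10: (0, [10])}                       # value function: city -> (min cost, path)
for city in sorted(costs, reverse=True):     # work backwards from the destination
    options = [(edge + best[nxt][0], [city] + best[nxt][1])
               for nxt, edge in costs[city].items()]
    best[city] = min(options)

print(best[1])   # (1060, [1, 3, 5, 9, 10])
print(best[6])   # (660, [6, 9, 10]) - the value function answers any starting city
```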
Multi-agent Models and Simulation
The final type of simulation we will look at here is multi-agent modelling. Such models are based on ideas of decentralisation and aim to capture decentralised interactions and feedback loops (Resnick 1997). They are closely associated with the modelling concepts developed in the fields of deterministic chaos and complex systems. Rather than focusing on hierarchy, rules and logic, multi-agent systems allow one to explore the impacts of relationships, interdependencies and negotiation (Resnick 1997). Such models often allow the exploration of emergent phenomena (for example, what causes traffic jams) in complex systems with many interacting autonomous or semi-autonomous agents; a minimal sketch of this idea is given below. Interesting sites that provide tools for such explorations are the Santa Fe Institute's Swarm modelling site and the simpler StarLogo site provided by MIT.
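As a flavour of this style of modelling, the following minimal sketch (not from the source, and far simpler than the Swarm or StarLogo tools mentioned above) simulates cars on a circular road using only local rules; congestion emerges without any agent, or central authority, "causing" the jam.

```python
# Minimal, illustrative agent-based sketch: cars on a circular road accelerate,
# slow down to avoid the car ahead, and occasionally brake at random.
import random

ROAD, CARS, VMAX, STEPS = 100, 30, 5, 50
positions = sorted(random.sample(range(ROAD), CARS))
speeds = [0] * CARS

for _ in range(STEPS):
    for i in range(CARS):
        gap = (positions[(i + 1) % CARS] - positions[i] - 1) % ROAD
        speeds[i] = min(speeds[i] + 1, VMAX, gap)      # accelerate, but respect the gap
        if speeds[i] > 0 and random.random() < 0.3:    # random slowdown
            speeds[i] -= 1
    positions = [(p + v) % ROAD for p, v in zip(positions, speeds)]

print("average speed:", sum(speeds) / CARS)   # well below VMAX once jams form
```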
References
Hurley, P.J. 1991. A Concise Introduction to Logic. 4th Edition. Wadsworth.
Cooper, L. and Cooper, M.W. 1981. Introduction to Dynamic Programming. 1st Edition. Pergamon.
Klein, M.R. and Methlie, L.B. 1995. Knowledge-based Decision Support Systems. 2nd Edition. Wiley.
Waldrop, M.M. 1992. Complexity: The Emerging Science at the Edge of Order and Chaos, Chapter 5, "Master of the Game". Penguin.
Smith, E.E. and Medin, D.L. 1981. Categories and Concepts. Cambridge, MA: Harvard University Press.
McDermott, D. 1981. "Artificial Intelligence Meets Natural Stupidity". Chapter 5 in Haugeland, J. (ed.), Mind Design: Philosophy, Psychology, Artificial Intelligence. MIT Press, pp. 143-160.
Pidd, M. 2009. Tools for Thinking: Modelling in Management Science. 3rd Edition. Wiley.
Resnick, M. 1997. Turtles, Termites and Traffic Jams: Explorations in Massively Parallel Microworlds. MIT Press.
Turban, E., Aronson, J.E., Liang, T-P. and Sharda, R. 2007. Decision Support and Business Intelligence Systems. 8th Edition. Pearson International.