Selecting components of the system

Academic year: 2022


UNIT II


Research Modelling By

Prof. Mohammad Ubaidullah Bokhari


What is a Model?

The model here means “a representation of a thing”. It is also defined as the body of information about a system, gathered for the purpose of studying that system, or as the specification of a set of variables and their interrelationships, designed to represent some real system or process in whole or in part.

Fundamental steps in constructing a model: The various steps in the construction of a model are:

Selecting components of the system

All the components of the system which contribute towards the effectiveness measure of the system should be listed.

Distribution of components

Once a complete list of components is prepared, the next step is to find whether or not to take each of these components into account. This is determined by finding the effect of various alternative courses of action on each of these components. Generally, one or more components (e.g., speedup, throughput) are independent of the changes made among the various alternative courses of action. Such components may be temporarily dropped from consideration.

Combining the Components

It may be convenient to group certain components of the system. For example, the purchase price, freight charges and receiving cost of a raw material can be combined together and called the “raw material acquisition cost”. The next step is to determine, for each component remaining on the modified list, whether its value is fixed or variable. If a component is variable, its relationship with the various aspects of the system should then be determined.
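The grouping step above can be sketched in code. The component names and figures below are hypothetical, chosen only to illustrate combining components into one aggregate and tagging each as fixed or variable:

```python
# Sketch of the "combining components" step; all figures are hypothetical.

# Individual cost components of the system (currency units)
components = {
    "purchase_price": 500.0,   # variable: depends on order quantity
    "freight_charges": 40.0,   # variable: depends on distance/weight
    "receiving_cost": 15.0,    # fixed: per-shipment handling fee
}

# Combine related components into a single aggregate
raw_material_acquisition_cost = sum(components.values())

# Classify each remaining component as fixed or variable
classification = {
    "purchase_price": "variable",
    "freight_charges": "variable",
    "receiving_cost": "fixed",
}

print(raw_material_acquisition_cost)  # 555.0
```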

Advantages of a Model

 It provides a logical and systematic approach to the problem.

 It indicates the scope as well as limitations of a problem.

 It helps in finding avenues for new research and improvements in a system.

 It makes the overall structure of the problem more comprehensible and helps in dealing with the problem in its entirety.

 It permits experimentation and analysis of a complex system without directly interfering in the working and environment of the system (Figure I).

Limitations of a Model

 Models are only idealized representations of reality and should not be regarded as absolute in any case.

 The validity of a model for a particular situation can be ascertained only by conducting experiments on it.

What is modelling?

Modelling is the application of methods to analyze complex, real-world problems in order to make predictions about what might happen with various actions (Shiflet & Shiflet 2006). Modelling is the process of producing a model; a model is a representation of the construction and working of some system of interest. One purpose of a model is to enable the analyst to predict the effect of changes to the system. On the one hand, a model should be a close approximation to the real system and incorporate most of its salient features. On the other hand, it should not be so complex that it is impossible to understand and experiment with it. A good model is a judicious trade-off between realism and simplicity. Simulation practitioners recommend increasing the complexity of a model iteratively. An important issue in modelling is model validity. Model validation techniques include simulating the model under known input conditions and comparing model output with system output.
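The validation technique just mentioned can be sketched in a few lines: run the model under known input conditions and check that its output tracks the recorded system output. The model form, the data, and the tolerance below are all hypothetical:

```python
# Minimal sketch of model validation: run the model under known input
# conditions and compare its output with recorded system output.
# The model form and the data below are hypothetical.

def model(arrival_rate):
    """Toy model: predicted average queue length as a linear function."""
    return 2.0 * arrival_rate  # assumed relationship

# Known input conditions and the system output observed under them
known_inputs = [0.1, 0.2, 0.3]
system_output = [0.21, 0.39, 0.62]

# Validate: model output should track system output within a tolerance
tolerance = 0.05
valid = all(
    abs(model(x) - y) <= tolerance
    for x, y in zip(known_inputs, system_output)
)
print(valid)  # True
```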

The objectives or purposes which underlie the construction of models may vary from one decision-making situation to another. In one case a model may be used for explanation purposes, whereas in another it may be used to arrive at the optimum course of action. The different purposes for which modelling is attempted can be categorized as follows:

 Description of the system functioning.

 Prediction of the future.

 Helping the decision researcher decide what to do.

Description of the system functioning

The first purpose is to describe or explain a system and the processes therein. Such models help the researcher or the manager in understanding complex, interactive systems or processes. The understanding, in many situations, results in improved decision making.

Prediction of the future

The second objective of modelling is to predict future events. Sometimes the models developed for description/explanation can be utilized for prediction purposes also. Of course, the assumption made here is that past behaviour is an important indicator of the future. The predictive models provide valuable inputs for decision-making.

Helping the decision researcher decide what to do

The last major objective of modelling is to provide the researcher with inputs on what he or she should do when researching a particular topic. The objective of modelling here is to optimize the decision of the researcher subject to the constraints within which he or she is operating.

Modelling Tools

Spreadsheets (Excel):

Simple, easy to master, transferable skill.

Systems Dynamics tools (Vensim):

Visual representation of the model; limited to accumulating quantities over time.

Programming environments (MATLAB):

Powerful, transferable, high-demand skill; hardest to learn.

High-Performance Computing (Cluster, GPU):

Enables modelling of very complex systems (protein folding, weather) in reasonable time.

Types of Models

Models have been classified in many ways. The dimensions used in describing models are:

Macro vs. Micro.

Physical vs. Mathematical.

Dynamic vs. static.

Deterministic vs. Stochastic

Analytical vs. Numerical.

Macro vs. Micro Models

The terms macro and micro in modelling are also referred to as aggregative and disaggregate respectively.

The macro models present a holistic picture of a decision-making situation in terms of aggregates. The micro models include explicit representations of the individual components of the system.

Physical Models

In physical models a scaled down replica of the actual system is very often created. Engineers and scientists usually use these models.


Dynamic models vs. Static Models

The distinction rests on the consideration of time as an element in the model. Static models assume the system to be in a balanced state and show the values and relationships for that state only. Dynamic models, however, follow the changes over time that result from the system activities. Obviously, the dynamic models are more complex and more difficult to build than the static models. At the same time, they are more powerful and more useful for most real-life situations.

Deterministic vs. Stochastic Models

Another way of classifying models is into the deterministic and the probabilistic/stochastic ones. The stochastic models explicitly take into consideration the uncertainty that is present in the decision-making process being modelled. The design of a network server, for example, falls under the stochastic category.
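The distinction can be shown with a toy service-time model (the figures are illustrative, not from the text): a deterministic model returns the same answer every run, while a stochastic model represents uncertainty explicitly, so repeated runs differ:

```python
import random

# Deterministic model: same inputs always give the same output.
def deterministic_service_time(jobs):
    return jobs * 2.0  # assumed fixed 2.0 time units per job

# Stochastic model: uncertainty is represented explicitly, so repeated
# runs with the same inputs give different outputs.
def stochastic_service_time(jobs, rng):
    # exponentially distributed service times with mean 2.0 each
    return sum(rng.expovariate(1 / 2.0) for _ in range(jobs))

print(deterministic_service_time(10))       # always 20.0
rng = random.Random(42)
print(stochastic_service_time(10, rng))     # varies from run to run
```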

Analytical vs. Numerical Models

The analytical and the numerical models refer to the procedures used to solve mathematical models.

Mathematical models that use analytical techniques (meaning deductive reasoning) can be classified as analytical type models. Those which require a numerical computational technique can be called numerical type mathematical models.

Model Building

The approach used for model building or model development will vary from one research situation to another. However, we can enumerate a number of generalized steps which can be considered common to most modelling efforts. The steps are:

i. Identifying and formulating the problem in hand.

ii. Identifying the objective(s) of the problem.

iii. System elements identification and block building.

iv. Determining the relevance of different aspects of the system.

v. Choosing and evaluating a model form.

vi. Model calibration (simulation).

vii. Implementation.

The decision problem for which the researcher intends to develop a model needs to be identified and formulated properly. Precise problem formulation can lead one to the right type of solution methodology.

This process can require a fair amount of effort. Improper identification of the problem can lead to solutions for problems that either do not exist or are not important enough.


Simulation

Simulation is a tool to evaluate the performance of a system, existing or proposed, under different configurations of interest and over long periods of real time. Simulation is used before an existing system is altered or a new system is built, to reduce the chances of failure to meet specifications, to eliminate unforeseen bottlenecks, to prevent under- or over-utilization of resources, and to optimize system performance. The steps involved in developing a simulation model, designing a simulation experiment, and performing simulation analysis are listed below; Figure II shows a Seven-Step Approach for Conducting a Successful Simulation Study.

Step 1. Identify the problem.

Step 2. Formulate the problem.

Step 3. Collect and process real system data.

Step 4. Formulate and develop a model.

Step 5. Validate the model.

Step 6. Document model for future use.

Step 7. Select appropriate experimental design.

Step 8. Establish experimental conditions for runs.

Step 9. Perform simulation runs.

Step 10. Interpret and present results.

Step 11. Recommend further course of action.

Although this is a logical ordering of the steps in a simulation study, considerable iteration at various sub-stages may be required before the objectives of the study are achieved. Not all the steps may be possible and/or required. On the other hand, additional steps may have to be performed. The next three sections describe these steps in detail.

How to Develop a Simulation Model?

Simulation models consist of the following components: system entities, input variables, performance measures, and functional relationships.

Figure II


Step 1. Identify the problem

Enumerate problems with an existing system. Produce requirements for a proposed system.

Step 2. Formulate the problem

Select the bounds of the system - the problem, or a part thereof, to be studied. Define the overall objective of the study and a few specific issues to be addressed. Define performance measures - quantitative criteria on the basis of which different system configurations will be compared and ranked. Identify, briefly at this stage, the configurations of interest and formulate hypotheses about system performance. Decide the time frame of the study, i.e., will the model be used for a one-time decision (e.g., capital expenditure) or over a period of time on a regular basis (e.g., air traffic scheduling)? Identify the end user of the simulation model, e.g., corporate management versus a production supervisor. Problems must be formulated as precisely as possible.

Step 3. Collect and process real system data

Collect data on system specifications (e.g., bandwidth for a communication network), input variables, as well as performance of the existing system. Identify sources of randomness in the system, i.e., the stochastic input variables. Select an appropriate input probability distribution for each stochastic input variable and estimate corresponding parameter(s).
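The parameter-estimation part of this step can be sketched as follows. The observations are hypothetical; for exponentially distributed inter-arrival times, the maximum-likelihood estimate of the rate is simply the reciprocal of the sample mean:

```python
# Sketch of step 3: select an input probability distribution for a
# stochastic input variable and estimate its parameter(s). For
# exponential inter-arrival times, the maximum-likelihood estimate of
# the rate is 1 / sample mean. The observations below are hypothetical.

interarrival_times = [0.8, 1.3, 0.4, 2.1, 0.9, 1.6, 0.7, 1.2]

sample_mean = sum(interarrival_times) / len(interarrival_times)
rate_estimate = 1.0 / sample_mean  # lambda for Exp(lambda)

print(round(sample_mean, 3), round(rate_estimate, 3))  # 1.125 0.889
```

A goodness-of-fit test (e.g., chi-square) would normally follow to check that the chosen distribution is adequate.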

Step 4. Formulate and develop a model

Develop schematics and network diagrams of the system (How do entities flow through the system?).

Translate these conceptual models to simulation software acceptable form. Verify that the simulation model executes as intended. Verification techniques include traces, varying input parameters over their acceptable range and checking the output, substituting constants for random variables and manually checking results, and animation.

Step 5. Validate the model

Compare the model's performance under known conditions with the performance of the real system.

Perform statistical inference tests and get the model examined by system experts. Assess the confidence that the end user places in the model and address problems, if any. For major simulation studies, experienced consultants advocate a structured presentation of the model by the simulation analyst(s) before an audience of management and system experts. This not only ensures that the model assumptions are correct, complete and consistent, but also enhances confidence in the model.


Step 6. Document model for future use

Document objectives, assumptions and input variables in detail.

How to Perform Simulation Analysis?

Most simulation packages provide run statistics (mean, standard deviation, minimum value, maximum value) on the performance measures, e.g., wait time (a non-time-persistent statistic) or inventory on hand (a time-persistent statistic). Let the mean wait time in an M/M/1 queue observed from n runs be W1, W2, ..., Wn. It is important to understand that the observed mean wait time W is a random variable, and the objective of output analysis is to estimate the true mean of W and to quantify its variability.
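The replication analysis described above can be sketched as follows. The traffic parameters and run counts are assumed for illustration; waiting times in each run are generated with Lindley's recurrence for a single-server queue, and the run means play the role of W1, ..., Wn:

```python
import random
import statistics

def mm1_mean_wait(lam, mu, n_customers, rng):
    """One run: mean wait in queue via Lindley's recurrence
    W_{i+1} = max(0, W_i + S_i - A_{i+1})."""
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        total += wait
        service = rng.expovariate(mu)        # service time of this customer
        interarrival = rng.expovariate(lam)  # gap to the next arrival
        wait = max(0.0, wait + service - interarrival)
    return total / n_customers

rng = random.Random(1)
runs = [mm1_mean_wait(lam=0.5, mu=1.0, n_customers=20_000, rng=rng)
        for _ in range(5)]                   # W1, ..., W5

w_bar = statistics.mean(runs)    # point estimate of the true mean wait
spread = statistics.stdev(runs)  # variability across runs
# Analytical check for M/M/1: Wq = lam / (mu * (mu - lam)) = 1.0 here
print(round(w_bar, 2), round(spread, 2))
```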

Monte Carlo simulation

The Monte Carlo method is a simulation technique in which statistical distribution functions are sampled by using a series of random numbers. The strength of this approach is the ability to develop many months or years of data in a matter of a few minutes.

The method is generally used to solve problems which cannot be adequately represented by a mathematical model, or where solution of the model is not possible by analytical methods.
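A minimal illustration of the Monte Carlo idea (not taken from the text) is estimating π by drawing random points in the unit square and counting how many fall inside the quarter circle:

```python
import random

# Classic Monte Carlo sketch: the fraction of random points in the unit
# square that land inside the quarter circle approaches pi/4.
rng = random.Random(0)
n = 100_000
inside = sum(
    1 for _ in range(n)
    if rng.random() ** 2 + rng.random() ** 2 <= 1.0
)
pi_estimate = 4.0 * inside / n
print(round(pi_estimate, 3))  # close to 3.142
```

The estimate improves with more samples at the usual 1/sqrt(n) Monte Carlo rate.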

 Simulation is experimentation with models. Simulation studies for design or research can involve many hundreds of model changes, so programming must be convenient, and computations must be as fast as possible.

 Simulation ranks very high among the most widely used of techniques. Because it is such a flexible, powerful, and intuitive tool, it continues to grow rapidly in popularity.

 This technique involves using a computer to imitate (simulate) the operation of an entire process or system.

 Simulation also is widely used to analyze stochastic systems that will continue operating indefinitely.

For such systems, the computer randomly generates and records the occurrences of the various events that drive the system just as if it were physically operating. Because of its speed, the computer can simulate even years of operation in a matter of seconds.


 To prepare for simulating a complex system, a detailed simulation model needs to be formulated to describe the operation of the system and how it is to be simulated.

Types of Simulation Models

Simulation of deterministic model.

Simulation of probabilistic model.

Simulation of static model.

Simulation of dynamic model.

Difference between Simulation and Modelling

Modelling:

Construct a conceptual framework that describes a system.

Simulation:

Perform experiments using computer implementation of the model.

Analyzing:

Draw conclusions from the output that assist in the decision-making process.
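The three activities can be sketched end to end with a toy coin-flip system (purely illustrative, not from the text):

```python
import random
import statistics

# Modelling: a conceptual framework that describes the system.
def model_flip(rng):
    return rng.random() < 0.5    # fair-coin assumption

# Simulation: perform experiments using a computer implementation
# of the model.
rng = random.Random(7)
outcomes = [model_flip(rng) for _ in range(10_000)]

# Analyzing: draw conclusions from the output.
heads_fraction = statistics.mean(outcomes)
print(0.45 < heads_fraction < 0.55)  # True for a fair coin
```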

Data Analysis and Testing

Data analysis is a practice in which raw data is ordered and organized so that useful information can be extracted from it. The process of organizing and thinking about data is key to understanding what the data does and does not contain. There are a variety of ways in which people can approach data analysis, and it is notoriously easy to manipulate data during the analysis phase to push certain conclusions or agendas.

For this reason, it is important to pay attention when data analysis is presented, and to think critically about the data and the conclusions which were drawn.

Objective of Data Analysis

i. Evaluate and enhance data quality.

ii. Describe the study population and its relationship to some presumed source (account for all in-scope potential subjects; compare the available study population with the target population).

iii. Assess potential for bias (e.g., non-response, refusal, attrition, comparison groups).

iv. Estimate measures of frequency and extent (prevalence, incidence, means, and medians).

v. Estimate measures of strength of association or effect.

vi. Assess the degree of uncertainty from random noise (“chance”).

vii. Control and examine effects of other relevant factors.

viii. Seek further insight into the relationships observed or not observed.

ix. Evaluate impact or importance.
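Objective iv above (measures of frequency and extent) can be sketched with hypothetical study data:

```python
import statistics

# Sketch of objective iv: estimate measures of frequency and extent
# from raw observations. The study data below are hypothetical.
ages = [34, 45, 29, 52, 41, 38, 47, 33, 55, 40]
cases = [0, 1, 0, 1, 0, 0, 1, 0, 1, 0]   # 1 = condition present

prevalence = sum(cases) / len(cases)      # proportion with the condition
mean_age = statistics.mean(ages)
median_age = statistics.median(ages)

print(prevalence, mean_age, median_age)   # 0.4 41.4 40.5
```

Incidence would additionally require follow-up time, which this cross-sectional sketch omits.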


Mathematical Modelling & Simulation


Modelling and Simulation

Modelling and simulation (M&S) refers to using models – physical, mathematical, or otherwise logical representations of a system, entity, phenomenon, or process – as a basis for simulations – methods for implementing a model, either statically or over time – to develop data as a basis for managerial or technical decision making. M&S helps in getting information about how something will behave without actually testing it in real life. For instance, to determine which type of spoiler would improve traction the most while designing a race car, a computer simulation of the car could be used to estimate the effect of different spoiler shapes on the coefficient of friction in a turn. Useful insights about different decisions in the design could be gleaned without actually building the car.

The use of M&S within engineering is well recognized. Simulation technology belongs to the tool set of engineers of all application domains and has been included in the body of knowledge of engineering management. M&S helps to reduce costs, increase the quality of products and systems, and document and archive lessons learned.

M&S is a discipline on its own. Its many application domains often lead to the assumption that M&S is pure application. This is not the case and needs to be recognized by engineering management experts who want to use M&S. To ensure that the results of simulation are applicable to the real world, the engineering manager must understand the assumptions, conceptualizations, and implementation constraints of this emerging field.

Application domains

There are many categorizations possible, but the following taxonomy has been very successfully used in the defence domain, and is currently applied to medical simulation and transportation simulation as well.

Analyses Support is conducted in support of planning and experimentation. Very often, the search for an optimal solution that shall be implemented is driving these efforts. What-if analyses of alternatives fall into this category as well. This style of work is often accomplished by “simulysts” – those having skills in both simulation and analysis. This blending of simulation and analysis is well noted in Kleijnen.

Systems Engineering Support is applied for the procurement, development, and testing of systems. This support can start in early phases and include topics like executable system architectures, and it can support testing by providing a virtual environment in which tests are conducted. This style of work is often accomplished by engineers and architects.

Training and Education Support provides simulators, virtual training environments, and serious games to train and educate people. This style of work is often accomplished by trainers working in concert with computer scientists.

A special use of Analyses Support is applied to ongoing business operations. Traditionally, decision support systems provide this functionality. Simulation systems improve this functionality by adding the dynamic element, allowing one to compute estimates and predictions, including optimization and what-if analyses.

Introduction

The lessons presented in this paper were learned from many years of experience building simulations for the Department of Defence. Military simulation products are very much like computer games, and in some cases are the computer games of the future. Therefore, these lessons may have current and future applicability to game developers, just as they have guided military modellers in the past.

The military defines a model as "an abstraction that represents the state or behaviour of a system to some degree". A simulation is "a complete system that exercises models for the purpose of training, analysis, or prediction". In this context it is important to point out that both models and simulations are expected to represent the objects and events of a virtual world accurately, though the level of detail may vary greatly.

Simulations can be found that represent individual vehicles, their articulated parts, the physics of movement, and ballistic fly-outs of individual munitions. Other, equally accurate and useful, simulations represent hundreds of vehicles as a single icon, its combat capability as one variable, terrain as an enumeration covering multiple kilometres, and engagement as a force exchange ratio. In both cases, the fidelity of the system is less important than its consistency in capturing interactions between objects.

Consistency of interaction allows the user to adjust their view of the world and enter it at the appropriate level.

Simulations vary widely, but beneath the surface they all begin as an exercise in capturing salient features of the real world and translating those into a virtual world. In all cases the process of doing this successfully follows some fundamental principles which are described in this paper. Of course, we do not pretend that every principle is presented here, only that these are a useful set that can serve as invaluable guidelines in developing simulations, models, games, virtual worlds, and digital playgrounds.

The Golden Rule of Modelling.

There is one rule that far overshadows all others. This has been elevated to the status of The Golden Rule.

It is a guiding light that is obvious once you hear it, unforgettable once you know it, easily propagated if you believe it, but quickly forgotten when you are absorbed in the process of creating a model, simulation, game, or virtual world.

The Golden Rule of Modelling.

"A model has no inherent value of its own. The value of a model is based entirely upon the degree to which it solves someone's real-world problem."

"Obvious!" you say. "Who could forget that?" you ask. "We teach it to all our people," you claim. But how many products reflect it? How many programmers adhere to it? What is done to instill it? Like all golden rules, it is easy to accept, but hard to follow.

In the military realm, the Golden Rule directs us to consider the purpose for which the simulation is being acquired - such as training, analysis, or prediction. The Golden Rule dictates the level of fidelity necessary to solve the problem, the extra features that can be added to each model, the amount of data presented on a user interface, and hundreds of other characteristics. The Golden Rule drives us to build a simulation focused on the current and future problems that our customers will face. In computer games, the effects are the same, but may be expressed in different words because of the relationship to the marketplace of customers and the predominant mission of entertaining your customer.

There are two major offenders of the Golden Rule. The first is the manager or marketer who advertises features that do not exist, that are totally superfluous, or that disturb the interactive balance needed to ensure a good model. Marketing meetings, press interviews, and product briefings have a life of their own and result in the creation of features that were never intended for the product. Unfortunately, once uttered, these descriptions must be made into software facts. This practice has been going on since the first program was written, but has become commonly recognized through hilarious features in the Dilbert cartoon.

The second offender is the programmer who pushes interesting ideas, new algorithms, and secret capabilities into the software. Though exciting and challenging to add to the simulation, these features are not free. They add cost in development hours, CPU cycles, code complexity, maintenance, and justification when found out. The marketers have been vilified for their role in violating the Golden Rule of Modelling, but the programmers have remained relatively unscathed, retaining their image of clever nonconformists. Occasionally, violators are vindicated when the customer later demands the capabilities that entered the system this way. However, the added features are usually just burdens that dog the product throughout its life.

Axioms to the Golden Rule

Any universal rule worth its salt will generate axioms that further define the implications of the rule.

These axioms describe specific applications of the golden rule or principles that follow if the rule is true.

Axiom #1: Models are not universally useful, but are designed for specific purposes.

Every customer or market segment has a different set of needs. Sometimes these needs are closely aligned with the needs of previous customers. More often, some segment of those needs is divergent from those of the original customer. Therefore, a model that was the perfect solution yesterday may be totally inadequate today. There was a time when Lanchester’s differential equations (published in 1916) were the miracle cure for all direct fire combat modelling. But the assumptions behind those equations grow less valid every day. Today, new methods are demanded for the same task. There was also a time when we were all mesmerized by the brilliance of text-based adventure games and Pong on television sets.

Divergence is always a function of time, but it is also a function of the domain in which the customer exists. Though Quake may be a bestselling shooter for male players, it is probably not the core for a market blockbuster aimed at female customers.

Axiom #2: A great model of the wrong problem will never be used.

There is a catalogue describing nearly all of the simulations owned or operated by the Department of Defence. This catalogue lists nearly a thousand systems, many of which were amazing solutions to a specific problem. However, they are usually custom crafted solutions for that specific problem. Once that problem is solved, the model is no longer of any use. The same happens if the problem transforms itself, as the dissolution of the Soviet Union has done to force-on-force combat models. Models that cannot transform themselves as well will fall into the corners of dark closets, never to be fired up again.


Games face this same fate. Thousands of games are available for play, but only a few hit the customer’s needs and wants right on the head. Many models begin with a target in mind, but during the development process they lose sight of that target. These fall into the dark closet because they are pushing the hot buttons of someone besides the customer (probably those of the programmer or project management). If completed, these systems are great solutions for a need that no one has.

Axiom #3: Learning to model is better than learning about models.

People who know all about old models, games, or techniques are excellent sources of ideas and lessons learned from the past. However, this knowledge must be combined with an understanding of the fundamental principles for creating a new model. Without this, the model historian will spend his or her life creating combinations of products that already exist. There is certainly a need for this. Every model or game can benefit from incorporating good ideas from other games. However, the state of the art moves forward because people understand what is involved in the process of modelling or game design. They know what is essential and what is only a specific implementation. These people can invent the next generation, produce the blockbuster titles, and solve problems that no one else could crack. (Remember the first time you saw Wolfenstein?)

The 10 Commandments of Modelling

Through years of experience and interviews with other long-time developers we have arrived at a list of ten principles for building a successful simulation product. These have been organized into the 10 Commandments of Modelling. Like the original 10 commandments, these are touchstones for success that can be kept in the forefront of your mind. But many other rules and guidelines are needed to support, enhance, and clarify these to help you create a great product.

I. Simplify, Simplify

When building a model, game, or simulation, a good team can always envision and implement more details than are really necessary to make the product a success. The fertile brains and abundant energy of great people can always imagine and program much more than the customer has asked for, needs, or can appreciate.

The team must be bounded by the needs of the specific product. Additional great ideas should be captured and placed on the storyboard for the next product. If allowed to render every vision into software, the final product will be a bloated and confused medley of ideas that are not clearly tied together, or tied by the thinnest of threads. Great military simulations and computer games focus on a specific mission and do that job very well. Within the military, the Janus and ModSAF simulations have been extremely successful. Janus represents individual or aggregate objects on the battlefield and executes at very discrete time-steps. It is not a virtual simulation, uses poor graphics, has an archaic user interface, and requires prodigious amounts of time to build Pk tables for every possible interaction. But, it allows training and operational evaluation at a level that is needed by a large military audience. Similarly, ModSAF is a single CPU simulation that is more advanced than Janus, but is constantly criticized for what it cannot do.

Programmers who extend the AI in the system complain about the limits imposed by the Finite State Machine architecture and the inability to create complex, linked behaviours. However, the system is used on hundreds of projects and continues to be the most widely proliferated simulation within the Department of Defence. ModSAF meets a specific need in a convenient, usable, and modifiable package.

Around 1320 Sir William of Occam summed up the need for simplicity in what has become known as Occam’s razor – in English, "hypotheses are not to be complicated without necessity". Dr. Robert Shannon, a pillar of the discrete event simulation community, has stated that "The tendency is nearly always to simulate too much detail rather than too little. Thus, one should always design the model around questions to be answered rather than imitate the real system exactly." This wise advice is emphasized by Albert Einstein himself, who maintained that "Everything should be made as simple as possible – but no simpler."

II. Learn from the Past

Successful systems of the past were built by very intelligent and energetic people working with the best tools available at the time. They arrived at solutions that would fit into the computer available and applied considerable ingenuity and feats of engineering to achieve this. It is easy to look backward and smile at those primitive products. But, within each of them are nuggets of gold that should be mined when creating a new system.

Legacy systems, as they are called in the military, are packed with good ideas that can be reused.

Compact solutions to complex problems are embedded in every algorithm. Even military simulations have a "ship date" at which version 1.0 is delivered to the customer. However, these systems continue to grow and improve for decades. It is not unusual to find a 1970s-era FORTRAN simulation running on a DEC VAX. But the software will have been improving and maturing internally for 20 years and is far beyond the capabilities delivered in version 1.0.

Model developers who study these are continually amazed by the complex virtual world that has been squeezed into these old machines. The creativity born of limited resources is capable of achieving what appears impossible to the general observer.

Of course, these old systems provide lessons on what not to do as well. The same traps and snares that snagged your predecessor a decade before are waiting for the new developer. Ask yourself why your predecessor did not use some of the ideas you are considering.

III. Create a Conceptual Model

A team of young, energetic, talented programmers is always eager to start programming immediately. This admirable quality must be harnessed and directed toward the very difficult process of creating a conceptual model that will serve as the blueprint and foundation for the product. This is a part of the design process that attempts to capture the characteristics of the real world that will be represented in the software.

Conceptual modelling consists of selecting the objects, attributes, events, and interactions that will form the product. Without resorting to programming, you want to identify and define a set of these that work together to form a complete, complementary, and efficient product. When creating a virtual world there are an uncountable number of combinations of characteristics and intentions. Some are empowering, some inert, and some fatal. A working conceptual model will define a virtual world that operates efficiently and appears to be complete and consistent. Designers can experiment with new ideas and trace their impacts on other algorithms within the system. Constant experimentation arrives at a package that is the best that can be found, and does so without the long development times needed to do so in software.
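A conceptual model of this kind can be recorded as plain data long before any real programming begins. The sketch below is one minimal, hypothetical way to do this (all object, attribute, and event names are invented for illustration): objects with attributes, events that connect them, and a cheap consistency check that designers can run while they experiment.

```python
from dataclasses import dataclass

# A hypothetical conceptual model: object types with attributes, plus the
# events that connect them. Nothing here is executable behaviour -- it is
# a blueprint the team can review and revise cheaply.

@dataclass
class ObjectType:
    name: str
    attributes: list  # names of the attributes this object carries

@dataclass
class Event:
    name: str
    source: str       # object type that triggers the event
    target: str       # object type affected by it
    effect: str       # plain-language description of the interaction

# Example entries for a small combat-style virtual world
objects = [
    ObjectType("Vehicle", ["position", "speed", "fuel"]),
    ObjectType("Sensor", ["range", "field_of_view"]),
]
events = [
    Event("Detection", source="Sensor", target="Vehicle",
          effect="Sensor reports a Vehicle inside its range"),
]

# A quick consistency check: every event must refer to defined objects.
known = {o.name for o in objects}
for e in events:
    assert e.source in known and e.target in known, e.name
print(f"{len(objects)} object types, {len(events)} events -- model consistent")
```

Because the model is just data, adding or removing an entity and tracing its impact costs minutes rather than the weeks a software change would take.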

IV. Build a Prototype

One of the reasons teams skip the Conceptual Model phase is the extreme difficulty of mentally envisioning and defining an entire virtual world and the infrastructure that will support it. But, having worked through that process there are always questions and assumptions that cannot be evaluated without working software. A prototype should be written to explore these dark corners of the conceptual model.

There is no need for a prototype to look like the final product. It must enlighten the programmers who are about to jump into the problem, give them ideas, options, and tools to find the best solution to the problem.

An engineering prototype has the same objective as a conceptual model – to clarify the structure, algorithms, and capabilities of the final product. As essential as this mission is, it must be bounded in time and money relative to the final schedule and budget. Both steps are essential, but neither produces a final product that can be shipped. These are tools to help create a better product, not substitutes or excuses for avoiding product development. Nor can you expect these to iron out all of the problems, questions, and mistakes that will be encountered when programming the simulation – they just help reduce the number and severity of future problems.

Finally, to quote Bill Joy of Sun Microsystems, "Large successful systems come from small successful systems." So where do you think large failures come from?

V. Push the User’s Hot Buttons

The game community appears to be better at this principle than the military simulation community. The desire for a beautiful work of engineering genius that will be admired by your peers sometimes leads to products that are perfect at solving the wrong problem. There are many simulation systems that are never used because they solved a problem that no one has.

The development team must be in touch with the customer and understand what gets them excited. When they use a model today, what really turns them on? What makes their job easier? What makes them recommend the product to others? What infuriates them about current models? What are they trying to do, but are thwarted by the limitations of the model? What is dead wrong, laughable, and embarrassing about their current set of tools?

Your new product must capture the success of the old products, but overcome their limitations. Capturing success does not mean duplicating the product (though a copycat product is sometimes the solution), but requires that you achieve the same level of user excitement.

It is easy to fall into the trap of creating a product that the developer wants rather than what the customer wants. But the market base for that product is extremely small.

VI. Model to Data Available


Military simulations for training, analysis, and prediction must accurately capture the performance and behaviour of existing systems. These simulations may be used to ingrain life-and-death behaviours in soldiers, guide multi-million dollar purchasing decisions, or direct the future structure of the US military.

If they are not accurate, the results can be catastrophic. Therefore, the models must be based on known characteristics and behaviours of the real systems being replicated. But data on these systems is scarce.

During a real war the emphasis on capturing data objectively for future decision-makers is overshadowed by the need to stay alive and accomplish the mission. As a result we have a very limited set of quantifiable information about how combat works and how battles unfold.

Model developers need to be aware of the databases that exist in their areas. They need to understand what data exists and what data is totally unavailable. Every software model or game requires data that does not exist in any official or unofficial form. Every model requires that data be synthesized from what is available and from the subjective experiences of soldiers who have performed the operations. But, an effort needs to be made to provide a foundation for the model based on the scarce data that does exist.

VII. Separate Data from Software

If you read simulation code back through time you will see that we have been learning this lesson for 20 years. In the past, the budget for CPU and memory dictated very terse implementations of models. As a result, these tended to be made up of algorithms that had been tuned to the specific situation for which they would be used. Changing the situation required changing the software. However, thanks to improvements from the hardware industry we can now afford the luxury of moving some of our assumptions and system tuning into data that can be changed by the team or by the customer. This results in a product that is much more flexible and valuable to the user.

Even games now allow the user to create their own scenarios, to add new models, and to modify the visual scenes. This power is one of the user’s hot buttons and can only be pushed when the models are driven by data that is accessible to non-programmers.
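The shift from hard-coded assumptions to user-editable data can be illustrated with a small sketch. The parameter names and values below are hypothetical; in a shipped product the JSON string would live in a scenario file that a non-programmer could edit without touching the software.

```python
import json

# Hypothetical model parameters kept as data, not as constants in the code.
# In a real product this string would be loaded from a user-editable file.
scenario_json = """
{
    "vehicle_speed_kph": 60,
    "sensor_range_km": 12,
    "fuel_burn_per_km": 0.35
}
"""

params = json.loads(scenario_json)

def fuel_needed(distance_km, p):
    # The algorithm stays in software; the numbers it uses come from data.
    return distance_km * p["fuel_burn_per_km"]

print(fuel_needed(100, params))  # fuel for a 100 km trip under this scenario
```

Changing the scenario now means editing a data file, not recompiling the model – which is exactly the flexibility the user-created scenarios described above depend on.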

To paraphrase Art Linkletter, "Users change the darndest things." As programmers we often underestimate the creativity and cleverness of a dedicated user. Who could have imagined all of the Quake conversions that have emerged? We are just now coming to appreciate this and support it with data-driven models, scripting languages, and tools to safely manipulate this data.

VIII. Trust Your Creative Juices

When working with a new team that has not created a simulation before, I notice that they are afraid to move forward without explicit direction and definition about what they should build. They are afraid that they will head off in the wrong direction and create a product that others will criticize. This fear of criticism is more crippling than their aversion to reworking a program that has gone wrong.

Experienced members of the team must demonstrate, instil, and encourage the brave act of trusting your own creative juices. The team leaders must provide the vision for the entire product, but each programmer, designer, and artist must have the freedom and confidence to express their vision in the product.


On military projects the fear of making mistakes results in constant repetition of requirements analysis, organizational restructuring, product research, and unproductive meetings. The team avoids making concrete decisions about the design of the product. They will not allow programmers to finish a conceptual model or build a prototype. Thousands of man-hours can be wasted in this trap. But eventually this cycle will be broken by one of the following events:

- Arrival of a decisive and motivational leader,
- Back-alley software development by disobedient programmers,
- Arrival of a hard deadline, or
- Cancellation of the project.

Good leaders will not abide remaining in this trap. Experienced programmers cannot stand to vacillate around a problem they know how to solve. Self-confident programmers (new and experienced) will march somewhere of their own accord. If your team does not trust its own creative juices and abilities you either have a poor leader, an unskilled team, or a stifling organization.

IX. Fit Universal Constraints

Every product is bounded by the universal constraints of Quality, Time, Money, and Competence.

When you run out of one of these, the product is finished regardless of any software details. Managers have been taught to fit products into the bounds of the first three, but are largely unaware of the fourth.

The quality, detail, and capabilities of a simulation or game are unlimited in and of themselves. The time to produce the product dictates the level of quality in its many forms. The amount of money available limits the size of the team and is tied directly to the time factor (since we all expect to get paid every month). These three constraints are preached in multiple management courses and textbooks and are applied to every form of product under the sun.

However, there is a fourth constraint – competence. Some projects require skills that are in short supply. Therefore, a generously funded project with a long schedule may still be strangled by the inability to hire people with the skills needed to do the work. Good leaders, programmers, designers, and artists are not available to do all of the work that companies want done. As a result, some projects are understaffed and others are staffed with incompetent people.

A successful project must fit into the boundaries formed by all four of these constraints.

X. Distill Your Own Commandments


We opened this discussion emphasizing that there are many more than ten principles of modelling. The nine listed so far have been derived from the experiences of very talented people. However, the readers of this paper have a rich pool of their own experiences. That pool contains valuable lessons that fit your current project, profession, or hobby. Each of you should distil your own set of commandments to avoid making the same mistakes you have made in the past, to gravitate toward what you know can be successful projects, and to create a working environment that is productive, rewarding, and profitable. Place confidence in your own lessons; they will be with you forever, and you cannot count on outsiders to solve your problems for you and guide your career.

The Laws of Data

When building a virtual version of the real world it is essential that you be able to describe the real world in some numerical or rule-based form that can be coded into software. Without this, the models are always a shot in the dark. Since a model is a dynamic picture of the behaviour of a system, it is very difficult to evaluate how accurate it is. The initialization data that starts up the system is one indicator of how accurate it can be. It is certainly not the only, or even the strongest, indicator. But it is a very measurable and tangible indicator. When the model or game is running, events move too quickly to provide a good feel for their validity. This leads to the capture of the information in a log file that can be studied closely and slowly.

Military simulations may be much more finicky about the accuracy of their data than are computer games. But, as the virtual worlds in games become richer and more interactive, the need for accurate data that interacts in reasonable and realistic patterns will increase. Customers get a feel for the realism of the simulation based on its interactions more than its static appearance (in the form of initialization data or screen shots). Creating believable or realistic models is predicated on capturing the characteristics, behaviours, and interactions of the real world – or creating entirely synthetic laws of physics under which to operate. The latter is a much more difficult task, so developers tend to prefer the former. When collecting accurate data upon which to base a model you will find the following four laws in effect.

First Law. You can never get all of the data you need.

No matter what level of detail you are building, a complete set of data has not been collected, organized, and catalogued to meet your needs. The people who have collected data that is available usually did so for a study, model, or game they were focused on. As a result, their data never covers all of the aspects of your virtual world.

Second Law. You cannot use all of the data you can get.

The First Law of Data is not an indication that data on any subject is scarce. In fact, the world is awash in a sea of data. However, much of this information is overlapping and contradictory. Much of it also presents aspects of the problem that are of no interest to you. This often makes it impossible to combine the data from a number of sources into a single complete description. Of course, there is always that bucket of data that is of no use to anyone at all.

Third Law. You always have to collect some of your own data.


The first two laws make it very clear that you are going to have to do some data collection yourself. This may involve measuring or observing the process of cutting a path through the forest, assaulting an embassy with a team of soldiers, or walking horses across a muddy bog. Hopefully, once you have the information, you will share it with the world, pushing back the boundaries of the unmeasured and uncatalogued.

Fourth Law. You always have to synthesize data to meet the needs of your model.

As willing as you may be to collect data on the behaviour of a system, there is some data that is impossible to come by. It is unlikely that you will conduct experiments to discover the thickest wall that can be penetrated by a karate kick, measure the survival time of human flesh in a pool of lava, or find out how far a pig can free fall without becoming bacon.

Every model or game contains data that is synthesized by the model developers. This may be based on principles of physics, extrapolations of experiments, informed speculation, or pure fantasy. No model can get by without the creative bravery of a few people willing to make a guess. In some circles this process is frowned upon and its practice is covered with the most arcane scientific explanations. But, in truth, it is an honest part of the business and should be accepted as such.
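One common, honest form of this synthesis is interpolating between the few measurements that do exist. The sketch below is a hypothetical illustration (the measurements are invented): a missing value is synthesized by linear interpolation between the two nearest measured points.

```python
# Hypothetical sparse measurements: (speed in kph, fuel burn per km).
# Only a few points were ever measured; the model needs values in between.
measured = [(20, 0.50), (40, 0.35), (80, 0.45)]

def synthesize(speed, points):
    """Synthesize a value by linear interpolation between measured points."""
    points = sorted(points)
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= speed <= x1:
            t = (speed - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("speed outside measured range -- extrapolation needed")

print(synthesize(30, measured))  # halfway between 0.50 and 0.35
```

The raised error for out-of-range requests is deliberate: extrapolating beyond the data is a bigger guess than interpolating within it, and the model developer should make that guess consciously rather than by accident.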

Conclusion

This paper has presented lessons learned from years of excitement, productivity, suffering, and stagnation. The author and those consulted have learned about success and failure the hard way – through experience. But we have also learned from the experience of others. It is our hope that you will benefit from these lessons and spend more time on successful projects, abandon failures as soon as possible, refuse to follow incompetent leaders, and create greater products than we have.

Trademarks

The Golden Rule of Modelling and The 10 Commandments of Modelling are trademarks of Roger Smith.

Dilbert is a trademark of United Features Syndicate. Quake and Wolfenstein are trademarks of id Software.


Monte Carlo Simulation

The Monte Carlo method was invented by scientists working on the atomic bomb in the 1940s, who named it for the city in Monaco famed for its casinos and games of chance. Its core idea is to use random samples of parameters or inputs to explore the behaviour of a complex system or process. The scientists faced physics problems, such as models of neutron diffusion that were too complex for an analytical solution -- so they had to be evaluated numerically. They had access to one of the earliest computers -- MANIAC -- but their models involved so many dimensions that exhaustive numerical evaluation was prohibitively slow. Monte Carlo simulation proved to be surprisingly effective at finding solutions to these problems. Since that time, Monte Carlo methods have been applied to an incredibly diverse range of problems in science, engineering, and finance -- and business applications in virtually every industry.
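The core idea – using random samples to evaluate a quantity that is hard to compute analytically – can be shown with the textbook example of estimating π. This sketch is only an illustration of the method, not one of the original neutron-diffusion calculations: random points are thrown into the unit square, and the fraction landing inside the quarter circle approaches π/4.

```python
import random

def estimate_pi(trials, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    inside = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / trials

print(estimate_pi(100_000))  # close to 3.1416, improving as trials grow
```

The same pattern – sample random inputs, tally the outcomes – scales to the many-dimensional problems described below, where exhaustive evaluation is hopeless but a few thousand random samples give a usable answer.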

Why Should I Use Monte Carlo Simulation?

Whenever you need to make an estimate, forecast or decision where there is significant uncertainty, you'd be well advised to consider Monte Carlo simulation -- if you don't, your estimates or forecasts could be way off the mark, with adverse consequences for your decisions! Dr. Sam Savage, a noted authority on simulation and other quantitative methods, says "Many people, when faced with an uncertainty ... succumb to the temptation of replacing the uncertain number in question with a single average value. I call this the flaw of averages, and it is a fallacy as fundamental as the belief that the earth is flat."

Most business activities, plans and processes are too complex for an analytical solution -- just like the physics problems of the 1940s. But you can build a spreadsheet model that lets you evaluate your plan numerically -- you can change numbers, ask 'what if' and see the results. This is straightforward if you have just one or two parameters to explore. But many business situations involve uncertainty in many dimensions -- for example, variable market demand, unknown plans of competitors, uncertainty in costs, and many others -- just like the physics problems in the 1940s. If your situation sounds like this, you may find that the Monte Carlo method is surprisingly effective for you as well.

What Knowledge Do I Need to Use It?

To use Monte Carlo simulation, you must be able to build a quantitative model of your business activity, plan or process. One of the easiest and most popular ways to do this is to create a spreadsheet model using Microsoft Excel -- and use Frontline Systems' Risk Solver as a simulation tool. Other ways include writing code in a programming language such as Visual Basic, C++, C# or Java -- with Frontline's Solver Platform SDK -- or using a special-purpose simulation modelling language. You'll also need to learn (or review) the basics of probability and statistics. To deal with uncertainties in your model, you'll replace certain fixed numbers -- for example in spreadsheet cells -- with functions that draw random samples from probability distributions. And to analyze the results of a simulation run, you'll use statistics such as the mean, standard deviation, and percentiles, as well as charts and graphs. Fortunately, there are great software tools (like ours!) to help you do this, backed by technical support and assistance.
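The spreadsheet tools mentioned above automate this workflow, but it can be sketched in plain code as well. In the hypothetical model below (the demand and cost distributions are invented for illustration), fixed numbers are replaced with draws from probability distributions, and the simulation runs are then summarized with the mean, standard deviation, and a percentile.

```python
import random
import statistics

rng = random.Random(42)  # seeded so the sketch is reproducible

def simulate_profit():
    # Uncertain inputs replace single fixed numbers.
    demand = rng.gauss(1000, 200)      # units sold, normally distributed
    unit_cost = rng.uniform(4.0, 6.0)  # cost per unit, uniform
    price = 9.0                        # a genuinely fixed number
    return demand * (price - unit_cost)

runs = sorted(simulate_profit() for _ in range(10_000))

print("mean   :", round(statistics.mean(runs)))
print("stdev  :", round(statistics.stdev(runs)))
print("5th pct:", round(runs[len(runs) // 20]))  # a simple percentile read-off
```

The 5th percentile is often the most useful number here: it answers "how bad could this plausibly get?", which no single-point estimate can.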

How Will This Help Me in My Work or Career?

If your success depends on making good forecasts or managing activities that involve uncertainty, you can benefit in a big way from learning to use Monte Carlo simulation. By doing so, you can:

Avoid the Trap of the Flaw of Averages. As Dr. Sam Savage warns, "Plans based on average assumptions will be wrong on average."

If you've ever found that projects came in later than you expected, losses were greater than you estimated as "worst case," or forecasts based on averages have gone awry -- you stand to benefit!
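The flaw of averages is easy to demonstrate numerically: when the model is nonlinear, the outcome at the average input is not the average outcome. In the hypothetical sketch below, a plant can sell at most its capacity, so a plan built on average demand systematically overstates the average profit.

```python
import random

rng = random.Random(1)
CAPACITY = 1000  # units the plant can actually deliver

def profit(demand):
    # Sales are capped at capacity -- a simple but decisive nonlinearity.
    return min(demand, CAPACITY) * 10.0

# Plan built on the single average demand figure of 1000 units:
plan = profit(1000)

# Average outcome over uncertain demand with the same mean of 1000:
sims = [profit(rng.gauss(1000, 300)) for _ in range(20_000)]
actual = sum(sims) / len(sims)

print(plan, round(actual))  # the plan is systematically optimistic
```

The asymmetry is the whole story: demand above 1000 cannot raise profit past the cap, while demand below 1000 always lowers it, so the average outcome falls below the outcome at average demand.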

Go Beyond the Limits of 'What If' Analysis. A conventional spreadsheet model can take you only so far. If you've created models with best case, worst case and average case scenarios, only to find that the actual outcome was very different, you need Monte Carlo simulation! By exploring thousands of combinations for your 'what-if' factors and analyzing the full range of possible outcomes, you can get much more accurate results, with only a little extra work.

Know What Factors Really Matter. Tools such as Frontline's Risk Solver enable you to quickly identify the high-impact factors in your model, using sensitivity analysis across thousands of Monte Carlo trials. It could take you hours to identify these factors using ordinary 'what if' analysis.
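Tools such as Risk Solver automate this sensitivity analysis, but the idea can be approximated in plain code: record each uncertain input alongside the output across many trials, then see which input correlates most strongly with the output. In the hypothetical sketch below (the inputs are invented), demand is deliberately made the high-impact factor and cost the low-impact one.

```python
import random
import statistics

rng = random.Random(7)

demand, cost, out = [], [], []
for _ in range(5000):
    d = rng.gauss(1000, 200)   # wide distribution: high-impact input
    c = rng.uniform(4.9, 5.1)  # deliberately narrow: low-impact input
    demand.append(d)
    cost.append(c)
    out.append(d * (9.0 - c))  # the model output for this trial

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

print("demand vs profit:", round(pearson(demand, out), 2))  # near 1.0
print("cost   vs profit:", round(pearson(cost, out), 2))    # small magnitude
```

Ranking the factors by the magnitude of their correlation immediately shows where data-collection and modelling effort should be spent – exactly the question hours of manual 'what if' probing tries to answer.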

Give Yourself a Competitive Advantage. If you're negotiating a deal, or simply competing in the marketplace, having a realistic idea of the probability of different outcomes -- when your opponent or competitor does not -- can enable you to strike a better bargain, choose the price that yields the most profit, or benefit in other ways.

Be Better Prepared for Executive Decisions. The higher you go in an organization, the more you'll find yourself dealing with uncertainty. Simulation or risk analysis might not be essential for routine day-to-day, low-value decisions -- but you'll find it invaluable as you deal with higher-level, more strategic -- and higher-stakes -- decisions.

References

Hughes, Wayne P. Editor. 1997. Military Modelling for Decision Making, Third Edition. Military Operations Research Society, Alexandria, Virginia.

Law, Averill and Kelton, W. David. 1991. Simulation Modelling and Analysis. McGraw Hill. New York, NY.

Smith, Roger. 1998. Military Simulation Techniques & Technology. 3-day Course Notebook. http://www.magicnet.net/~smithr/mstt
