
    Journal of Industrial Technology Volume 19, Number 1 November 2002 to January 2003 www.nait.org

    KEYWORD SEARCH

    The Official Electronic Publication of the National Association of Industrial Technology www.nait.org

    2002

A Method for Evaluating Expert System Shells for Classroom Instruction

    By Dr. MD Salim, Mr. Alvaro Villavicencio, & Dr. Marc A. Timmerman

    Volume 19, Number 1 - November 2002 to January 2003

    Refereed Article

Administration
CAM
Computer Science
Information Technology
Management


A Method for Evaluating Expert System Shells for Classroom Instruction
By Dr. MD Salim, Mr. Alvaro Villavicencio, & Dr. Marc A. Timmerman

Abstract
Contemporary Industrial Technology pedagogy includes the subject of Artificial Intelligence. Artificial Intelligence problems are often solved by students using commercial software packages called Expert System Shells. This paper describes the use of a method called Function Point Analysis for the evaluation of Expert System Shells for suitability in Industrial Technology pedagogy. Two types of Function Point Analysis are described in this paper: a direct method, in which the Expert System Shell software itself is evaluated, and an indirect method, in which only the specification of the software is evaluated. The indirect method does not require the evaluator to have an actual copy of the software being evaluated, but it gives less useful results than the direct method, which does require that the evaluator have a copy of the software. Both methods are simple and straightforward and can be used successfully by Industrial Technology educators with no specialized background in Information Technology or Software Engineering.

Introduction
An Expert System is a computer program that can perform tasks using logic patterns similar to those used by a human expert. Expert systems can be constructed from scratch using standard programming languages like C/C++, Cobol, or other general-purpose computer languages. This is a laborious and lengthy process due to the complexity of the programs (Jackson, 1999). Expert System Shells are commercial software packages used to implement Artificial Intelligence projects in business, industry, government, and in academic programs training students for such activities. Using an Expert System Shell significantly reduces the time and labor of creating Artificial Intelligence programs by eliminating the need to program in a general-purpose programming language. A typical example of an Expert System program is a Decision Support System. A Decision Support System is a program that makes informed decisions based on human-like reasoning rules by evaluating various forms of data. A common application of a Decision Support System is fault identification: a program that helps unskilled workers to identify (and to fix!) the cause of equipment malfunctions by prompting the workers to make certain measurements and observations. The Decision Support System is programmed with the rules or knowledge of an expert worker in the maintenance of a particular type of equipment. The system helps an unskilled worker to fix a broken piece of equipment by using this program, equipped with the expert worker's knowledge and experience, without actually requiring the expert worker's presence. This type of Artificial Intelligence application allows a company to make the best possible use of its personnel resources and is of great economic interest in many sectors of business and industry.

Expert System Shell packages vary greatly in cost, performance, and friendliness to the end user. They play an important role in the $250 million North American Artificial Intelligence software market. The National Academy Council (1999) has reported that over 12,000 Artificial Intelligence systems based on Expert System Shells are in use.

MD Salim holds a B.S. in Civil Engineering from the Bangladesh Institute of Technology, an M.S. in Construction Engineering from the University of Leeds (UK), and a Ph.D. in Civil Engineering from North Carolina State University. He has served in numerous industrial positions and on the faculty of the University of Northern Iowa. His research interests and publications are in the applications of artificial intelligence to construction management.

Mr. Alvaro Villavicencio holds a B.S. in Industrial Engineering with a minor in Computer Sciences from the Catholic University of Chile, and an M.A. degree in Industrial Technology from the University of Northern Iowa. He is a Doctor of Industrial Technology candidate at the University of Northern Iowa. He served as a faculty member at the University of Tarapacá, Arica, Chile, worked as a consultant in Management Information Systems in different institutions in Chile, and also as a Staff Engineer in an electric distribution company in Chile. His research interests are in the areas of Artificial Intelligence applications in Manufacturing and Construction, Expert Systems development, system integration, Robotics, and Automated Manufacturing.

Dr. Marc A. Timmerman holds a B.S.E.E. from Santa Clara University, an M.Eng.E.E. from the Rensselaer Polytechnic Institute, and a Ph.D. from the George W. Woodruff School of Mechanical Engineering at the Georgia Institute of Technology. He has served on the faculties of the University of Tulsa, Louisiana State University at Baton Rouge, the University of Arkansas at Little Rock, the University of Northern Iowa, and the Oregon Institute of Technology. His research publications are in the areas of embedded microprocessors and DSP, Mechatronics, vibrations and magnetic bearings, optimization theory and artificial intelligence, and supply chain management.


Purpose
An Industrial Technology educator always wants to select a commercial Expert System Shell package that is suitable for instruction. The instructor would like to first qualitatively group the available shells as to complexity. Next, the instructor would like to numerically rank the acceptable shells into a list from most suitable to least suitable for instructional use. If there are ties in the ranking, or if the best shells are unavailable due to cost, networking, hardware, or licensing issues, the instructor would like to be able to use combined qualitative and quantitative data to guide his or her decision.

The indirect method (qualitative assessment) described in this paper requires the instructor to obtain a description or specification of the software package to be studied. Based on these data, the instructor enumerates the software's features in five categories. These enumerations result in numerical values that are tabulated, multiplied by weights, and further arithmetically manipulated. The final results are a set of numerical values that describe the lines of code, number of man-hours, etc., needed to write and debug a computer program in a high-level programming language to implement the features of the Expert System Shell being studied. These numbers are an indirect measure of the relative complexity of the shell being studied. For example, if Shell A yields a result that 50,000 lines of code would be needed to implement Shell A's function, and Shell B yields a result that 20,000 lines of code would be needed to implement Shell B's function, then the instructor would infer that Shell A was more complex than Shell B. The instructor can use these numbers to qualitatively rank the complexity of the shells being studied. In general, the complexity of the shell gives an indication of the power, number of features, or problem-solving strength of the shell.

The direct method (quantitative assessment) described in this paper requires the instructor to obtain a copy of the software to be evaluated and to perform a simple benchmark problem on this software. Based on his or her experience with the software, the instructor answers a set of nineteen questions about the software. The questions are quantitative and are based on a 0 (very false) to 5 (very true) numerical scale. After some arithmetic manipulation of these numbers, a single numerical factor results, called the Satisfaction Level, that ranges from 0 (least satisfied user) to 5 (most satisfied user) and can be used to rank the software packages being studied as to their likelihood of being satisfactory to the end user, the student.

It should be noted that the direct and indirect methods can also be used independently. For example, if a small number of shells are to be tested and sample versions of all these shells are available, then the direct method can be used immediately. Likewise, if there are no sample versions available of the shells to be tested, the indirect method can be used independently.

Review Of Literature
As noted by Stylianou, Madey, and Smith (1992), the selection of an adequate Expert System Shell is often the difference between a successful and an unsuccessful industrial application. This observation is just as applicable to instructional uses of Expert System Shells as it is to industrial uses of such software packages.

Industrial Technology classes in Computer Integrated Manufacturing, Construction Management, Industrial Maintenance and Supervision, Occupational Health and Safety, and other areas often include computer laboratory exercises or projects based on implementing Decision Support Systems or some other type of Artificial Intelligence application. These applications are often implemented on Expert System Shells. Some examples of such applications include the works of Koo and Aw (1992) and Roschke (1994), who have described applications of Decision Support Systems to construction management problems. El-Najdawi and Stylianou (1993) and Larsson (1996) have described the application of Decision Support Systems to problems in manufacturing and plant maintenance. Larsson (1996) describes a Decision Support System that uses an Expert System Shell to implement industrial maintenance on sensor systems in a manufacturing setting. Tait and Mehta (1997) give a detailed description of a Decision Support System called WORKBOOK that implements a complete chemical exposure analysis for industrial workplace chemical safety compliance. Xenakis (1990) gives a number of examples of the application of Expert System Shells to managerial/supervisory tasks.

According to two articles by Van Name and Catchings (1990), due to the large number of Expert System Shell packages on the market (and the lack of any standardized nomenclature for describing the features and capabilities of such products), selecting an appropriate package for an application can be very difficult even for experienced users. Head (1992) gives the sage advice to those contemplating the selection of an Expert System Shell to "think big" but to "start small." Current product reviews of Expert System Shells in the software trade literature are of very limited use, as they assume that the reader is already familiar with the current version of the shell, and the author or authors only provide details of changes and improvements to that shell's latest version. In order to get details of the basic structure of a shell, it is often necessary to go to archival sources like Pallatto (1991), Rasmus (1989), Coleman (1989), and Shepherd (1988), who respectively describe the basic functions of first-generation Expert System Shell packages from IBM, Nexpert, Xi-Plus, and Texas Instruments.

Unfortunately, other than an excellent article by Moffitt (1994), there is not much literature on the selection of Expert System Shells from a purely pedagogical perspective. Nevertheless, Stylianou, Madey, and Smith (1992) identify a total of eight (8) categories of criteria for evaluating Expert System Shells: End User Interface, Developer Interface, System Interface, Inference Engine, Knowledge Base, Data Interface, Cost, and Vendor-Related aspects.

From an industrial and business user perspective, Plant and Salinas (1994) and Dudman (1993) have presented a benchmark-based framework for the evaluation of Expert System Shell packages. Their general concept is that a standard benchmark problem is formulated and run on various shells, and the results are compared for time to execute, correctness of results, ease of use, and so forth. Unfortunately, the outcome of this assessment is very dependent on the specific benchmark problem chosen, and this method is thus very limited in its usefulness. Murrell and Plant (1997) and Antoniou, van Harmelen, and Plant (1998) have published surveys of more sophisticated methods for software assessment that attempt to avoid this limitation of the benchmark method. Such methods include the constructive cost model of Boehm (1981) and the method of Putnam and Meyers (1992). Putnam and Meyers champion the concept of "excellence measures" that deal quantitatively with issues like maintainability and user friendliness of software packages. Both of these methods are based on using the size of the program, the labor involved in writing the program, and other such measures of complexity and effort to assess the software's general usability and overall quality. These methods, though simple to implement, do not give a good idea of the suitability of a software package for a particular use, but do give a general view of the software's overall complexity and friendliness. Wentworth, Knaus, and Aougab (1999) and Pressman (1997) have presented approaches based on statistical concepts that give good estimates of the likelihood that a software package will be powerful and accurate, i.e., give good answers. Unfortunately, these methods do not give much insight as to the difficulty or friendliness of the steps leading to that answer. Finally, Banyahia (1996) gives an excellent summary of these issues in a format useful to Industrial Technology education professionals. As can be seen from these references, the key problem is finding a single method that assesses both the power/accuracy and the user friendliness of an Expert System Shell.

Function Point Analysis is a methodology for assessing a software package that can measure both the accuracy and the user friendliness of an Expert System Shell. The Function Point Analysis method is based on evaluating specific features of the software package on a numerical scale, multiplying these numbers by weights that assign the relative importance of each feature, and formulating a numerical figure of merit for the package as a whole by adding these weight-and-evaluation products. A final scaling factor is often used to bring the answer into an easy-to-interpret range of 0 to 5 points or 0 to 10 points, with 0 representing poor satisfaction. According to Cheung, Willis, and Milne (1999), the Function Point Analysis method originated with Albrecht in 1979 and has become very popular among Information Technology specialists. There is a large literature on methods to apply Function Point Analysis to particular types of problems in software assessment. Such literature includes work by Robillard (1996) and Jeffery and Low (1993). A paper by Furey (1997) aptly summarizes the benefits of the Function Point Analysis approach:

Although they are not a perfect metric, they have proven to be very useful. Among the desirable features of function points are technology independence, consistency and repeatability, data normalization and estimating and comparing. (p. 28)

The methodology presented in this study is based on Function Point Analysis. The work of Boehm (1981) and Putnam and Meyers (1992) is used to augment the Function Point Analysis method with other advanced concepts.

Function point analysis has become a standard, mature technique widely used by industrial practitioners. Biderman (1990) mentions that an international society dedicated to function point analysis, the International Function Point Users Group (IFPUG), was founded in 1986 and had enrolled over 200 members. As function point analysis is a mature technique, there are relatively few journal articles about function point analysis in the current experimental literature. A recent article by Cheung, Willis, and Milne (1999) describes many ongoing industrial applications of function point analysis. Another recent article by Orr and Reeves (2000) describes a pedagogical framework for teaching function point analysis in a Business program.

Methodology
Two types of evaluation data can be generated for an Expert System Shell package: a direct method and an indirect method.1 The first type of data is the Satisfaction Level, which is a direct measure of the overall user satisfaction with the shell. For example, if a sample copy or demo copy of the Expert System Shell software is available, the evaluator will run an example problem. Based on his or her experience with the software, the evaluator can calculate a numerical Satisfaction Level for the shell. Many vendors of Expert System Shells do provide free demo or sample limited-use versions of their products to allow potential buyers to perform this type of study.

Direct Method Methodology
Table 1 presents the direct method test instrument. This instrument is completed by the evaluator as follows:

(1) The evaluator obtains demonstration or sample copies of the software packages to be evaluated.

(2) The evaluator selects a benchmark problem, for example a textbook example problem, and runs this problem on each of the software packages.

1 The methods presented in this study are specific applications of very general methods described in a seminal text by Pressman (1997). Readers interested in the scientific/statistical bases of these methods are referred to this most excellent general treatment of this subject matter.


(3) After running the benchmark problem, the evaluator responds to the 19 questions in the instrument and estimates a quantitative answer to each question on a 0 to 5 scale, with 5 being very true and 0 being very false.

(4) Each numerical result is multiplied by a weighting factor as given in the weight column.

(5) The weighted values are summed and then divided by 26 (of the 19 questions, 7 have a weight factor of 2 and 12 have a weight factor of 1, i.e., 7x2 + 12x1 = 26) to give a result in the numerical range of 0 to 5.
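The arithmetic of steps (3) through (5) can be sketched as a short Python calculation. The 19 question texts and the assignment of the seven weight-2 questions come from Table 1 and are not reproduced here; the scores used below are purely illustrative.

```python
# Sketch of the direct-method Satisfaction Level calculation (Table 1).
# Which 7 of the 19 questions carry weight 2 is defined by Table 1;
# the uniform weight layout and scores below are illustrative only.

def satisfaction_level(scores, weights):
    """scores: 19 answers on a 0-5 scale; weights: 19 factors (1 or 2)."""
    if len(scores) != 19 or len(weights) != 19:
        raise ValueError("The instrument has exactly 19 questions.")
    if sum(weights) != 26:  # 7 x 2 + 12 x 1 = 26, per step (5)
        raise ValueError("Expected 7 weight-2 and 12 weight-1 questions.")
    # Weighted sum divided by 26 maps the result back onto a 0-5 scale.
    return sum(s * w for s, w in zip(scores, weights)) / 26

# Example: a uniformly "very true" evaluation yields the maximum score.
weights = [2] * 7 + [1] * 12
print(satisfaction_level([5] * 19, weights))  # -> 5.0
```

Dividing by the sum of the weights (26) rather than the question count is what keeps the Satisfaction Level on the same 0 to 5 scale as the individual answers.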

Three complete numerical examples of the direct method and a paradigm for interpreting the direct method result are presented in the Findings section.

Indirect Method Methodology
The second method is based on estimating the resources needed to write a program using general computer languages to implement an Expert System. This type of calculation is based on finding answers to the following question: "If a user wrote a program in some computer language to solve this problem instead of using a ready-made Expert System Shell, how long would this program be, how many person-months would it take to write this program, how many errors might this program have, and how many test cases might the user need to run to debug the program?" This question assumes that LISP or PROLOG, two common languages used for Artificial Intelligence programs, are being used for this hypothetical equivalent computer program.

Table 2 presents the indirect method test instrument.2 This instrument is used as follows:

(1) The evaluator obtains specifications or other descriptive materials for the software packages to be evaluated.

(2) The evaluator enumerates the number of Internal Knowledge Structures, External Data, Input Decisions, Decisions Supported at Output, and User Inquiries supported in the software packages.

(3) The evaluator then estimates the complexity level of each of these features as Low, Medium, or High, and selects a numerical weight based on this complexity.

(4) The evaluator multiplies the enumerated values in step 2 by the weights in step 3 and sums the weighted values. This number

2 The methodology of this instrument is adapted from Pressman (1997). Pressman's results are based on statistical analyses of numerous software projects and curve-fitting techniques used to derive simple algebraic approximations. The weighting factors in this instrument come from these statistical analyses.

    Table 1. Direct measurement test instrument.


is referred to as the Function Point (singular).

(5) Finally, by division (/), multiplication (x), and exponentiation (^) of this Function Point (singular) numerical value, the evaluator calculates other Function Points (plural) that are the end result of the indirect method:

(a) Equivalent Lines of Code Needed to Solve the Same Problem = (Function Points) x 64

(b) Number of Test Cases Needed to Debug Equivalent Code = (Function Points) ^ 1.2

(c) Number of Probable Defects/Bugs in Equivalent Code = (Function Points) ^ 1.25

(d) Number of Months to Write Equivalent Code = (Function Points) ^ 0.4

(e) Number of Persons Needed to Write Equivalent Code = (Function Points) / 150

(6) One value requires a further calculation: the number of Person-Months needed is the product of the numerical values of the number of persons and the number of months:

Total Person-Months for Equivalent Code Project = Number of Months x Number of Persons
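The formulas of steps (5) and (6) can be collected into a short Python sketch. The Function Point total itself must be produced by steps (1) through (4) using Table 2; the value 100 used in the example below is purely illustrative.

```python
# Sketch of the indirect-method arithmetic from steps (5) and (6).
# The input Function Point total comes from the Table 2 enumeration
# and complexity weights; 100 below is an illustrative value only.

def indirect_estimates(function_points):
    """Derive the equivalent-project estimates from a Function Point total."""
    fp = function_points
    months = fp ** 0.4        # (d) months to write equivalent code
    persons = fp / 150        # (e) persons needed to write equivalent code
    return {
        "lines_of_code": fp * 64,           # (a)
        "test_cases": fp ** 1.2,            # (b)
        "probable_defects": fp ** 1.25,     # (c)
        "months": months,                   # (d)
        "persons": persons,                 # (e)
        "person_months": months * persons,  # step (6)
    }

est = indirect_estimates(100)
print(est["lines_of_code"])  # -> 6400
```

Note that all six outputs are deterministic functions of the single Function Point total, which is why the indirect method ranks shells by relative complexity rather than by user satisfaction.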

These results of the indirect method do not have a simple interpretation. These numbers estimate the scope of an equivalent LISP/PROLOG programming project with the same functionality as the Expert System Shell being evaluated. There is no direct interpretation of these numbers as an end user's potential satisfaction, but these numbers can be used to rank and compare the potential complexity of an Expert System Shell product in the situation where a version of the product is not available for evaluation. Typically, the assumption is made that a more complex package has more features or functions and is therefore more powerful. The Findings section illustrates a possible approach to interpreting these data.

Some Practical Issues with this Methodology
If a very large number of software packages are to be evaluated, the less labor-intensive indirect method can be used for a preliminary ranking of the software packages, and the more time-intensive direct method can be used to refine the selection process once a subset of packages has been identified for a further and more detailed study.

These methods are based on standard techniques with which all Information Technology managers are familiar. In situations where software purchases require administrative approval, studies of this nature can be very helpful in obtaining that approval from Information Technology managers.

A further use of the indirect method is in estimating the amount of lecture and lab time needed to cover the basic functions of the Expert System Shell. In the situation where the Expert System Shell will be one of several elements covered in a class, there may be a strong advantage in favoring a simpler product over a more complex product.

Findings
Test Cases Presented
To illustrate the use of this paradigm, a complete instructional usability analysis of three common Expert System Shell products is presented:

(1) MP2 Professional version 5.0 from Datastream Corporation of Greenville, South Carolina, USA. Table 3 is a direct analysis; Table 4 is an indirect analysis.

(2) EXSYS CORVID from Albuquerque, New Mexico, USA. Table 5 is a direct analysis; Table 6 is an indirect analysis.

(3) Art * Enterprise from Mindbox of Greenbrae, California, USA. Table 7 is a direct analysis; Table 8 is an indirect analysis.

    Table 2. Indirect measurement test instrument.


In assessing the Expert System Shells, it is important to keep in mind that the complexity of using a shell is strongly influenced by the complexity of the problem the shell is being used to solve. It would not be meaningful to compare Shell A solving a simple problem to Shell B solving a complicated problem. The shells should be evaluated consistently on a single benchmark problem to ensure that the features needed to solve non-trivial, complex problems are being uniformly evaluated. In this study, a simple problem of scheduling part flow in a machine shop was used as the benchmark problem.

The direct method presents an unexpected problem in that the demo versions of Expert System Shell packages commonly made available by vendors are often deeply scaled-down versions that are missing many of the features of the full commercial version of the software package. Care must be taken to carefully research the features of the full commercial software package and perform a consistent and meaningful analysis.

Interpretation of the Data of the Test Cases
The italic numerals in Tables 3 through 8 represent values entered by the person evaluating the shells. Table 9 is a summary of the results of Tables 3 through 8. The values are rounded for ease of comparison. The direct method calculations shown in Tables 3, 5, and 7 give an estimation of end-user Satisfaction Level. The indirect calculations in Tables 4, 6, and 8 give estimates of the relative complexity of the packages. Table 9 presents the direct and indirect results for comparison purposes.

The evaluator must use some thought and insight to interpret the results. The results seem to group the shells into two categories. The MP2 shell has a high satisfaction number, at 4.54, but also has a high code line count, at 15,616. The EXSYS CORVID and Art * Enterprise shells seem to group together, as they have similar satisfaction numbers, at 4.35 and 4.31, and much lower code line numbers, at 6,272 and 9,472. The data would suggest that there is a trade-off between the complexity of a program, the ease of use of a program, and the power of a program. Figure 1 depicts a graphical interpretation of these data. Based on the direct results, all three packages show the potential to satisfy the user. However, the MP2 package is a much more complex software package than the Art * Enterprise package or the EXSYS CORVID package. This would suggest that MP2 is both easy to use and has many features, while Art * Enterprise and EXSYS CORVID are also easy to use but have fewer features. Based on this interpretation of the data, the instructor might make the following ranking:

(1) Purchase MP2 as the most desirable package.

(2) Purchase either Art * Enterprise or EXSYS CORVID as a second choice. As both of these packages have similar direct and indirect rankings, the choice of which package to purchase should be based on cost.
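This ranking step can be mechanized as a small Python sketch, using the rounded values quoted above from Table 9. The sort key (Satisfaction Level first, then the equivalent-code estimate as a proxy for feature richness) is one reasonable reading of the interpretation above, not a rule prescribed by the method.

```python
# Ranking sketch using the rounded Table 9 values quoted in the text:
# (shell name, direct Satisfaction Level, indirect equivalent lines of code).
# The sort key reflects the interpretation above and is illustrative only.
shells = [
    ("MP2", 4.54, 15616),
    ("EXSYS CORVID", 4.35, 6272),
    ("Art * Enterprise", 4.31, 9472),
]

# Prefer higher satisfaction; break near-ties by the complexity estimate.
ranked = sorted(shells, key=lambda s: (s[1], s[2]), reverse=True)
for name, satisfaction, loc in ranked:
    print(f"{name}: satisfaction {satisfaction}, ~{loc} equivalent lines")
```

As the prose above notes, a mechanical sort cannot capture the cost and licensing considerations that decide between the two closely ranked packages; the evaluator's judgment remains the final step.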

    Table 3. MP2 Results. Direct measurement technique.


    Table 4. MP2 Results. Indirect measurement technique.

    Table 5. EXSYS CORVID example. Direct measurement technique.

Implications In The Research
Expert System Shell software assessment studies have appeared in many computer-related journals and trade publications. Articles in these publications tend to focus on one or more specific packages available at that time and give one an overall impression of the shell, but provide no definite information on performance (Stylianou, Madey, and Smith, 1992). Assessment methods that give a useful measure of the user complexity and computational power of such software packages are also described in great detail in the literature. There is, unfortunately, no single method that gives a measure of both the power and the complexity of a software package as a unified measurement. A further limitation is that these methods in the literature do not do a good job of narrowing this evaluation to a specific


    Table 6. EXSYS CORVID example. Indirect measurement technique.

    Table 7. Art * Enterprise example. Direct measurement technique.

application (like instructional use) of these software packages.

The method presented in this paper is a specialization of the popular Function Point Method for software assessment, applied to the particular problem of selecting an Expert System Shell for Industrial Technology pedagogical use. The direct form of this method gives a simple numerical indication of the likely satisfaction level of the software's end users, the students. Although this method is based on a subjective evaluation technique, the results do provide a useful scale for selecting Expert System Shells based on a common characteristic. The indirect form of this method is useful for estimating the power/complexity of Expert System Shell packages. This indirect technique does not require access to the actual software package and is less labor intensive than the direct technique. Together, the direct and indirect techniques allow for an estimate of both the power/complexity and the user satisfaction/friendliness


    Table 8. Art * Enterprise example. Indirect measurement technique.

    Table 9. Results for all three cases presented.

    Figure 1. Interpretation of Direct and Indirect Results.


of Expert System Shells. For an Expert System Shell package to be pedagogically useful, it must be both powerful and friendly.

Debate, discussion, and literature on this important pedagogical problem is lacking, and it is hoped that this paradigm, as imperfect as it is, will spark an interest in study and scholarly discourse on this very practical problem in the Industrial Technology community. Artificial Intelligence and Decision Support Systems are rapidly entering the mainstream of industrial practice and as such will continue to be of interest to those responsible for planning and implementing the Industrial Technology curriculum.

References
Albrecht, A. (1979). Measuring applications development productivity. Proceedings of the Joint Share/Guide/IBM Application Development Symposium, 83-92.

    Antoniou, G., van Harmelen, F., &Plant, R. T. (1998, Fall). Verificationand validation of knowledge-basedsystems: Report on two 1997 events.AI Magazine, 19(3), 123-126.

    Biderman, B. (1990). Using functionPoints. Computing Canada, 16(4),30-32.

Boehm, B. (1981). Software Engineering Economics. Englewood Cliffs, NJ: Prentice Hall.

Banyahia, H. (1996, Spring). Costs and productivity estimation in computer engineering economics. Engineering Economist, 41(3), 229-242.

Cheung, Y., Willis, R., & Milne, B. (1999). Software benchmarks using function point analysis. Benchmarking: An International Journal, 6(3), 269-279.

Coleman, T. (1989, March 29). Expert-Ease. PC User, 94-96.

Furey, S. (1997, March-April). Why we should use function points. IEEE Software, 14(2), 28-30.

    Dudman, J. (1993, May 13). Measureof success. Computer Weekly, 40-41.

El-Najdawi, M. K., & Stylianou, A. C. (1993, December). Expert support systems: Integrating AI technologies. Communications of the ACM, 36, 54-65.

Head, R. V. (1992, April 13). Planning an expert system? Think big, but start small. Government Computer News, 11(8), 23-24.

Koo, T. K., & Aw, Y. B. (1992, May). Using expert systems to manage professional survey practices. Journal of Surveying Engineering, 118(2), 43-63.

Jackson, P. (1999). Introduction to Expert Systems. Harlow, England: Addison-Wesley.

Jeffery, D. R., & Low, G. C. (1993, May). A comparison of function point counting techniques. IEEE Transactions on Software Engineering, 19(5), 529-533.

Larsson, J. E. (1996, January). Diagnosis based on explicit means-end models. Artificial Intelligence, 80, 29-93.

Moffitt, K. (1994, May-June). An analysis of the pedagogical effects of expert system use in the classroom. Decision Sciences, 25(3), 445-461.

Murrell, S., & Plant, R. T. (1997, December). A survey of tools for the validation and verification of knowledge-based systems: 1985-1995. Decision Support Systems, 21, 302-323.

National Research Council. (1999). Funding a Revolution. Washington, D.C.: National Academy Press.

Orr, G., & Reeves, T. E. (2000). Function point counting: One program's experience. The Journal of Systems and Software, 53, 239-244.

Pallatto, J. (1991, July 29). Expert tools take the spotlight; IBM, AICorp stress graphical interfaces, ease of use. PC Week, 8(30), 55-56.

Plant, R. T., & Salinas, J. P. (1994, August). Expert systems shell benchmarks: The missing comparison factor. Information and Management, 27, 89-101.

Pressman, R. (1997). Software Engineering: A Practitioner's Approach. New York, NY: McGraw-Hill.

Putnam, L., & Myers, W. (1992). Measures for Excellence. Englewood Cliffs, NJ: Prentice Hall / Yourdon Press.

Rasmus, D. (1989, September). The expert is in. MacUser, 5(9), 136-147.

Robillard, P. N. (1996, December). Function points analysis: An empirical study of its measurement processes. IEEE Transactions on Software Engineering, 22(12), 895-901.

Roschke, P. N. (1994, September-October). Validation of knowledge-based system with multiple bridge rail experts. Journal of Transportation Engineering, 120, 787-806.

Schmuller, J. (1999). Sams Teach Yourself UML in 24 Hours. Indianapolis, IN: Sams Publishing.

Shepherd, S. J. (1988, July). Sophisticated expert. PC Tech Journal, 6(7), 106-117.

Stylianou, A. C., Madey, G. R., & Smith, R. D. (1992). Selection criteria for expert system shells: A socio-technical framework. Communications of the ACM, 35(10), 30-48.

Tait, K., & Mehta, M. (1997, August). Validation of the workplace exposure assessment expert system WORKBOOK. American Industrial Hygiene Association Journal, 58, 592-602.

Van Name, M. L., & Catchings, B. (1990, March 7). Choices abound in expert-system shells. PC Week, 7(9), 77-79.

Van Name, M. L., & Catchings, B. (1990, March 7). Expert-system shell users advise forethought. PC Week, 7(9), 78-79.

Wentworth, J., Knaus, R., & Aougab, H. (1999). FHWA Handbook: Verification, Validation, and Evaluation of Expert Systems. McLean, VA: Turner-Fairbank Highway Research Center.

Xenakis, J. J. (1990, September 3). Real use of AI: Expert systems find widespread use in mainstream applications - finally. InformationWeek, 30-32.