Book Reviews

Java in a Nutshell: A Desktop Quick Reference (2nd ed.). David Flanagan. Cambridge, MA: O’Reilly; 1997: 610 pp. Price: $19.95. (ISBN 1-56592-262-X.)

The Java programming language has become a major force in making the World Wide Web more dynamic. The number of Java books available on the market today runs into the hundreds, and no one book can cover all the intricacies and possibilities that Java offers, and still be a single volume at a reasonable price. Java in a Nutshell, however, comes close.

David Flanagan, a computer programming consultant and user interface designer with a degree in computer science from the Massachusetts Institute of Technology, has written an exceptionally detailed, lucid primer and reference manual for Java programmers. This second edition covers version 1.1 of the Java language and API. As of the writing of this review, version 1.2 was still in beta.

This clearly is a book for the practicing programmer, as would be expected of a programming reference. The novice unfamiliar with programming terminology would likely be lost after Chapter 1. The material is concise, and the choice of terminology precise. The reader is expected to already be familiar with either C or C++. While knowledge of object-oriented programming is not assumed, such experience is helpful when approaching this book.

Examples are kept to a minimum in this edition, and the reader will frequently find one example highlighting several new concepts. As is discussed in the preface, the examples in this second edition were, for the most part, removed and placed with further material in a separate book, Java Examples in a Nutshell (Cambridge, MA: O’Reilly; 1997), by the same author. Readers interested in a tutorial should consult that book.

Part I introduces Java with three chapters describing how to get started, the differences between Java and C, and a discussion of classes and objects. All the basics are covered, from name spaces—packages, classes, and members—to exception handling. The tables in Chapter 2 on primitive data types and operators are especially well laid out. The similarities and differences with C++ are discussed, but are not as well organized as the C comparison material. Other books on the market deal with this comparison in more depth, and readers are referred to these if they wish a more detailed, point-by-point comparison of Java with C++.

Part II, the shortest section of the book, contains two chapters on the new features of version 1.1. Chapter 4 presents an overview of the new application programming interfaces (API), including object serialization, internationalization, reflection, Java beans, JDBC (Java DataBase Connectivity), RMI (Remote Method Invocation), and security. In addition, the new event model, applet changes, new JDK (Java Development Kit) utilities, and deprecated features are also discussed. Chapter 5 explains the new language features, such as the inner classes—member, local, and anonymous—as well as nested top-level classes, anonymous arrays, and class literals. Examples highlighting the differences between the two versions are provided.
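
For readers who have not yet met these 1.1 constructs, the short sketch below (not taken from the book; the class and method names are purely illustrative) shows an anonymous inner class, an anonymous array, and a class literal in one small program.

    public class Java11Features {
        public static void main(String[] args) {
            // Anonymous inner class: an unnamed class defined and
            // instantiated in a single expression (introduced in Java 1.1).
            Runnable greeter = new Runnable() {
                public void run() {
                    System.out.println("hello from an anonymous class");
                }
            };
            greeter.run();

            // Anonymous array: an array created and passed in one expression.
            printAll(new String[] { "applet", "bean", "servlet" });

            // Class literal: the Class object for a type, written Type.class.
            Class c = String.class;
            System.out.println(c.getName());
        }

        static void printAll(String[] words) {
            for (int i = 0; i < words.length; i++) {
                System.out.println(words[i]);
            }
        }
    }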

Part III contains seven chapters which provide programming examples of applets, events, new AWT (Abstract Windows Toolkit) features, object serialization, beans, internationalization, and reflection. The majority of examples are new to this edition and can be downloaded from http://www.oreilly.com/catalog/javanut2/examples/. Readers can run, modify, and experiment with these examples to their heart’s content. In addition, many of the examples from the first edition, which were removed from the printed text of the second, can also be found at the website. For a programmer new to Java, these working examples are a wonderful resource.

Part III also contains a number of useful tables such as the AWT components and the events they generate, function key constants, event types and their listener interfaces and methods, and bean naming patterns and conventions. Some of this material is reprinted in the reference sections of the book, but most is unique in form and layout.

Part IV begins the reference section of the book. Chapters 13 through 16 cover Java syntax, system properties, Java-related HTML tags, and the JDK tools. The syntax is displayed in table and list form and covers data types, escape sequences, operators, modifiers, reserved words, and documentation comments. System properties and the <APPLET> tag are handled in a similar fashion. Chapter 16 provides a synopsis, description, availability, and options list for each JDK tool—appletviewer, jar, java, javac, javadoc, javah, javakey, javap, jdb, native2ascii, and serialver. Depending on the tool, information on commands, properties, security, and environment is also presented.

Part V—the API Quick Reference—is the largest section, containing Chapters 17 through 32. This is the heart of the book, and it packs a lot of information into a small space. A four-page guide describes how to find and read a reference entry. API packages are listed alphabetically and begin with an overview of the package. A hierarchy diagram lists the classes and interfaces in the package and their relationships to each other and to other packages. These visual diagrams are a wonderful tool for quickly assessing the placement, and therefore inheritance, of a particular class or interface within a package.

Classes and interfaces are listed alphabetically within a given package. Each entry includes the name, availability, description, synopsis, and cross-references of the given class/interface. Each synopsis contains descriptions of class modifiers, names, superclasses, interfaces, member information, parameters, exceptions, and inheritance. The cross-references describe, among other things, hierarchies, extensions, implementations, and exceptions.

Chapter 32 is a comprehensive, alphabetical index of classes, methods, and fields. Users can locate a particular class or interface and determine which package defines it, or a particular method or field and determine which class or classes define it. The concise nature of this index, however, does have a drawback. Since no pagination is given, the user either must flip to the subject index to get the relevant pages, or flip through the reference section, keeping an eye on the headers at the bottom of the pages for the relevant package/class. Both are time-consuming.

Unfortunately, not all of the Java APIs are included in this volume—a definite drawback, and one acknowledged by the author in the preface. Those APIs dealing with database connectivity, remote method invocation, and security are to be printed in a separate volume. For comprehensive information on all the Java class libraries, one should consider purchasing a complete reference set, such as Patrick Chan and Rosanna Lee’s The Java Class Libraries, Second Edition, Volumes 1 and 2 (Reading, MA: Addison-Wesley; 1998).

The subject index is comprehensive but short on cross-references. This is particularly true of abbreviations, which are used throughout the index. For example, information on the Abstract Windowing Toolkit is found under AWT, not “abstract.” For the experienced programmer, this is a time-saving feature; however, it would have been useful to include the full phrase, in parentheses, after all abbreviations, instead of just some. An ordering key would also have been useful. For instance, KEY_ACTION is filed as “key action,” which naturally comes before “keyboard,” and “java.beans” comes before “javac,” which comes before “java.math.” In one case, the underscore functions as a space; in the other, the period is ignored.
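
A rough way to see the two filing rules at work is the small sketch below (the sortKey helper and its rules are inferred from the examples above, not taken from the book): treating the underscore as a space and dropping the period reproduces the ordering the index appears to use.

    import java.util.Arrays;
    import java.util.Comparator;

    public class IndexOrderDemo {
        // Hypothetical sort key: underscores file as spaces, periods are
        // ignored, and case is ignored.
        static String sortKey(String entry) {
            return entry.toLowerCase().replace('_', ' ').replace(".", "");
        }

        public static void main(String[] args) {
            String[] entries = { "keyboard", "KEY_ACTION", "javac", "java.math", "java.beans" };
            Arrays.sort(entries, new Comparator<String>() {
                public int compare(String a, String b) {
                    return sortKey(a).compareTo(sortKey(b));
                }
            });
            // Prints [java.beans, javac, java.math, KEY_ACTION, keyboard]
            System.out.println(Arrays.toString(entries));
        }
    }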

An errata list, as well as further information regarding the book, can be found at O’Reilly’s website: http://www.oreilly.com/catalog/javanut2/. There are some 24 corrections listed, the vast majority of which are minor spelling and/or grammatical errors. For a book of this size and complexity, this is a remarkably small list.

A note about the binding is in order here. A good reference book, when opened to any page, should lie flat on a desktop, to allow easy reading while hands are occupied with typing/input. This book, unfortunately, does not do this, and you will need some sort of weight to keep it open, which is a definite annoyance.

Mr. Flanagan succeeds in producing this quick, desktop reference. If one had to have just one book, this would be an excellent choice, and at under $20.00, it is one of the best investments any Java programmer can make.

Michael R. Leach
Physics Research Library
Harvard University
17 Oxford Street
Cambridge, MA 02138
E-mail: [email protected]

Readings in Agents. Michael N. Huhns, Munindar P. Singh, eds. San Francisco, CA: Morgan Kaufmann; 1998: 523 pp. Price: $49.95. (ISBN 1-55860-495-2.)

Today’s society is experiencing major transformations in the way that we work and play. One is hard-pressed not to be impacted by rapid technological change. One area that has been greatly influenced by this change is that of information systems. This environment is filled with heterogeneous and dynamic forms of information sources that seem to exist only to frustrate those who search for relevant information. There exists a common “need for mechanisms for advertising, finding, fusing, using, presenting, managing, and updating information” (p. 1). This need can be best met by the use of interfaces that are flexible, adaptive, mediative, and above all personal—in other words, agents.

Michael Huhns and Munindar Singh draw upon their many years of involvement in Distributed Artificial Intelligence and Multiagent Systems (DAI/MAS) to compile this “singular and informative collection of some of the best technical material to date in the research area of intelligent agents” (p. v). Choosing from a broad range of sources, the authors present 51 articles, all written in the 1990s, divided into three sections, strangely called “chapters.” These articles offer a wide variety of perspectives which “gives readers a truer flavor of agents than a narrow, ideologically loaded analysis” (p. xi). The authors feel that the “included articles [will] provide the essential background and perspective needed to understand and appreciate any other article on agents” (p. xi).

The opening chapter called “Agents and Multiagent Systems: Themes, Approaches, and Challenges,” by Huhns and Singh, provides the reader with a thorough introduction to agent research. There is no one agreed-upon definition for or view of an agent. Some people see agents as possessors of some type of human essence—consciousness, feelings, emotions, perceptions, and cognition. Others regard agents as nothing more than programmed automatons. The authors feel that the truth lies somewhere in between these viewpoints. Huhns and Singh have synthesized the core elements of opinion into the definition that agents “are active, persistent, (software) components that perceive, reason, act, and communicate” (p. 1).

The authors then proceed to explore agent taxonomy. Five detailed tables present agent characteristics, be they intrinsic or extrinsic, and characteristics for systems, frameworks, and environmental-agent relations. Other agent properties described include autonomy (absolute, social, interface, execution, or design), intelligence, rationality, and construction (procedural vs. declarative).

After discussing agent applications and architectures, the authors provide a detailed overview of agent infrastructures which provide ways for agents to communicate, to be understood, and to move about. The authors note that communication and understanding can be optimized by common ontologies and communication and/or interaction protocols. Some of the many acronyms that abound in this realm are explained.

The authors next explore the world of agency. Having elected to narrow the focus of their review, Huhns and Singh present five models of agency. The first, rational–logical agency, involves qualitative concepts like consistency and/or suitability of actions based on “beliefs.” The second model is rational–economic agency, which involves the maximization of preferences with actions. Social agency, the third model, deals with cooperation and/or commitments. The last two models are interactive agency and adaptive agency.

The conclusion to this chapter is a look at the three most important directions the authors believe agent application can take. Agent-based software engineering may allow programmers to produce software that “exploit[s] modularity and reuse of code” (p. 18). Interaction-oriented programming, which is a “class of languages, techniques, and tools” (p. 1), may help one to optimize the construction of “inherently superior” multiagent systems. Adaptive agents may assist in the building of effective systems within open, information-rich environments.

Due to the myriad ways agents can be applied, the authors have elected to include in Chapter 2 (“Applications”) only those articles that “typify the wide variety of agents in operation . . . represent milestones in the development of agent technology . . . [and] introduce new advances in agent technology” (p. 25). I will be referring to individual articles by the last name(s) of the author(s).

The first part of this chapter covers the use of agents in enterprises. Enterprises entail many diverse, heterogeneous, and often isolated systems that need to be coordinated globally. Chaib-draa looks at distributed artificial intelligence (DAI), includes an excellent DAI tutorial, and shows how these entities can positively impact industrial systems. Huhns et al. explain how local autonomous agents can enable reintegration and management of information found in physically separate locations. Cutkosky et al. incorporate excellent reader’s aids in their description of the Palo Alto Collaborative Testbed (PACT) and its significance in concurrent engineering. Petrie addresses the use of a “general-purpose constraint reasoning and conflict resolution functionality” (p. 25) as a means to unite heterogeneous agents in a federation. Sandholm and Lesser address various issues that impact the use of bundled rational self-interested (BRSI) agents in automated negotiation systems.


Access to information within open environments, especially the Internet, has opened up new areas for agent use. Etzioni and Weld explain how the Internet Softbot provides an effective interface to the Internet. Arens et al. report on the creation of an information mediator that processes (or translates) domain-level queries into efficient query access plans. Kuokka and Harada describe a “matchmaking” facilitation agent that allows providers to seek specific customers, and vice versa. Durfee et al. explore the use of agents in a university digital library.

As the sources and forms of information proliferate and become increasingly complex, user interfaces have begun to migrate away from impassive to active modes. Personal assistants are agents that interact dynamically with users. They tend to exhibit the following characteristics: Multimodal, dialogue based, mixed-initiative, anthropoid, cooperative, and adaptive. Lashkari et al. describe collaborative interface agents and their use in filtering electronic mail. Rich and Sidner argue that when autonomous agents interact with humans, these agents should collaborate in the same manner as humans. Kautz et al. present a “bottom-up” approach to designing software agents: One first identifies useful and feasible tasks that an agent can perform. Amant and Cohen offer a system that allows users to perform statistical exploratory data analysis (EDA).

The next section of articles covers emerging applications for agents. Hayes-Roth et al. investigate multiagent collaboration in directed improvisation—the “simultaneous invention and performance of a new “work” under the constraints of user-specified directions” (p. 141). Cassell et al. describe how their system automatically animates conversations between multiple human-like agents with synchronized and “appropriate” speech, intonation, facial expressions, and hand gestures. Stone and Lester present an approach to dynamically sequencing an animated pedagogical agent. Liu and Sycara explore multiagent coordination in an environment of tightly coupled tasks combined with real-time scheduling and execution. Ishizaki describes a multiagent model of dynamic design for the visual designer.

Chapter 3 (“Architectures and Infrastructure”) begins with articles that look at “organizing frameworks within which agents can be designed and constructed” (p. 181), and in which agents can interact. The article by Wiederhold is an excellent introduction to mediators and information processing. Cohen et al. describe an open agent architecture that uses a multimodal interface. Bayardo et al. explain how their agent-based system integrates information found in an open and dynamic environment. Fischer et al. describe a BDI (belief, goal, and intentions) architecture. Bates et al. present an architecture that allows agents with broad capabilities (like reactivity, goals, emotions, and/or social behavior) to act realistically within a highly engaging simulated world.

A section of articles covering communications and knowledge sharing is next. Labrou and Finin address Knowledge Query and Manipulation Language (KQML) as it relates to agent communication. Patil et al. offer a way for “researchers to develop new systems by selecting components from a library of reusable modules and assembling them together” (p. 243). Dowell et al. show how a domain–knowledge ontology can be used as a semantic gateway between different information resources.

The third section of Chapter 3 deals with aspects of mobile agents or distributed computing. Johansen et al. discuss an operating system that supports mobile agents. Chess et al. describe a framework that promotes secure, remote applications in large, public networks that utilize itinerant agents. Rus et al. present information on transportable information agents, these being autonomous programs. Borenstein describes a uniform extension language that “enables” mail, or, in other words, significantly increases the computational power and utility of electronic mail systems. Sirbu discusses ways to facilitate electronic payments over the Internet. Reiter briefly describes security and trust over a distributed system.

The final chapter of this book (“Models of Agency”) presents articles that cover the most significant ways to make agents appear rational, social, communicative, and/or adaptive. The first part of this section explores models of rationality. Rao and Georgeff offer a fairly technical “alternative possible-worlds formalism for BDI-architectures” (p. 317). Shoham describes a computational “paradigm” or framework called agent-oriented programming (AOP). Rosenschein and Zlotkin discuss automated negotiation and the usefulness of using decision theory and game theory in the design. Wellman offers a computational market model that will support distributed design collaboration. Fenster et al. show how automated agents can use focal points as viable means of coordination.

Articles dealing with social agency follow next. Gasser explores social conceptions of knowledge and action within DAI and open information systems semantics (OISS). Hewitt and Inman discuss the development of the “Actor” model and the challenges “Intelligent Agents” pose for DAI. Sichman et al. technically describe a social reasoning mechanism based upon dependence networks that is designed to be a part of an agent’s internal environment. Tokoro explains why there is a need for autonomous agents. Wooldridge and Jennings present a preliminary model that formalizes the cooperative problem solving process.

Interactive agency is the focus of the next section of articles. Haddadi presents a pragmatic means to specify the reasoning processes behind communication that facilitates cooperation between agents. Decker and Lesser describe the development of a “domain independent coordination scheduling approach” (p. 450). Singh offers a formal semantic structure for speech acts. Lux and Steiner discuss in fairly technical terms how agents cooperate from the agent’s perspective.

The last section of this book includes articles on adaptive agency. Weiss addresses the issue of agents learning to coordinate actions in reactive multiagent systems. Tan investigates whether cooperative agents perform better in multiagent reinforcement learning than independent agents that do not communicate with each other. Littman et al. describe a way for an agent to navigate within a partially observable environment. Tambe et al. identify the problem of adaptive agent tracking in a real-world multiagent domain, and suggest a solution based upon discrimination-based learning. Sen et al. conclude this book by showing how reinforcement learning techniques can allow agents to coordinate without sharing information.

I believe that the authors have done an admirable job putting together this collection of articles. This book is a great source for software practitioners, graduate students, senior researchers, and professors and their students. The authors are warranted in their intent that this book be “used as a supplementary text for graduate and advanced undergraduate courses” (p. xi). The text is a vital link between print and electronic resources, many of the latter easily accessible on the Internet.

Unfortunately, there are some problems with the book’s format. The combination of being well over 500 pages and also being softbound makes this book prone to a short life span. Each article has its own font, layout, and character size. It seems as if the articles were lifted from their original sources and published as is with no alteration or editing. In fact, six articles are presented perpendicular to the rest so that the left margin is at the bottom of the page. One must turn the book 90 degrees in order to read the text—a rather awkward situation. Computer screen captures are legible for the most part, but those in the article by Ishizaki (pages 172–179) are almost useless.

Other things seem to have slipped by the editors. On page 25 the article by Liu and Sycara is listed in the section of articles on enterprises, when in fact the article is located within the section on other applications. Page 148 has the word “is” handwritten in the abstract. The article by Hewitt and Inman is noted on page 311 as fourth in the section when it should be second. The author of an article is cited on page 312 as Tambe 1995 when it should be Tambe et al. (1996). The word “if” is spelled “iff” on page 462. These things may seem minor, but they are still irritating lapses that may disturb other readers. I believe these faults are primarily due to the placing of aesthetic needs below those of producing a collection of superior readings.

The above said, I would still wholeheartedly recommend this book for academic libraries, large public libraries, and any other library or personal collection that focuses upon computer or Internet applications, artificial intelligence, or speculative technology issues.

Jeff White
Medical Group Management Association
104 Inverness Terrace East
Englewood, CO 80112-5306
E-mail: [email protected]


Guide to Finding Legal and Regulatory Information on the Internet. Yvonne J. Chandler. New York: Neal-Schuman; 1998: 516 pp. Price: $125.00. (ISBN 1-55570-306-2.)

Books containing lists of websites seem, at first blush, to be a perversion of the medium. However, a closer look at the Internet legal research books available undermines this presumption. A well-written book with good annotations can save the Internet legal researcher time and frustration, especially when she needs to venture into unknown resources on the Net.

The Guide joins several other widely available publications created for people who need to find legal information on the Internet. Two have been selected for comparison with the Guide—Legal Research on the Internet: A Simple, “How to” Guide for Courts and Attorneys by Brad Hillis (4/10/98 revision) and The Internet Guide for the Legal Researcher, Second Edition, by Don MacLeod (1997).

The Guide is an annotated list of 947 websites and individual files that contain legal information. Websites created by government agencies as well as sites created by others are included. Each site or individual file is annotated with a general description of information on the site as well as the arrangement of the resources on the site, a comparison with other sources, and highlighting of special features. Each chapter closes with a list of “Chandler’s Best Bets.” A general subject index references the website annotation, and a website index arranged alphabetically by name of the site with a reference to the annotation complete this work.

One of the features that makes this Guide unique is the wholistic approach of the author, Dr. Yvonne Chandler. Instead of assuming that the only legal materials in existence are on the Internet, she describes the print resources that are available. For example, in her chapter on judicial resources, the author describes the organization and parts of a court opinion as well as gives a quick overview of the federal court structure. Before citing the web sources for opinions of the U.S. Supreme Court, Chandler writes about the print publications and the history of the project which brought electronic copies of these opinions to the public.

This wholistic approach is laudable, but underscores an important weakness in the Guide. Who is the audience for this book? From the title and preface, I assumed the book was written to help people already familiar with legal research. The author’s organizational structure also presumes this orientation. She has grouped websites by the source of the law with judicial, legislative, and administrative types of legal materials gathered together in separate chapters. This structure automatically presupposes a reader fully knowledgeable about the structure of our legal system and forms of legal publication. While her first chapter tries to give the necessary overview for the first-time legal researcher, I think it fails this necessary task. Because of the size and complexity of this work, use of the table of contents or index by the new legal researcher is more likely to be the access methodology.

Her first chapter, “The Nature of Legal Information on the Internet,” articulates the sources of the law (a very important bit of information for new legal researchers) and also describes some limitations of the electronic resources to be found on the Web. However, the author fails to highlight the issues of authenticity, reliance, and archiving electronic information that are very real concerns of those doing legal research on the Internet. Nor does she remind her readers that knowing the currency and coverage of the material found is extremely important when using and relying on legal information sources. Again, lack of focus on an audience may be the culprit. The author may assume that experienced legal researchers know these concerns and will take them into account.

In sum, the Guide is organized for the knowledgeable legal researcher but has content that will be more useful to the first-time legal researcher. Another illustration of the focus of the content is that specialized terminology is dropped in favor of more generic language (i.e., the book refers to the West summary of the case (synopsis) as a prefatory statement; p. 46). The Preface is silent as to intended audience. The ambiguity this silence creates is obvious from the conflicting messages received from the organizational structure and content of the book.

The Guide needed at least one more good substantive edit. Sentences are often unnecessarily complicated. Textual portions of the book are sometimes redundant. In many parts of the text, the author’s message is hard to discern. Some of the annotations, especially of federal government sites, are so general as to be useless. Annotations of the websites in a book like this should at least summarize the legal and non-legal information available.

The Chandler book appears, because of its organization, to be a very comprehensive listing of legal websites. However, Chandler’s organizational structure requires duplication of websites since she lists the precise location of specific legal information at the subdirectory level. Meta sites like the Cornell Legal Information Institute or GPO Access are listed many times throughout the book. This structure is useful, however, if you want to find all the websites and files that contain, for example, U.S. Supreme Court decisions.

The section in the book that is the weakest begins on page 353, labeled “Decisions of Federal Administrative Agencies.” Given the scope of this book, I would have expected to find listings for the Merit Systems Protection Board, the Occupational Safety and Health Administration, and the many other agencies that publish administrative decisions on their websites.

Despite these negatives, does the Guide fill a niche? Comparison to two other publications—Hillis and MacLeod—about the Internet for legal researchers will help answer this question. The Brad Hillis book is really a computer database, revised several times a year. After a very short introduction to the use of the Internet for legal research, where the author claims that “legal information on the Internet is free, reliable and easy to sort through” (p. 7), Hillis goes on to list over 500 websites and files containing legal information, including official government sites as well as non-government sponsors. His organizational structure is easy to follow for an experienced legal researcher. He lists federal and then state websites. There are no annotations. Hillis includes sites to foreign countries, international organizations, court administration, law finder lists, cases about the Internet, some tips on searching the Internet, how to purchase law books online, sites containing legal forms, and lists of law library and law school websites. There is no index, but the table of contents is very straightforward.

The MacLeod book is in its second edition. Excellent chapters on communications protocols, retrieval protocols, and law-related index pages and search engines are followed by annotated and illustrated pages for 140 federal government websites and a two-page description of legal materials for each of the 50 states and the District of Columbia. MacLeod completes the book with a chapter on reference resources that includes Internet publications, FAQ archives, law libraries, weather reports, and other general potpourri.

The MacLeod book would be my choice for the experienced legal researcher but the novice Internet user. It is very readable and manages not to be overwhelming in scope (though it is not as detailed as some parts of the Guide). His annotations are excellent, and the reader will discover the scope of possibilities on various sites. The Hillis book is a quick guide to specific sites and is much more useful for the experienced Internet legal researcher. It presumes knowledge of the legal system as well as the organization and scope of the Internet. Despite its more complex organizational structure, the Guide would probably be best for the Internet-savvy person who was not very familiar with legal resources or legal research. However, the Guide needs to be much more focused on this lay audience in order to adequately fill this niche. Perhaps the next edition can address this weakness.

All of these publications about Internet sites for legal research are out of date before the new editions can be printed. However, the annotations alone can make this type of reference book worth purchasing. Even experienced Internet users might like to look up a URL and go right to the correct website! But, before librarians purchase any guide to the legal materials on the Internet, think of the audience you hope to serve.

Penny A. Hazelton
Gallagher Law Library
University of Washington School of Law
1100 NE Campus Parkway
Seattle, WA 98105
E-mail: [email protected]

References

Hillis, B. Legal research on the internet: A simple, “how to” guide for

courts and attorneys, 4/10/98 revision (Office of the Administrator for the Courts, Olympia, WA, 4/10/98).

MacLeod, D. The internet guide for the legal researcher, Second Edition (Infosources Publishing, Teaneck, NJ, 1997).

Information of the Image (2nd ed.). Allan D. Pratt. Greenwich, CT: Ablex Publishing; 1998: 120 pp. Price: $73.25. (ISBN 1-56750-346-2.)

This is the second edition of Pratt’s classic work on information and librarianship, first published in 1982. Pratt starts from the basic concept of information—What is it? Why do people seek it?—then constructs principles of librarianship based on the answers to these questions. The book is highly intellectual (e.g., a book is a “graphic record” and an expository text is an “instrumental graphic record”), but it is written in a plain and simple style suitable for graduate students in information and library science. For these readers (and for library and information scientists who have not read it), Pratt’s book will deepen their view of the profession of librarianship, providing a foundation for the principles normally associated with the “collection, preservation, organization and distribution (dissemination) of information” formula.

The book communicates to the reader a sense of idealism, an idealism about what information is in the wider scheme of things: “Our mutual informing of each other,” Pratt states, “is essential to, perhaps is the process by which human beings accomplish anything” (p. 109); and an idealism about the place of information and library science in facilitating “the ‘meeting of minds’ of individuals separated by time or space, by bringing together a graphic record and a potential user” (p. 89). Pratt’s ideas about how librarians can accomplish this role stem directly from an analysis of what communication is and what information is.

Using Kenneth Boulding’s definition of subjective knowledge as the “image” we have of the world, Pratt believes the purpose of communication is to “change another’s image” (p. 3). The “alteration of the image that occurs when it receives a message” is information. “Information is thus an event” (p. 27), or more probably a class of events, like an explosion. But like an explosion, an informative event must have the explosive substance, the detonating impulse and locale, and all in appropriate relationship to each other, so perhaps Michael Buckland’s definition of “information-as-process,” Pratt concludes, is “more accurate” (p. 28). (Though Pratt’s extensive discussion of information in terms of process in the 1982 edition, where he goes back to the Greek and Latin roots of the word, helped shape the definitional discussion leading up to Buckland’s 1991 book and article.)

From this definition of information, Pratt defines the role of the librarian as providing service that “make[s] possible that informing of the reader’s image that results from contact with another’s ideas” (p. 110). There are several implications for library service and the library itself resulting from these first principles.

In the age of the Internet, the storage and preservation of books in a library building becomes less important, but the librarian’s role of bringing clients into contact with a graphic record so that they can become informed by the record is still vitally important, and this role can be performed without the building. However, the librarian must refine his or her role of bringing a user together with a record. The librarian must make sure the record is appropriate and the coming together “successful,” rather than directing the user to sources of information without judging those sources as to ‘reliability’ and ‘suitability’ for the user’s requirements (p. 83).

In the age of the Internet, the role or mission of a particular public library should not be used as a consensus or team-building exercise; rather, the mission, goals, and objectives of the library, Pratt believes, should be the “vision” of “one person’s mind, turned into reality by that person’s skill, intelligence, hard work and determination in making it real” (p. 64). Consensus should be sought from the stakeholders after the mission, goals, and objectives have been written down. This is to insure that the coming together of user and graphic record produces, or has a greater chance of producing, a successful informing encounter.

The services librarians provide should add value to the activity of bringing a user together with an informing graphic record, which means giving the client what he or she needs, not wants—i.e., the goal should not be trying to please the client (p. 106). This may mean summarizing the subject area and highlighting the major issues for the client, and the delivery of materials directly to the patron (p. 84).

The client should not be forced to learn how to use the library. An objection to providing the client with value-added service is that it prevents the client from learning how to use the library, which is a fundamental principle of many libraries. Pratt asks: Why should the client know how to use the library? They do not want to, and it is not really useful that they do so. That is why so many bibliographic instruction programs fail.

One of the most interesting points Pratt raises, in this regard, is that a client being taught how to use the library, while not wanting to be taught how to use the library because he or she is preoccupied with an information need, militates against the librarian’s primary role of bringing the user together with an informing graphic record. Pratt quotes an ex-librarian, now an administrator, who says: “By doing so [going to the library and becoming involved in the mechanics of learning to make the library work] I will break up my day and my thought pattern” (p. 86). There are several levels to this comment which go to the heart of Pratt’s first-principle definition of information and the role of the library to bring the user together with a (potentially) informing graphic record.

This short, exceptionally well-conceived book has been entirely updated from the first edition, with large sections devoted to newly published research (e.g., recent work by Michael Buckland on definitions of information, and Guy Marco on library management issues). In fact, the nature of the book has changed from the first edition. It is now much more concerned with library-related issues. What the two editions share, however, is Pratt’s marvelous early chapters on the nature of information, and the poetic, but not sentimental, sensibility of the writer.

Charles Cole
Communication Studies
Concordia University
7141 Sherbrooke St. West, BR-111
Montreal, Quebec
Canada H4B 1R6
E-mail: [email protected]

User and Task Analysis for Interface Design. JoAnn T. Hackos and Janice C. Redish. New York: Wiley; 1998: 488 pp. Price: $49.99. (ISBN 0-471-17831-4.)

Hackos and Redish tackle the often overlooked, pre-design activities of user and task analysis in their outstanding text, User and Task Analysis for Interface Design. In this practical volume, the authors provide rationale for these methods, discuss the details of preparing and conducting site visits, and outline the issues that bridge the gap between analysis and design. Peppered throughout the text are real-world examples borrowed from the many years of experience the authors share in doing user interface design, documentation, and task analysis. User and Task Analysis for Interface Design is an excellent discussion of prerequisite design activities and is a suitable introduction for the novice as well as the experienced user interface design practitioner.

While the book is geared toward user interface designers and human factors specialists, other disciplines can benefit, as well. For example, there is a chapter on user and task analysis for documentation and training that discusses such activities for use by technical writers and other information designers. Likewise, those working in marketing, product management, or other fields concerned with achieving a good fit between humans and technology can benefit from the tools and methods described in this volume. Throughout the book, numerous case studies and real-world design situations are presented to help illustrate the use of user and task analysis methods. This approach makes the book an enjoyable and effective presentation of concepts that can benefit anyone interested in improving the design of a system.

Although the book is targeted to the practitioner, researchers and academics may find the ample references to the extant literature useful. Those doing research in user interface design methods or instructional design will note thorough citations to both current and classic references. In addition to references, each chapter concludes with pointers to related books and articles that cover the topics in more depth.

The use of task analysis as a precursor to doing design work has long been a mainstay of a traditional human factors engineer’s tool kit. Task analysis dates back to World War II as a method for assessing fighter pilot workload. Hackos and Redish have advanced the topic, however, by broadening its scope and describing its use in many diverse situations: “In addition to applying task analysis to both work and leisure, we also consider as types of task analysis all levels of granularity and detail from people’s goals to the lists of tasks they do to the specifics of the actions they take and the decisions they make.”

The authors recognize that in a messy world, analyzing and documenting human behavior is a tremendous challenge. For this reason, the book emphasizes the site visit as the ultimate tool for making sense out of a complex world. Their methods reflect the need to make the complex understandable and useful for the design of systems. Unfortunately, there is no single method that will enable one to transform a complicated user interaction into a lucent understanding of user needs for interface design. For example, in a chapter entitled “Conducting the Site Visit: Honing Your Observation Skills,” the authors describe the difficulties in determining a user’s goal. In using a flowchart software package, a user could describe the goal as drawing boxes on the screen (low level) or creating an organization chart (high level). The authors present questions to ask of users that help the observer understand the appropriate granularity of the goal, as well as an unstated goal.

Despite the wealth of useful information, the volume has a few shortcomings. For example, the emphasis on doing site visits does little for the designer with limited resources. Today’s competitive marketplace has shortened the traditional design cycle, thereby eliminating the time required to prepare for a site visit, conduct it, and analyze the findings. Although the authors present useful counterarguments to the objections management might make against doing site visits, little advice is given to those who remain without the necessary resources.

While collecting highly useful information in a series of task analyses, the authors avoid the topic of using that information as a basis for design science. For example, the knowledge learned from one project’s user and task analyses could be used to inform other design projects, and a series of studies could culminate in a body of knowledge. Similarly, the authors use scant theory to explain the development or application of their methods.

Despite these few shortcomings, User and Task Analysis for Interface Design is an outstanding presentation of an often neglected topic. One could argue that if thorough and adequate user and task analysis is not performed, the foundation on which a design is built may be flawed. This volume provides practical examples, comprehensive discussion of important issues when doing site visits and design, and sound advice for making the connection between analysis and good design.

Jonathan Kies
Philips Consumer Communications
91 New England Ave.
Piscataway, NJ 08854-4142
E-mail: [email protected]

Research Misconduct: Issues, Implications, and Strategies. Ellen Altman and Peter Hernon, eds. Greenwich, CT: Ablex; 1997: 204 pp. Price: $73.25. (ISBN 1-56750-340-3.)

When one speaks of research misconduct, one does not mean errors due to good intentions mixed with human fallibility. Scientific misconduct entails “fabrication [making up data or results], falsification [changing data or results], or plagiarism [using the ideas or words of another person without giving appropriate credit] in proposing, performing, or reporting research” (p. 2).

The purpose of this book is to address issues that have not been dealt with before. These issues include: The impact of fraudulent publications upon libraries and the bibliographic process of abstracting, indexing, and subsequent retrieval; the use of tainted works in teaching and learning; and librarians’ responses (or lack thereof) to research misconduct.

The contributors and editors of this text are well qualified to comment on misconduct and library services. Ellen Altman has been a visiting professor, faculty member, and graduate library school director. She is currently the Feature Editor of Public Libraries, Co-Editor of the Journal of Academic Librarianship, and a member of Library Quarterly’s editorial board. Philip J. Calvert is a senior lecturer at Victoria University of Wellington, and a past editor of New Zealand Libraries. Peter Hernon is a professor at Simmons College and is Editor-in-Chief of the Journal of Academic Librarianship, Editor of Government Information Quarterly, and Co-Editor of Library & Information Science Research. Laura R. Walters is the Head of Collections at the Arts and Sciences Library of Tufts University.

Ellen Altman provides an excellent introduction with the first chapter of this book, entitled “Scientific and Research Misconduct.” Statistics from the Office of the Inspector General (OIG) at the National Science Foundation (NSF) show 675 allegations of scientific misconduct reported between June 1992 and October 1995. Audits conducted by the Food and Drug Administration (FDA), the only federal agency that audits research reports, discovered that 211 or 11% of the clinical drug trials performed between 1977 and 1988 had serious deficiencies, with half involving possible misconduct. A survey by the Acadia Institute of 118 graduate school deans in 1988 shows 40% of the deans received reports of research misconduct. Twenty percent of the investigated allegations were verified as being true, with the greatest percentage of verified cases coming from those universities that received the most funding dollars.

Altman uses a great analogy to illustrate why one should be concerned with misconduct. Scientific progress is like a brick wall in which every brick is a contribution from research. Each brick finds its support upon another brick. The mortar that keeps the edifice together is the trust one has that others’ research is reliable. Remove the trust and the wall could collapse.

There are “three mechanisms [that] supposedly ensure that only quality research gets funded and/or published in the scientific literature: peer review, refereeing, and replication of the research” (p. 12). These “safeguards” are based on the tenet that science is self-correcting. Replication, the best way to protect against false research, is rarely done, partly due to funding, insufficient research reporting, and the difficulty of easily reproducing the studies. Peer review and refereeing rely upon the knowledge of those performing these functions with the work being described, a less than foolproof situation. When one combines slight governing agency oversight of research, the present environment that fosters the publish or perish motif, and the rare punishment of those proven to be guilty of misconduct, you have a situation waiting to be exploited.

Even if you can detect and expose research misconduct, you still have a major problem: “Bogus research may be forgotten, its perpetrators disgraced or dead, but tainted writings endure” (p. 29). Chapter 2, “Misconduct and the Scholarly Literature,” by Peter Hernon, addresses the issue of how people are notified about falsified, distorted, and blatantly incorrect research.

Any theory promoted by a person needs to be accepted by his/her peers before it will become part of the knowledge base of his/her specialty. The first step in this process of consensus is publication in a prestigious professional journal. Along with publication, research findings need to be easily retrieved in order to be disseminated. Information retrieval depends upon the elements of abstracting and indexing.

It is erroneously assumed that prestigious journals utilize demanding screening processes that only allow articles that are true and reliable to be published (p. 37). The fact is that professional journals must rely upon their editors and referees to select only the best manuscripts for publication. Yet the refereeing process itself has received much criticism, due in part to the lack of an agreed-upon definition for the term referee. Altman observes that “just as democracy does not guarantee good government, peer review does not guarantee reliable research” (p. 38).

If erroneous information happens to get published, there is a way to neutralize its effects. Retraction is the process used to "cleanse" the literature of published works that have been found to be faulty. Unfortunately, this process has some major problems of its own. There is no clear consensus on the format, content, or heading labels used in a retraction. In fact, some journals will not publish a retraction for fear of being sued for slander. In the end, even if an article has been retracted, the printed version still continues to exist.

Peter Hernon and Laura R. Walters present research conducted in 1996 at Tufts University in Chapter 3, entitled "Student and Faculty Perceptions about Misconduct: A Case Study." Faculty rarely wondered whether the information they found in the library might have been tampered with. If they did question the research, they were most likely to ask a colleague before checking out the author's reputation. Faculty also tended to believe that students already possess the skills of critical thinking. Faculty were not clear, however, on what students should do if they raised questions about the material they were using. One respondent cautioned that "we should regard every published paper as an advertisement" (p. 62). Students also exhibited a tendency to feel that the sources they find in the library are truthful. If the students did question the material, they went first to the instructor before checking out the author's reputation.

Chapter 4, "Research Misconduct as Viewed from Multiple Perspectives," by Peter Hernon and Philip J. Calvert, presents responses to the submission of a fraudulent paper for publication. Research was conducted utilizing focus groups at all seven New Zealand universities.

Observations from university librarians are quite illuminating. Librarians felt that those who wrote for scholarly publications were honest and possessed integrity. Awareness of the potential problems of research misconduct and the literature that existed about this issue was low. Once made aware of the problem, librarians felt that they must respond. They felt that bibliographic instruction (BI) was their best tool in helping the reader beware. Librarians tended to be split into two camps on the issue of withdrawing fraudulent material. One side favored withdrawal, and saw it as a valid component of collection management. The other side saw withdrawal as a form of censorship, for when a work reaches the public domain, access to it should not be restricted.

Academe also expressed some interesting thoughts. Misconduct was "tacitly condoned in many university departments ('it is a part of the game') because it apparently increases their measured outputs" (p. 79). It was also stated that the "cost-benefit equation of large societal controls to catch a few dishonest scholars is so poor that it is unacceptable" (p. 80). There is also a strong tendency to avoid confrontation, especially that involving possible litigation. In fact, if tenured professors were involved in misconduct, they would probably not lose their jobs over it.

Peter Hernon presents accepted misperceptions about misconduct, and some recommendations on how to deal with it, in Chapter 5, entitled "Misconduct: Coping with the Problem." Among the many accepted misperceptions are: misconduct is not a problem; science is self-correcting; omission is more of a problem than commission; students have learned how to evaluate information before entering college; librarians are aware of and actively dealing with the problem; and librarians monitor and alert users about retracted works. Recommendations include auditing the research, reevaluation of peer-review processes, and information literacy instruction.

This discussion leads directly into Chapter 6, "Implications of Misconduct for Bibliographic Instruction," by Laura R. Walters. It is within this chapter that I feel a clear picture is presented of what librarians must do to effectively combat misconduct: take an active role in fostering information literacy skills. We "have a responsibility to educate students to know that resources are not uniformly valuable or relevant, and that students need to maintain skepticism and suspend judgment until they have assessed the text" (p. 103). Depending on the setting and your preference, the word "students" can be replaced with users, clients, customers, or patrons.

Information literacy is more than just being able to locate information. One must also be able to exercise "higher-order critical thinking skills, such as understanding and evaluating information" (p. 102). To this must be added a healthy dose of skepticism.

Bibliographic instruction is one way librarians can strengthen the information literacy of users. BI should involve teaching users how to evaluate information by using both internal and external validation sources. Internal validation involves looking at the author's stated purpose, biases, data gathering and analysis techniques, documentation, and sources. External validation entails a close look at the publication and publisher, author qualifications, and the critical reception of the work.

In order to effectively confront research misconduct, librarians need to accept that they have a very important role to play. They need to make a concerted effort to let library users know that misleading information may exist in the library collection. Those involved in BI need to teach users that internal validation, though highly important, cannot guarantee that fraudulent information will be detected. Fraud is best uncovered (if possible) by using external validation tools like citation indexes.

Ellen Altman offers her concluding thoughts and suggestions in Chapter 7, called "The Implications of Research Misconduct for Libraries and Librarians." Library users assume the information they access and read to be correct, especially that found in a library. Librarians tend to promote freedom of speech and usually do not restrict access to divergent viewpoints. Based on these observations, users believe libraries to be impartial repositories of information. It may never occur to library users that, by default, they "are responsible for judging both the accuracy and the relevance of any and all information obtained from the library" (p. 114). In fact, "most libraries . . . do not make it clear to customers that . . . the library and its staff take no responsibility for the accuracy of any information in the collection" (p. 115).

Altman offers a crucial observation. The impartiality of librarians may be a perfectly valid response to opinions and controversial social issues. But does impartiality work with matters of fact? Even the American Library Association "has not seriously discussed or considered the matter of factual correctness, especially as it relates to scientific rather than social, religious, or political topics" (p. 115). Are libraries merely impartial conduits between sources and users? Do libraries offer only choices, not answers? Altman asks:

Once a library accepts a question or a request for help in finding information about a topic, it seems that the library also assumes the responsibility for providing the best possible answer or strategy for locating appropriate sources. Otherwise, why mislead customers by offering personal reference assistance. (p. 116)

The chapter ends with ways for librarians to succeed in the future. In addition to accepting more responsibility for providing accurate information, librarians need to cultivate a sense of being a filter between users and an ever-multiplying amount of information. The author puts it best:

By definition, librarians are supposed to be experts in finding information. By implication, the information that experts find should be current, complete, and as accurate as possible by a reasonable standard. This may not be a new role, but it could be one to take into the next millennium. (p. 123)

The final chapter, "Misconduct: Maintaining the Public Trust," by Peter Hernon, promotes a definition of research misconduct different from the one mentioned earlier. The Commission on Research Integrity defines misconduct as "significant misbehavior that improperly appropriates intellectual property or contributions of others, that intentionally impedes the progress of research, or that risks corrupting the scientific record or compromising the integrity of scientific practices" (p. 130). This definition focuses on misappropriation, interference, and misrepresentation, terms that include and expand upon falsification, fabrication, and plagiarism. Utilization of this definition may increase one's ability not only to actively deal with misconduct, but also to "maintain and heighten the public trust" (p. 132).

The last 72 pages of this book include seven appendices, an extensive list of references, two indexes (one for names, the other for subjects), and notes about the contributors. The appendices cover: some publicly discussed misconduct cases with citations to the fraudulent literature; a list of some journals and monographs in which those who have been implicated in misconduct have been published; the survey instruments for faculty and students used in the case study discussed in Chapter 3; the fraudulent paper and survey instrument used for the Chapter 4 case study; a list of sources useful in finding out about research misconduct; and citations for professional codes of ethics and guidelines for authors.

I believe that this book should be in every library—be it academic, special, or public. It is a well-written and documented investigation into a little-known, yet critical, issue. The only problem with this book may be its price. If a paperback edition is a viable alternative, then Ablex has one for $39.50 (ISBN 1-56750-341-1).

Jeff White
Medical Group Management Association
104 Inverness Terrace East
Englewood, CO 80112-5306
E-mail: [email protected]

Books, Bricks & Bytes: Libraries in the Twenty-First Century. Stephen R. Graubard and Paul LeClerc, eds. New Brunswick, NJ: Transaction Publishers; 1998: 361 pp. Price: $24.95. (ISBN 1-56000-986-1.)

Libraries need to constantly guard against the danger of becoming static entities, making it essential that they evolve, as all institutions should, to meet the demands of those they serve. Libraries must also constantly adapt to meet the ever-expanding changes brought about by the technological revolution if they are to survive and if they are to continue to be the gatekeepers to knowledge, culture, and the printed word. What will be the function of libraries in the next millennium? Will the essential tasks of librarianship—to collect, categorize, and disseminate information and knowledge—change radically in the years to come? Will libraries cease to exist altogether, as has been predicted by some doomsayers? What will be the impact of the "digital library" upon "traditional" libraries? These questions deserve close examination by librarians, scholars and, indeed, the general public. Books, Bricks & Bytes, a volume of 19 essays, explores these questions as well as other issues surrounding the destiny of libraries and information dissemination in the next millennium, and provides an examination of the future of books and the publishing industry.

Books, Bricks & Bytes was originally published in the fall of 1996 as a special theme issue of Daedalus—the prestigious journal of the American Academy of Arts and Sciences—to commemorate and celebrate the centennial of the New York Public Library. In its new monographic form, it has been reprinted in its entirety without further updates or additions, except for the inclusion of a brief introduction written by one of the editors, Paul LeClerc. Like many edited volumes, the essays vary in content, style, and purpose. A few of the essays are prosaic or expository in nature, while others are based upon in-depth research and written in a more "scholarly" fashion. Several essays are historical overviews that conclude with prophecies and predictions about the future of libraries and librarianship. The qualifications of the international cast of contributors range from a graduate student at Harvard to the Directors of the National Libraries of India, France, South Africa, and Germany. A wide range of perspectives from several cultures is offered for the pleasure and edification of the reader. When the book is taken as a whole, the quality is consistently high, despite the variations in style and purpose.

The content of each essay is unique, providing useful information from the differing perspectives of library administrators, educators, and historians. Approximately one-third of the essays are specifically devoted to exploring the problems, history, and/or future of libraries in various nations—namely Germany, India, Russia, South Africa, Brazil, and the United States. Although these essays are interesting, it is questionable whether some of them (such as the one on South Africa) should have been included, as they are largely historical accounts and the subtitle of this volume indicates that the focus of the essays will be the future of libraries and not their history. Affonso Romano de Sant'Anna (President of Brazil's Biblioteca Nacional), however, addresses the themes of this volume in a competent and highly readable essay entitled "Libraries, Social Inequality, and the Challenge of the Twenty-First Century." This particular essay provides the reader with several scenarios that juxtapose "scenes from Brazilian reality with news on current library projects" and proposes that "libraries, especially national ones, can be seen as reduced models of their surrounding social reality" (p. 279). As well as the essays that explore libraries in various nations, there are also essays devoted to the role of two prominent institutions, the Library of Congress and the French National Library, in the information age.

The most prosaic and expository piece is "Searching for the Catalog of Catalogs" by Jamie Frederic Metzl, a Harvard Law School student. Metzl's essay begins by extolling the pleasures of being a student who has used both Oxford's Bodleian Library and Harvard's Widener Library. Although many of Metzl's nostalgic sentiments are something most bibliophiles will empathize with, they are just sentiments and not particularly enlightening. On occasion, some of Metzl's musings—such as, "I miss leafing through the cards, following with my fingers the paths of so many before me" (p. 150)—make him seem like a Luddite. Luckily, Metzl eventually admits that "the library I really use and find most useful is the cyberlibrary I access in my pajamas" and that "the virtual library will be a miracle of access. It will open the doors of the Bodleian and Widener not only to students wanting to work at home, but to aspiring Mongolian academics, Namibian journalists, and anybody else with proper equipment and a little money" (p. 151).

Some of the most noteworthy essays include "Buy or Lease? Two Models for Scholarly Information at the End (or the Beginning) of an Era" by Anne Shumelda Okerson; "The Centrality of Communities to the Future of Major Public Libraries" by Kenneth E. Dowlin and Eleanor Shapiro; and "What Is a Digital Library? Technology, Intellectual Property, and the Public Interest" by Peter Lyman. Okerson delves into issues surrounding fair use and copyright in the age of electronic publishing by providing a concise overview of current U.S. copyright law, followed by a thorough examination of the "typical" electronic content license being offered by vendors, and concluding with a long-term prognosis for the electronic license. Dowlin and Shapiro begin by stating that the "future of major urban libraries lies in an understanding of community, connectivity and collaboration" (p. 176), and they then explore each of these central issues, as well as others such as intellectual freedom. They also offer guidelines for creating collaborative institutions, and insight into the process that will ensure that urban libraries thrive and not merely survive in the new era. Lyman's essay is noteworthy, even if it is now somewhat dated, in that he skillfully tackles the major themes of this volume by ably answering questions such as, "What is an information society?" and "What is electronic publishing?" as well as providing the reader with a good working definition of a digital library.

The preface to this volume states: "Libraries are today experiencing a technological revolution that goes well beyond anything that has existed since the invention of printing. It is not at all surprising that the digital library . . . should figure conspicuously in this book." Based upon this statement, it could be concluded that one purpose of this book is to discuss the new technologies that are transforming libraries, and yet many of the essays were written in 1996 or even earlier. An examination of footnotes and bibliographies reveals that, in general, the literature is not cited beyond 1993 or 1994. Hence, with a 1998 publication date, some information or essays are already out of date, given the pace at which technology develops. At the time these essays were written, full-text retrieval databases via a graphical World Wide Web interface were not yet having an impact on libraries and, in fact, the Web was still very much in its infancy in 1994. This would be a stronger monograph if one or two essays that discuss recent developments in Web technology had been added to the collection, instead of simply reprinting the original without any effort at updating.

Further access to the information through the inclusion of an index would be useful, especially from the viewpoint of scholars and students who may need to pull together information from different essays that discuss similar issues. Terms, concepts, or places such as the National Information Infrastructure, electronic publishing, CD-ROM technology, the Kellogg Foundation, or the New York Public Library, to name but a few, are mentioned multiple times and thus worthy of being indexed. I find it difficult to believe that librarians could design, edit, and publish a book and overlook the inclusion of an index. As it stands, there is no easy way to access the wide range of quality information found within the essays without reading the entire volume from cover to cover.

Two other volumes published in recent years immediately come to mind as valid points of comparison, one because its title is so similar and the other because it too deals with the future of libraries. The first is a report by the Benton Foundation that, despite its astonishingly similar title of Buildings, Books and Bytes: Libraries and Communities in the New Digital Age, has a much different scope than the book under review. This report focuses solely on the "community library" with an emphasis on the public's view of libraries. The Benton Foundation was concerned with gathering data that would assist in the development of a "public message about American libraries that reflected both the library leaders' visions and the American people's expectations" (Benton Foundation, 1996, p. 1). In doing so, it attempts to "compare library leaders' visions for the future with the public's prescription for libraries" (Benton Foundation, 1996, p. 3). Books, Bricks & Bytes is broader in scope—exploring a variety of libraries (public, academic, and national) and the ancillary industries related to librarianship—and different in that it presents scholarly research and is not based upon public opinion research, as is much of the report of the Benton Foundation. It is the breadth of topics covered in the 19 essays that makes Books, Bricks & Bytes particularly useful and guarantees that it will have appeal for a wide-ranging audience.

The second volume that comes to mind is Future Libraries: Dreams, Madness & Reality, a book in which the authors "look at the present state of libraries—their triumphs and tribulations—and cast a cold eye on the extrapolations of the present into the unknown, if confidently predicted, future" (Crawford & Gorman, 1995, p. 1). Future Libraries, the work of two highly respected and authoritative figures in the library world, is a book that not only examines technology and its impact on the future of libraries, but also offers pragmatic advice for practitioners on how to cope with the technological revolution and stay sane in the process. Reading all three of these recently published volumes in close succession, or even simultaneously, will help readers draw new insights into the future of the profession of librarianship and the world of libraries.

Despite a few minor flaws, such as the exclusion of an index, Graubard and LeClerc have compiled and edited an engaging, eclectic, and thought-provoking collection that rises above being just another future prognosticator. Its appeal will most likely be to bibliophiles, those who work in libraries, and students who study information and library science or, perhaps, communications or computer science. Books, Bricks & Bytes is recommended for all academic libraries with programs in library and information science, as well as libraries that support programs in disciplines that focus on communications and technology. It may also be a worthy purchase for larger public and academic libraries' collections.

Janie L. Hassard Wilkins
Warren Hunting Smith Library
Hobart and William Smith Colleges
Geneva, NY 14456
E-mail: [email protected]

References

Benton Foundation. (1996). Buildings, books and bytes: Libraries and communities in the digital age. Washington: Author. Available: http://www.benton.org/Library/Kellogg/buildings.html

Crawford, W., & Gorman, M. (1995). Future libraries: Dreams, madness & reality. Chicago: American Library Association.

The Virtual Workplace. Magid Igbaria and Margaret Tan, eds. Hershey, PA: Idea Group; 1998: 406 pp. Price: $49.95. (ISBN 1-878289047-0.)

In The Virtual Workplace, Magid Igbaria and Margaret Tan have collected an impressive body of research, including 19 separate studies pertaining to the emergent issues and problems of computer-mediated work environments. The book is divided into four major sections concerning the issues and benefits, applications, aspects of tele-work and remote work, and human issues related to the virtual workplace. The scope of the research is as broad as would be found in any general analysis of face-to-face (f2f) workplace issues, spanning such topics as distance training, organizational effectiveness, economics, labor issues, and work–family conflicts pertaining to tele-work.

Much of the research concerns the psycho-social aspects of the virtual workplace. Research pertaining to group and individual behavioral differences between f2f and online or "virtual" environments has been accumulating for years. Though such new environments present great opportunities for the exploration of psychological and social theory, research in this area has only recently begun to attract the attention of mainstream social scientists. The obvious utility of such studies concerns the application of findings to the growing implementation of these virtual environments as settings for educational, social, commerce-oriented, and geographically dispersed work groups. The not so obvious utility of this type of research, given that virtual environments often represent unique opportunities for observation and study, is that such investigations may test the generalizability and boundaries of current theory, and ultimately lead to deeper understanding within the social sciences in general. The research presented in The Virtual Workplace represents important progress along these lines; it demonstrates that studies in computer-mediated communications are garnering interest among scholars and can offer practical benefits as well as new theoretical insight into the rapidly transforming computerized work environment. It is noteworthy that the orientation of the collection is towards the treatment of virtual work environments as a new form of organizational structure, rather than as mere adjuncts to existing organizations (as has been the orientation of a number of past research efforts). Treating the virtual workplace as such should lead to a deeper understanding of the issues involved, unfettered by more traditional notions of organizational structure and behavior, while taking into full account the sometimes highly distinctive psycho-social orientation of online interactants.

As a whole, the collection is quite informative; however, as with any such body of edited work, some articles appear to be of greater merit than others. Though much of the research presented can be said to be groundbreaking, in that the issues under study have received little or no attention previously, it is regrettable that a few of the studies lack the necessary rigor to be considered anything more than preliminary or merely suggestive. It is perhaps understandable that, in a field where so little has been done, certain researchers might tend to opt for smaller samples, case studies, and qualitative assessments in order to lay the groundwork for future explorations.

Another minor shortcoming is that the references of the vast majority of the researchers contain very few, if any, citations of research published on the World Wide Web. Though paper publications are a vital source of information, much of the relevant research in this area (understandably, in light of the medium under consideration) is published in electronic form, either in peer-reviewed "e-journals" or on the personal websites of Internet researchers. Though in the case of publications found on personal websites one might tend to view such information as possibly questionable in terms of reliability, there are many high-quality peer-reviewed e-journals that offer credible collections of information. Likewise, many researchers' personal websites also contain credible information and, just as with paper publications, the reader must judge the viability of the work on a case-by-case basis. One may wonder whether the bias towards paper-based citations within The Virtual Workplace served to bias some of the research orientations of its contributors.

These minor criticisms aside, The Virtual Workplace is a highly informative and well-selected collection of research that presents a fairly comprehensive overview of emergent issues and should serve as a valuable resource for scholars interested in pursuing future research in this area.

James J. Sempsey
711 East Passyunk Ave.
Philadelphia, PA 19147
E-mail: [email protected]

Privacy on the Line: The Politics of Wiretapping and Encryption. Whitfield Diffie and Susan Landau. Cambridge, MA: The MIT Press; 1998: 342 pp. Price: $25.00. (ISBN 0-262-04167-7.)

The maturation of online environments has resulted in confusion about (and challenges to) the rights and responsibilities of individual netizens, government, and business. What will constitute the warp and woof of the emerging online social fabric? The encryption issue pits the interests of government against the interests of both business and individuals. The government is against absolute encryption because it would make it difficult or impossible to track many online illegal activities. In the past, if not also in the present, cryptography has been tied to issues of national security and affairs of state. Matters of national security are like near-death experiences and miracles, in that it takes more than simple deciphering to differentiate the real from the feigned. Business generally is interested in encryption as a tool to support secure online financial transactions. For many individual netizens, the encryption issue is all about the rights of free speech and privacy. Although cryptography has ancient origins, it has flourished in the 20th century. During the first half of this century, military applications dominated the field. In the most recent 30 years, a combination of declining computer costs and diverse civilian applications created a surge in academic and commercial interest in cryptography.

Cryptography is one instance of a set of disciplines and social practices that appear to be undergoing fundamental economic, technological, and social transformation in the late 20th century. Because the economic costs of encrypting and deciphering messages have dropped substantially in the last few years, encryption no longer is the province of privileged segments of society, such as the military and intelligence agencies. Cryptography is being mainstreamed. As cryptography diffuses into many segments of the population, the diffusion process raises new social concerns and conflicts.

In this book, Diffie and Landau undertake to articulate and address these concerns and conflicts. Diffie was one of the developers of public-key cryptography. David Brin (1998, p. 249) refers to Diffie as one of the "godfathers" of encryption. Diffie and Landau explore the 20th century history of encryption technologies. The authors attempt to "lift enough veils" about cryptography to enable the reader to develop an informed opinion on the subject. They examine the politics of cryptography in light of the social function of privacy. In the coming society, in which most human–human communication will be telecommunication and many close relationships will exist between people who rarely, if ever, meet in person, it becomes impossible to hedge about privacy. The authors assert that modern communication systems are, by their essential nature, interceptible. Privacy in long-distance communication is not something the conversants can achieve and guarantee on their own. Privacy has become less of a rule of nature (i.e., governed naturally by the constraints of the real world) and more of a rule of government and society. Ensuring privacy is becoming less the purview of the conversants and more the responsibility of the creators and maintainers of telecommunications media. The authors are optimistic that electronic cryptography may restore some of the privacy lost during earlier technological advances.

Of the four keywords in the title (privacy, politics, wiretapping, and encryption), encryption receives the most attention, and politics is the dominant lens. This is not an unbiased, dispassionate examination of cryptography, wiretapping, and privacy. The chapter on privacy is a little disappointing, because it is mainly a thumbnail history of the development (or decline) of personal privacy in the U.S. in the 20th century, and because the authors do not explore the relationships between privacy issues and encryption and wiretapping. The section on privacy issues in the 1990s is one paragraph long and cites two cautionary reports from offices of the U.S. federal government. The authors do note, however, that, whereas the societal needs for national security and law enforcement have both powerful political constituencies and the backing of major societal organizations, the basic societal need for privacy has no such muscle behind it.

In a field where "publish AND perish" seems to be the dominant Boolean argument, the bibliography at the end of this book is amazingly rich. The endnotes are extensive and informative. The index is well done.

The authors recount many interesting and harrowing tales from the crypt. Public-key cryptography is approximately 25 years old. Cryptography has attracted and involved a diverse cast of characters, including presidents, academics, garage inventors—even Hedy Lamarr. This is more of a history book than an educated gaze into the future of privacy, encryption, and wiretapping. Yet it is an interesting, engaging book—almost a page-turner. Any book that concludes with a section titled "Suppose we were to make a mistake?" almost certainly is worth reading, especially if it is about some aspect of computerization—arguably the defining hubris of the late 20th century.

One limitation of cryptography is that it appears to encompass only alphanumeric utterances, which certainly are important to humans and the human condition, but not comprehensive. As human colonization of cyberspace continues, alphanumeric utterances such as E-mail messages and electronic fund transfers may actually lose market share in the expanding universe of the virtual human being. People may wish to hide, veil, and encrypt what they do and where they go in cyberspace, as well as what they utter. It will be interesting to see if cryptography (not to mention government, business, and the society of netizens) is up to the challenge.

Thomas A. Peters
Western Illinois University
University Libraries
1 University Circle
Macomb, IL 61455-1390
E-mail: [email protected]

Reference

Brin, D. (1998). The transparent society: Will technology force us to choose between privacy and freedom? Reading, MA: Addison-Wesley.
