Preface

Partial evaluation has reached a point where theory and techniques are mature, substantial systems have been developed, and it appears feasible to use partial evaluation in realistic applications. This development is documented in a series of ACM SIGPLAN-sponsored conferences and workshops, Partial Evaluation and Semantics-Based Program Manipulation, held both in the United States and in Europe.

In 1987, the first meeting of researchers in partial evaluation took place in Gammel Avernæs, Denmark. Almost ten years later, the time was ripe to evaluate the progress achieved during the last decade and to discuss open problems, novel approaches, and research directions. A seminar at the International Conference and Research Center for Computer Science at Schloß Dagstuhl, located in a beautiful scenic region in the southwest of Germany, seemed ideally suited for that purpose.

The meeting brought together specialists on partial evaluation, partial deduction, metacomputation, program analysis, automatic program transformation, and semantics-based program manipulation. The attendants were invited to explore the dimensions of program specialization, program analysis, treatment of programs as data objects, and their applications. Besides discussing major achievements or failures and their reasons, the main topics were:

• Advances in Theory: Program specialization has undergone a rapid development during the last decade. Despite its widespread use, a number of theoretical issues still need to be resolved, including efficient treatment of programs as data objects; metalevel techniques including reflection, self-application, and metasystem transition; issues in generating and/or hand-writing program generators; termination and generalization issues in different languages; related topics in program analysis including abstract interpretation, flow analysis, and type inference; the relationships between different transformation paradigms, such as automated deduction, theorem proving, and program synthesis.

• Towards Computational Practice: Academic research has thrived in many locations. Broad practical experience has been gained, and stronger and larger program specializers have been built for a variety of languages, including Scheme, ML, Prolog, and C. Work is being initiated to bring the achievements of theory into practical use, but a number of pragmatic issues still need to be resolved: progress towards medium- and large-scale applications; environments and user interfaces (e.g., binding-time debuggers); integration of partial evaluation into the software development process; automated software reuse.

• Larger Perspectives: We also wanted to critically assess state-of-the-art techniques, summarize new approaches and insights, and survey challenging problems.

The Proceedings

All participants were invited to submit a full paper of their contribution for a proceedings volume [7]. The submitted papers were reviewed with outside assistance, and each paper was read by at least three referees. Technical quality, significance, and originality were the primary criteria for selection. The selected papers cover a wide and representative spectrum of topics and document state-of-the-art research activities in the area of partial evaluation. Three surveys were invited to complete the volume:

• In What not to do when writing an interpreter for specialization, Neil Jones carefully reviews the (sometimes non-obvious) pitfalls that must be avoided by the partial-evaluation apprentice in order to successfully generate compilers by partial evaluation,

• In A comparative revisitation of some program transformation techniques, Alberto Pettorossi and Maurizio Proietti survey the use of well-known transformation techniques in recent work, and

• In Metacomputation: Metasystem transition plus supercompilation, Valentin Turchin provides us with a personal account of the history and the present state of supercompilation and metasystem transitions. Professor Turchin presented his invited lecture on his 65th birthday, during the seminar.

Here is a brief topical summary of the contributions in the proceedings:

Partial evaluation:

• Thomas Reps and Todd Turnidge. Program specialization via program slicing. Demonstrates that specialization can be achieved by program slicing. A bridge-building paper between two research areas.

• Torben Mogensen. Evolution of partial evaluators: Removing inherited limits. Provides a rationalized, unified, and insightful view of recent developments in partial evaluation.

• Michael Sperber. Self-applicable online partial evaluation. Reports the successful self-application of a realistic online partial evaluator. Solves a long-standing problem.

• Alain Miniussi and David Sherman. Squeezing intermediate construction in equational programs. Applies specialization of stack-machine code to optimize equational programs.

Imperative programming:

• Mikhail A. Bulyonkov and Dmitry V. Kochetov. Practical aspects of specialization of Algol-like programs. Presents a new technique for decreasing the memory usage in the specialization of imperative programs.

• Alexander Sakharov. Specialization of imperative programs through analysis of relational expressions. Presents a powerful optimizer applying techniques from supercompilation and generalized partial computation to improve imperative code, using a dependence graph.

Functional programming:

• John Hughes. Type specialisation for the lambda calculus. An elegant step towards solving the (in)famous tagging problem in specializing higher-order interpreters for typed programs.

• Olivier Danvy. Pragmatics of type-directed partial evaluation. Extends the author’s approach to specialize compiled programs by avoiding computation duplication and making residual programs readable, using type annotations.

• Peter Sestoft. ML pattern match compilation and partial evaluation. Applies information-propagation techniques to the compilation of pattern matching. An excellent application of partial evaluation in a compiler.

Logic programming:

• Alberto Pettorossi and Maurizio Proietti. Automatic techniques for logic program specialization. Describes a general method to specialize logic programs with respect to sets of partially known data given as a conjunction of atoms. The correctness of the technique is shown by a proof based on unfolding/folding transformations.

• Michael Leuschel and Bern Martens. Global control for partial deduction through characteristic atoms and global trees. Describes a global control of partial deduction that ensures effective specialization and fine-grained polyvariance, and whose termination only depends on the termination of the local control component, a significant advance.

Metacomputation by supercompilation:

• Andrei P. Nemytykh, Victoria A. Pinchuk, and Valentin F. Turchin. A self-applicable supercompiler. Shows the feasibility of self-application of supercompilers by demonstrating its use with examples. The solution of a long-standing open problem.

• Robert Gluck and Morten Heine Sørensen. A roadmap to metacomputation by supercompilation. A survey paper with a careful explanation of supercompilation and its application in metacomputation, reporting state-of-the-art techniques and including a comprehensive list of references on the topic.

Generating extensions:

• Jesper Jørgensen and Michael Leuschel. Efficiently generating efficient generating extensions in Prolog. Reports the first successful hand-written program-generator generator (cogen) for a logic programming language. Exhibits significant speedups with respect to generic specializers and cogens generated by self-application.

• Scott Draves. Compiler generation for interactive graphics using intermediate code. Reports on a hand-written cogen for an intermediate language and its successful application to graphics programming. Includes the possibility of runtime code generation for specialized procedures.

Multi-level systems:

• Flemming Nielson and Hanne Riis Nielson. Multi-level lambda-calculi: an algebraic description. Considering the recent surge of interest in multi-level calculi, the authors revisit the “B-level-languages” of their book on two-level functional languages. In this paper, they provide us with a unifying treatment of various different calculi by exhibiting a general algebraic framework which can be instantiated to all other known calculi.

• John Hatcliff and Robert Gluck. Reasoning about hierarchies of online program specialization systems. Successful self-application and metasystem transition is harder to achieve in online specializers than in offline ones. The paper develops a framework which facilitates metasystem transitions involving multiple levels for online specializers.

Program analyses:

• John Gallagher and Laura Lafave. Regular approximation of computation paths in logic and functional languages. Introduces computation paths as a novel means to control the polyvariance and termination of specialization techniques for logic and functional languages.

• Wei-Ngan Chin, Siau-Cheng Khoo, and Peter Thiemann. Synchronization analyses for multiple recursion parameters. Extends the tupling transformation to functions with multiple recursion parameters. Develops an original framework to identify synchronous handling of recursion parameters and gives terminating algorithms to perform the tupling transformation.

Applications:

• Sandrine Blazy and Philippe Facon. An approach for the understanding of scientific application programs based on program specialization. Applies specialization to understanding and maintaining “dusty deck” FORTRAN programs. The paper extends previous work with an interprocedural analysis.

• Charles Consel, Luke Hornof, Francois Noel, Jacques Noye, and Nicolae Volanschi. A uniform approach for compile-time and run-time specialization. Presents a language-independent partial-evaluation framework which is suited for standard partial-evaluation techniques, as well as for run-time specialization. The major application is run-time specialization of one of the most widely used programming languages: C.

Further Partial-Evaluation Resources

Good starting points for the study of partial evaluation are the textbook by Jones, Gomard, and Sestoft [13], the tutorial notes by Consel and Danvy [5], and the tutorial notes on partial deduction by Gallagher [9]. Further material can be found in the proceedings of the Gammel Avernæs meeting [2, 8], in the proceedings of the ACM conferences and workshops on Partial Evaluation and Semantics-Based Program Manipulation (PEPM) [4, 10, 16, 17, 19], and in special issues of various journals [11, 12, 14]. An online bibliography is being maintained by Peter Sestoft [18].

Acknowledgements

We wish to thank the staff at Schloß Dagstuhl for their efficient assistance and for making the meeting possible. Thanks are also due to Elisabeth Meinhardt and Gebhard Engelhart for the help with organizational matters. And finally, thanks to all participants for a lively and inspiring meeting.

1 Seminar Program

Monday, February 12

Session 1: Specialization of Imperative Programs I
Chair: Sandrine Blazy

A Uniform Approach for Compile-Time and Run-Time Specialization
Charles Consel, Luke Hornof, Francois Noel, Jacques Noye, Nicolae Volanschi

Session 2: Specialization of Imperative Programs II
Chair: Julia L. Lawall

Data Specialization
Erik Ruf, Todd Knoblock

Toward Partial Evaluation of Side-Effecting Scheme
Kenichi Asai

Session 3: Specialization of Imperative Programs III
Chair: Peter Sestoft

Practical Aspects of Specialization of Algol-like Programs
Mikhail A. Bulyonkov, Dmitry V. Kochetov

Specialization of Imperative Programs through Analysis of Relational Expressions
Alexander Sakharov

Session 4: Specialization of Imperative Programs IV
Chair: Alexander Letichevsky

An Automatic Interprocedural Analysis for the Understanding of Scientific Application Programs Based on Program Specialization
Sandrine Blazy, Philippe Facon

Squeezing Intermediate Construction in Equational Programs
Alain Miniussi, David Sherman

Session 5: Demonstrations
Chair: Michael Sperber

The M2Mix System
Mikhail A. Bulyonkov, Dmitry V. Kochetov

Tempo: A Partial Evaluation System for the C Programming Language
Charles Consel, Luke Hornof, Francois Noel, Jacques Noye, Nicolae Volanschi

Meta-ML
Tim Sheard

Tuesday, February 13

Session 6: Theory I
Chair: Helmut Schwichtenberg

Multi-Level Lambda-Calculi: an Algebraic Description
Flemming Nielson, Hanne Riis Nielson

Regular Approximations of Computation Paths in Logic and Functional Languages

John P. Gallagher, Laura Lafave

The Functional Approach to Analyses and Transformations of Imperative Programs

Barbara Moura, Charles Consel

Session 7: Invited Lecture
Chair: Mikhail Bulyonkov

What Not to Do When Writing an Interpreter for Specialisation
Neil D. Jones

Session 8: Theory II
Chair: Torben Mogensen

Program Development by Proof Transformation
Helmut Schwichtenberg

Deriving and Applying Program Transformers
David Basin

Session 9: Logic Programming I
Chair: John Gallagher

A Theory of Logic Program Specialization and Generalization for Dealing with Input Data Properties
Alberto Pettorossi, Maurizio Proietti

Global Control for Partial Deduction through Characteristic Atoms and Global Trees

Michael Leuschel, Bern Martens

Session 10: Logic Programming II
Chair: Manuel Hermenegildo

Efficiently Generating Efficient Generating Extensions in Prolog
Jesper Jørgensen, Michael Leuschel

Semantics-Directed Generation of Abstract Machines
Stephan Diehl

Session 11: Demonstrations
Chair: Kyung-Goo Doh

Regular Approximations of Computation Paths in Logic and Functional Languages

John P. Gallagher, Laura Lafave

Data Specialization of Graphics Shading Code
Todd Knoblock, Erik Ruf

Psychedelic Graphics
Scott Draves

Experiments on Partial Evaluation in APS
Alexander Letichevsky

Wednesday, February 14

Session 12: Supercompilation I
Chair: John Hatcliff

A Self-Applicable Supercompiler
Andrei P. Nemytykh, Victoria A. Pinchuk, Valentin F. Turchin

Specification of a Class of Supercompilers
Andrei Klimov

A Roadmap to Metacomputation by Supercompilation
Robert Gluck, Morten Heine Sørensen

Session 13: Supercompilation II
Chair: Neil D. Jones

Nonstandard Semantics of Programming Languages
Sergei Abramov

Invited Lecture: Metacomputation: MST plus SCP
Valentin F. Turchin

Thursday, February 15

Session 14: Functional Programming I
Chair: Charles Consel

Type Specialisation for the Lambda Calculus
John Hughes

Some Relationships between Partial Evaluation and Meta-Programming
Tim Sheard

Compiler Generation for Interactive Graphics using Intermediate Code
Scott Draves

Session 15: Functional Programming II
Chair: Yoshihiko Futamura

Self-Applicable Online Partial Evaluation
Michael Sperber

Program Specialization via Program Slicing
Thomas Reps, Todd Turnidge

A Comparative Revisitation of Some Program Transformation Techniques
Alberto Pettorossi, Maurizio Proietti

Session 16: Functional Programming III
Chair: Erik Ruf

Evolution of Partial Evaluators: Removing Inherited Limits
Torben Æ. Mogensen

ML Pattern Match Compilation and Partial Evaluation
Peter Sestoft

Session 17: Functional Programming IV
Chair: Thomas Reps

Lockstep Transformations
Kristian Nielsen

Unravelling Computations
S. Doaitse Swierstra

Session 18: Demonstrations
Chair: Wei-Ngan Chin

A Binding-Time Analysis Ensuring Termination of Partial Evaluation
Neil D. Jones

C-Mix Demonstration
Peter H. Andersen

Session 19: Demonstrations
Chair: Todd Knoblock

Program Slicing, Differencing, and Merging
Thomas Reps

Applying Multiple Specialization to Program Parallelization
Manuel Hermenegildo, German Puebla

Friday, February 16

Session 20: Functional Programming V
Chair: Dirk Dussart

Hylomorphism in Triplet Form, an Ideal Internal Form for Functional Program Optimization
Akihiko Takano

Partial Evaluation and Separate Compilation
Rogardt Heldal, John Hughes

Session 21: Functional Programming VI
Chair: David Schmidt

Synchronization Analyses for Multiple Recursion Parameters
Wei-Ngan Chin, Siau-Cheng Khoo, Peter Thiemann

Reasoning about Hierarchies of Online Program Specialization Systems
John Hatcliff, Robert Gluck

Pragmatics of Type-Directed Partial Evaluation
Olivier Danvy

2 Abstracts of Presentations (alphabetical)

Nonstandard Semantics of Programming Languages

Sergei Abramov

This paper introduces the notions of semantics modifier and non-standard semantics of programming languages. The universal resolving algorithm and the neighborhood analyzer are considered as examples of semantics modifiers. The precise definition of semantics modifier and non-standard semantics is given and illustrated with additional examples. Finally, it is shown that programming tools for effective implementation of non-standard semantics – non-standard interpreters, non-standard compilers – can be obtained using a powerful specializer (e.g. a supercompiler) as a result of several metasystem transitions.

The paper is available by anonymous FTP from
ftp://ftp.botik.ru/pub/local/Sergei.Abramov/nons-sem.dvi.zip

C-Mix Demonstration

Peter Holst Andersen

C-Mix is a partial evaluator for the ANSI C programming language. It was originally developed by Lars Ole Andersen, and is currently developed and maintained by the author. The theory is developed to cover almost all of ANSI C [1], and most of it is implemented.

The implementation supports programs that are strictly conforming to the ANSI C standard. A strictly conforming program shall only use features described in the standard, and may not produce any result that depends on undefined, unspecified or implementation-defined behavior. In general, C-Mix will not optimize non-strictly conforming parts of the program, but rather suspend the offending operations to run-time.

The C-Mix prototype implementation is available for non-commercial use for research and teaching. The distribution includes source code and a user manual. More information and links to C-Mix papers can be found on the C-Mix home page:
http://www.diku.dk/research-groups/topps/activities/cmix.html

Towards Partial Evaluation of Side-Effecting Scheme

Kenichi Asai

I will present my attempts to make a partial evaluator for Scheme programs which contain side-effects, using the online approach. After presenting my motivation in terms of reflection, I will show the preaction mechanism, which is used to residualize I/O-type side-effects. It also gives us a convenient way to avoid a code elimination problem and to keep the order of side-effects. Then, I will consider how to cope with assignments to variables and data structures. The concepts of destroy lists and scopes are introduced here to find executable assignments. Despite these various attempts, the partial evaluator is still insufficient for use in reflective languages, and some kind of offline analysis seems inevitable for better partial evaluation.

Deriving and Applying Program Transformers

David Basin

Not available.

An Approach for the Understanding of Scientific Application Programs Based on Program Specialization

Sandrine Blazy and Philippe Facon

This paper reports on an approach for improving the understanding of old programs which have become very complex due to numerous extensions. We have adapted partial evaluation techniques for program understanding. These techniques mainly use propagation through statements and simplifications of statements. We focus here on the automatic interprocedural analysis, and we specify both tasks for call statements in terms of inference rules, with notations taken from the specification languages B and VDM. We describe how we have implemented that interprocedural analysis in a tool and used it to improve program understanding. The difficulty of the analysis comes from the lack of well-defined interprocedural mechanisms and the complexity of visibility rules in Fortran.

Practical Aspects of Specialization of Algol-like Programs

Mikhail A. Bulyonkov and Dmitry V. Kochetov

A “linearized” scheme of polyvariant specialization for imperative languages is described in the paper. The scheme is intended to increase the efficiency of specialization. The main properties of the scheme are linear generation of residual code and a single memory shared by the different variants of the specialization process. The scheme was used in a partial evaluator for the Modula-2 language. Some benchmarks of the evaluator are discussed to demonstrate the efficiency of the processor.

The M2Mix Partial Evaluator

Mikhail A. Bulyonkov and Dmitry V. Kochetov

The M2Mix is a partial evaluator for the complete Modula-2 language. It is implemented as a compiler generator, so it accepts an annotated Modula-2 program and static data and produces another Modula-2 program, called a generating extension, which in turn is (compiled and) executed to generate residual code in a slightly extended Modula-2 language. A special postprocessing phase optimizes the residual code and brings it back to Modula-2.

The static/dynamic annotations are provided by the user in the form of pseudo-comments. Typically, some input files which store unknown data are annotated as dynamic. However, the user may annotate any global or local variable or a procedure parameter in order to control the partial evaluator. Special simple annotations are required for external procedures.

Upon a user’s request, the M2Mix can generate a listing containing information from various program analyses, such as binding-time analysis, pointer analysis, configuration analysis, etc. Since in the M2Mix system the static/dynamic division depends not only on def/use chains, the special binding-time debugger explaining why something became dynamic turned out to be especially helpful.

The M2Mix system is implemented in C and runs on an IBM PC under MS DOS or OS/2. Obviously, a Modula-2 compiler is needed to compile source programs and generating extensions. The system shows very modest memory requirements and high speed at the same time. For example, specialization of the Lex kernel interpreter (translated into Modula-2 and slightly modified for better specializability) with respect to tables produced by Lex for Modula-2, performed on a machine with an Intel 486 processor and 4MB of RAM, took 2.52 seconds and 189KB of extra memory.

Synchronization Analyses for Multiple Recursion Parameters

Wei-Ngan Chin, Siau-Cheng Khoo, and Peter Thiemann

Tupling is a transformation tactic to obtain new functions, without redundant calls and/or multiple traversals of common inputs. In [3], we presented an automatic method for tupling functions with a single recursion parameter each. In this paper, we propose a new family of parameter analyses, called synchronization analyses, to help extend the tupling method to functions with multiple recursion parameters. To achieve better optimization, we formulate three different forms of tupling optimizations for the elimination of intra-call traversals, the elimination of inter-call traversals and the elimination of redundant calls. We also guarantee the safety of the extended method by ensuring that its transformation always terminates.
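
As a reminder of what tupling achieves in the single-parameter case, here is a minimal, hypothetical Haskell sketch (a standard textbook example, not taken from the paper): the naive Fibonacci function repeats overlapping recursive calls, and tupling the two calls into one function that returns a pair removes the redundancy.

    -- Naive definition: fib (n-1) and fib (n-2) recompute shared work.
    fib :: Int -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)

    -- Tupled version: one recursive call returns (fib n, fib (n + 1)).
    fibPair :: Int -> (Integer, Integer)
    fibPair 0 = (0, 1)
    fibPair n = let (a, b) = fibPair (n - 1) in (b, a + b)

    fib' :: Int -> Integer
    fib' n = fst (fibPair n)

The synchronization analyses described above address the harder case where functions recurse over several parameters at once.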

A Uniform Approach for Compile-Time and Run-Time Specialization

Charles Consel, Luke Hornof, Francois Noel, Jacques Noye, and Nicolae Volanschi

As partial evaluation gets more mature, it is now possible to use this program transformation technique to tackle realistic languages and real-size application programs. However, this evolution raises a number of critical issues that need to be addressed before the approach becomes truly practical. First of all, most existing partial evaluators have been developed based on the assumption that they could process any kind of application program. This attempt to develop universal partial evaluators does not address some critical needs of real-size application programs. Furthermore, as partial evaluators treat richer and richer languages, their size and complexity increase drastically. This increasing complexity has revealed the need for enhanced design principles. Finally, exclusively specializing programs at compile time seriously limits the applicability of partial evaluation, since a large class of invariants in real-size programs are not known until run time and therefore cannot be taken into account. In this paper, we propose design principles and techniques to deal with each of these issues. By defining an architecture for a partial evaluator and its essential components, we are able to tackle a rich language like C without compromising the design and the structure of the resulting implementation. By designing a partial evaluator targeted towards a specific application area, namely system software, we have developed a system capable of treating realistic programs. Because our approach to designing a partial evaluator clearly separates preprocessing and processing aspects, we are able to introduce run-time specialization in our partial evaluation system as a new way of exploiting information produced by the preprocessing phase.

Tempo: A Partial Evaluation System for the C Programming Language

Charles Consel, Luke Hornof, Francois Noel, Jacques Noye, and Nicolae Volanschi

Tempo is a partial evaluation system for the C programming language. It incorporates an off-line partial evaluation strategy, consisting of an analysis phase and a specialization phase. Tempo is designed so that the specialization phase can be performed either at compile time or at run time. To avoid the problems associated with universal partial evaluators, Tempo was created to specialize a well-defined set of applications, namely “system” programs. This choice allows realistic programs to be treated.

We will demonstrate the principles, functionality, and unique characteristics (flow-sensitive binding-time analysis, accurate treatment of complex data structures and pointers, run-time specialization, etc.) of Tempo by running it on a number of example programs.

Type-Directed Partial Evaluation

Olivier Danvy

Type-directed partial evaluation stems from the residualization of arbitrary static values in dynamic contexts, given their type. Its algorithm coincides with the one for coercing a subtype value into a supertype value, which itself coincides with the one of normalization in the lambda-calculus. Type-directed partial evaluation is thus used to specialize compiled, closed programs, given their type.

Since Similix, let-insertion is a cornerstone of partial evaluators for call-by-value procedural programs with computational effects. It prevents the duplication of residual computations, and more generally maintains the order of dynamic side effects in residual programs.

This article describes the extension of type-directed partial evaluation to insert residual let expressions. This extension requires the user to annotate arrow types with effect information. It is achieved by delimiting and abstracting control, comparably to continuation-based specialization in direct style. It enables type-directed partial evaluation of effectful programs (e.g., a definitional lambda-interpreter for an imperative language) that are in direct style. The residual programs are in A-normal form.
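
To make the role of let-insertion concrete, here is a minimal, hypothetical Haskell sketch (names assumed for illustration, not taken from the article): specialising square applied to a dynamic computation by naive unfolding would duplicate that computation, whereas let-insertion binds it once in the residual program.

    -- Source-level function that uses its argument twice.
    square :: Int -> Int
    square x = x * x

    -- Naive unfolding of  square (expensive d)  yields
    --   expensive d * expensive d
    -- duplicating the dynamic work (and any side effects it performs).
    -- Let-insertion instead produces a residual program of this shape:
    residual :: (Int -> Int) -> Int -> Int
    residual expensive d = let x = expensive d in x * x

Residual programs of this shape, with every intermediate computation named by a let, are exactly what A-normal form requires.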

Semantics-Directed Generation of Abstract Machines

Stephan Diehl

Abstract machines are virtual target architectures which support the concepts of the source language. We present a generator which automatically produces a compiler and abstract machine from a natural semantics specification. Whereas all existing semantics-directed compiler generators use partial evaluation or a direct translation into a fixed target language, we chose pass separation as the key transformation of our system. The execution times of the abstract machine programs produced by our generated compiler compare to those of target programs produced by compilers generated by other semantics-directed generators. The generated specifications of compilers and abstract machines are suitable as a starting point for handwriting compilers and abstract machines. Our generator is fully automated and its core transformations are proved correct.

Compiler Generation for Interactive Graphics using Intermediate Code

Scott Draves

This paper describes a compiler generator (cogen) designed for interactive graphics, and presents preliminary results of its application to pixel-level code. The cogen accepts and produces a reflective intermediate code in continuation-passing, closure-passing style. This allows low overhead run-time code generation as well as multi-stage compiler generation. We extend partial evaluation techniques by allowing unrestricted lifting and partially static integers. In addition to some standard examples, we examine graphics kernels such as bcopy, one-dimensional finite filtering, and packed pixel access.

Psychedelic Graphics

Scott Draves

Not available.

Regular Approximation of Computation Paths in Logic and Functional Languages

John P. Gallagher and Laura Lafave

The aim of this work is to compute descriptions of successful computation paths in logic or functional program executions. Computation paths are represented as terms, built from special constructor symbols, each constructor and symbol corresponding to a specific clause or equation in a program. Such terms, called trace-terms, are abstractions of computation trees, which capture information about the control flow of the program. A method of approximating trace-terms is described, based on well-established methods for computing regular approximations of terms. The special function symbols are first introduced into programs as extra arguments in predicates or functions. Then a regular approximation (with respect to a given class of program executions) is computed, giving a regular description of the terms appearing in every argument in the program. The approximation of the extra arguments (the trace-terms) can then be examined to see what computation paths were followed during the computation. This information can then be used to control an off-line specialisation system. A key aspect of the analysis is the use of suitable widening operations during the regular approximation, in order to preserve information on determinacy and branching structure of the computation. This method is applicable to both logic and functional languages, and appears to offer appropriate control information in both formalisms.

A Roadmap to Metacomputation by Supercompilation

Robert Gluck and Morten Heine Sørensen

This paper gives a gentle introduction to Turchin’s supercompilation and its applications in metacomputation with an emphasis on recent developments. First, a complete supercompiler, including positive driving and generalization, is defined for a functional language and illustrated with examples. Then a taxonomy of related transformers is given and compared to the supercompiler. Finally, we put supercompilation into the larger perspective of metacomputation and consider three metacomputation tasks: specialization, composition, and inversion.

Reasoning about Hierarchies of Online Program Specialization Systems

John Hatcliff and Robert Gluck

The conventional wisdom is that online techniques are not well-suited for obtaining useful results from metasystem hierarchies. However, this conclusion ignores both practical and theoretical progress. Our goal is to identify and clarify the foundational issues involved in hierarchies of online specialization systems. To achieve this, we develop a very simple online specialization system which focuses tightly on the following problematic points of online specialization: (1) semantics of specialization, (2) properties of program encodings and identifying position of entities in a hierarchy, (3) tracking unknown values across levels in metasystem hierarchies.

Partial Evaluation and Separate Compilation

Rogardt Heldal and John Hughes

Hitherto all partial evaluators have processed a complete program to produce a complete residual program. We are interested in treating programs as collections of modules which can be processed independently: ‘separate partial evaluation’, so to speak. In this paper we still assume that the original program is processed in its entirety, but we show how to specialise it to the static data bit-by-bit, generating a different module for each bit. When the program to be specialised is an interpreter, this corresponds to specialising it to one module of its object language at a time: each module of the object language gives rise to one module of the residual program.

We have been forced to distinguish three different binding-times, which we call static, late static, and dynamic. Static data is present when a module is compiled. Late static data is present when compiling a module which imports such a module. Dynamic data is of course present only at run-time. As an example, we have constructed an interpreter for a simple language with jumps, in which each module can import other modules and jump to labels in them. The main function in the interpreter has three parameters: the module, a label to interpret from, and the register contents. The first is of course static, and the last is dynamic. But the second parameter is late static: this function is invoked from other modules, and at the time of invocation the label to jump to will be known.

When we specialise the interpreter to a ‘module’, only the static data is known. Late static values are therefore treated as dynamic. But the residual code is divided into two parts: ‘purely dynamic’ functions with only dynamic parameters, and the rest. Functions with late static parameters are placed in an interface file; they may be invoked when other modules are compiled, and are then specialised further with the late static parameters treated as static. Purely dynamic functions on the other hand need no further specialisation, and are placed in a code module for immediate compilation. These functions need not be processed further during later specialisations, but they may of course be called from the interface file, which will result in residual calls to the code module from other compiled modules.
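
The three binding times can be pictured through the type of the interpreter's main function; the following Haskell-style sketch uses assumed placeholder types and a placeholder body, and is not code from the paper.

    -- A sketch of the binding-time split described above (assumed types).
    type Module    = [(String, String)]   -- static: known when the module is compiled
    type Label     = String               -- late static: known when an importing module is compiled
    type Registers = [Int]                -- dynamic: known only at run time

    run :: Module -> Label -> Registers -> Registers
    run _prog _entry regs = regs          -- placeholder; a real interpreter dispatches on the label

Specialising run to a known Module yields the interface file; once the Label becomes known at an importing module's compile time, the functions there are specialised further, leaving only the dynamic Registers argument.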

Applying Abstract Multiple Specialization to Program Parallelization

Manuel Hermenegildo and German Puebla

We present and demonstrate an application of automatic multiple specialization in logic program parallelization, in the context of the CIAO system. CIAO is a multi-paradigm compiler and run-time system aimed at providing efficient implementations of a range of LP, CLP, and CC programming languages. The CIAO compiler performs automatic parallelization of (constraint) logic programs using global analysis information based on abstract interpretation. During parallelization, analysis information is used to detect calls that should be run in parallel using as few run-time tests as possible. Sometimes, especially when the user provides no information to the analyzer, it is not possible to parallelize a program without introducing run-time tests. In this case, multiple specialization is used to optimize the automatically parallelized programs. First, multi-variant re-analysis of the program is efficiently performed using incremental analysis algorithms. Then, the optimizations allowed in each version of a procedure generated during multi-variant (re-)analysis are determined, and a minimizing algorithm is applied which obtains minimal programs while retaining all possible optimizations. Finally, this minimal program is materialized and optimized. The program specialization used is abstract in the sense that it is performed with respect to abstract values rather than concrete ones, as is the case in more traditional partial evaluation systems. This is, to the best of our knowledge, the first logic program compiler to automatically make use of abstract (multiple) specialization.

Type Specialisation for the Lambda Calculus

John Hughes

Partial evaluation of typed languages has long suffered from the problem that the types appearing in residual programs are constrained too strongly by those appearing in the unspecialised original program. In the simplest case, the only types that can appear in residual programs are those from the original source. When, for example, programs are compiled by specialising an interpreter, then the only types that can appear in the compiled code are those used in the interpreter. In particular, since the interpreter must represent values of different types by injecting them into a universal type, then in the compiled code values are also tagged members of this type, and tags must be checked whenever a value is used. If the language being compiled is itself typed, then these tags and checks are unnecessary. When the interpreter is a self-interpreter, the ‘optimality’ criterion is not met: mix sint p = p fails because the left hand side tags its data and the right hand side does not. In this paper we present what we believe is the first partial evaluator that can remove these tags. The object language is the simply typed lambda calculus with a two-level type system: static tags are removed, while dynamic tags remain. A novel feature of our partial evaluator is that we can derive static information about expressions which others treat as purely dynamic, such as a dynamic conditional. But just as compilers reject programs whose types do not match, so our partial evaluator rejects programs whose static information does not match. For example, the two arms of a dynamic conditional must carry the same static information. We are forced to relax the functional nature of the mapping from source expressions to static information; rather, a relation holds between them, which means that the same source expression may be related to many different static values. Static information cannot be computed by a bottom-up evaluation, therefore, and indeed our specialiser is expressed not as an interpreter, but as a system of inference rules specifying how static information can be inferred. Unsurprisingly, the inference of static information (which includes type information) is very similar to type inference. The static value inferred for an expression is then used to derive a suitable residual type for the specialised expression. Since the same expression in the source can be related to many different static values, its specialisations can have many different residual types. An interesting aspect is that our specialiser need not use unfolding: we can ‘evaluate’ static expressions without unfolding function calls by inferring their static value. To demonstrate this we have implemented a specialiser which doesn’t unfold at all. But for this reason our specialiser is not ‘optimal’: when we compute mix sint p we eliminate type tags, but the residual program must be unfolded somewhat in order to recover p. The present specialiser is just a toy, but we believe the techniques will scale up to make possible improved specialisation of real typed languages.
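
The tagging problem Hughes refers to can be pictured with a small, hypothetical Haskell fragment (standard background, not code from the paper): a typed interpreter must inject all interpreted values into a universal datatype, and an ordinary partial evaluator leaves those injections and the corresponding checks in the residual program.

    -- Universal value type used by a typed self-interpreter.
    data Val = VInt Int | VBool Bool | VFun (Val -> Val)

    -- Interpreting addition requires tag checks on both arguments.
    addVal :: Val -> Val -> Val
    addVal (VInt m) (VInt n) = VInt (m + n)
    addVal _        _        = error "run-time type error"

A conventional specialiser keeps the VInt tags and the failure branch in the compiled code; type specialisation aims to produce residual code that works directly on Int, removing both the tags and the checks.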

What Not to Do When Writing an Interpreter for Specialisation

Neil D. Jones

A partial evaluator, given a program and a known “static” part of its input data, outputs a residual program in which computations depending only on the static data have been performed. Ideally the partial evaluator would be a “black box” able to extract nontrivial static computations whenever possible; which never fails to terminate; and which always produces residual programs of reasonable size and maximal efficiency, so all possible static computations have been done. Practically speaking, partial evaluators fall short of this goal; they may loop, sometimes pessimise, and/or explode code size. A partial evaluator is analogous to a spirited horse: while impressive results can be obtained when used well, the user must know what he/she is doing. Our thesis is that this knowledge can be communicated to new users of these tools. This paper presents a series of examples, concentrating on a quite broad and on the whole successful application area: using specialisation to remove interpretative overhead. The examples, both positive and negative, illustrate the effects of program style on the efficiency and size of the target programs obtained by specialising an interpreter with respect to a known source program.
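
As a minimal illustration of the kind of static computation a specialiser performs, consider the following standard textbook example in Haskell (not taken from the paper): when the exponent is static data, the recursion of a power function can be unfolded away entirely.

    -- Source program: exponent n first, base x second.
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- Specialising power to the static input n = 3 yields a residual
    -- program equivalent to:
    power3 :: Int -> Int
    power3 x = x * (x * (x * 1))

Specialising an interpreter to a known source program is the same idea on a larger scale: the static data is the source program, and the residual program is the target program whose size and efficiency the paper's examples examine.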

BTA Algorithms to Ensure Termination of Off-line Partial Evaluation

Neil D. Jones and Arne J. Glenstrup

A partial evaluator, given a program and a known “static” part of its input data, outputs a residual program in which computations depending only on the static data have been precomputed. Ideally the partial evaluator is a “black box” which both extracts nontrivial static computations whenever possible, and never fails to terminate. Practically speaking, partial evaluators fall short of this goal: they sometimes loop (typical of functional programming systems), or never loop but often give excessively conservative results (typical in logic programming).

This paper presents efficient algorithms (currently being implemented) for binding-time analysis as used by off-line specialisers. These algorithms ensure that the specialiser performs nontrivial static computations if possible, and is at the same time guaranteed to terminate.

Further, the developed techniques have more general applications, including termination of term rewriting systems.

Efficiently Generating Efficient Generating Extensions in Prolog

Jesper Jørgensen and Michael Leuschel

The so-called “cogen approach” to program specialization, writing a compiler generator instead of a specializer, has been used with considerable success in partial evaluation of both functional and imperative languages. This paper demonstrates that this approach is also applicable to partial evaluation of logic programming languages, also called partial deduction. Self-application has not been as much in focus in partial deduction as in partial evaluation of functional and imperative languages, and the attempts to self-apply partial deduction systems have, as of yet, not been altogether successful. So, especially for partial deduction, the cogen approach could prove to be of considerable importance when it comes to practical applications. It is demonstrated that using the cogen approach one gets very efficient compiler generators which generate very efficient generating extensions, which in turn yield very good and non-trivial specialisation.
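
For orientation, the relationship between a specializer mix, a compiler generator cogen, and a generating extension can be summarised by the classical Futamura projections, written here in the same informal notation as the abstracts (standard background, not taken from the paper):

    mix int source  =  target     (first projection: specializing an interpreter compiles a program)
    mix mix int     =  compiler   (second projection: self-application yields a compiler)
    mix mix mix     =  cogen      (third projection: double self-application yields a compiler generator)
    cogen p         =  p-gen      (the generating extension of p, mapping p's static input to a specialized program)

The cogen approach writes cogen by hand instead of obtaining it by the third projection, avoiding the difficulties of self-application that the abstract mentions.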

Specification of a Class of Supercompilers

Andrei Klimov

The specification of a class of supercompilers for a small functional language, in the form of Natural Semantics, is presented. It defines a relation between source and residual programs with respect to an initial configuration, and the notions of driving, generalization, and configuration splitting. The specification states what a supercompiled program is to be, and says nothing about how a supercompiler takes decisions to fold, to generalize, or to split a configuration.

The purpose of the specification is to divide the task of proving properties of supercompilers into two parts: proving the property of the specification and proving that a particular supercompiler agrees with the specification. For example, the correctness theorem is proven once for the specification. Certain termination properties can be formulated and proven as well, such as the termination of Turchin’s algorithm of configuration splitting (1987).

Global Control for Partial Deduction through Characteristic Atoms and Global Trees

Michael Leuschel and Bern Martens

Recently, considerable advances have been made in the (online) control of logic program specialisation. A clear conceptual distinction has been established between local and global control, and on both levels concrete strategies as well as general frameworks have been proposed. For global control in particular, recent work has developed concrete techniques based on the preservation of characteristic trees (limited, however, by a given, arbitrary depth bound) to obtain a very precise control of polyvariance. On the other hand, the concept of an m-tree has been introduced as a refined way to trace “relationships” of partially deduced atoms, thus serving as the basis for a general framework within which global termination of partial deduction can be ensured in a non ad hoc way. Blending both, formerly separate, contributions, in this paper, we present an elegant and sophisticated technique to globally control partial deduction of normal logic programs. Leaving unspecified the specific local control one may wish to plug in, we develop a concrete global control strategy combining the use of characteristic atoms and trees with global (m-)trees. We thus obtain partial deduction that always terminates in an elegant, non ad hoc way, while providing excellent specialisation as well as fine-grained (but reasonable) polyvariance. We conjecture that a similar approach may contribute to improve upon current (online) control strategies for functional program transformation methods such as (positive) supercompilation.

The Experiments on Partial Evaluation in APS

Alexander A. Letichevsky

The algebraic programming system APS [15] has been used for the development of a partial evaluator for an extended higher-order untyped lambda calculus. The partial evaluator is considered as a generic program which defines an operational semantics of a programming language based on the λ-calculus. To obtain satisfactory results on self-application, three consistent levels of extension of the λ-calculus are considered: the pure λ-calculus, a semantic extension compatible with its classical denotational semantics, and a metalevel extension, which allows λ-terms to be manipulated as syntactic objects. The partial evaluators for the pure λ-calculus and its semantic extension are proved to possess the main properties: correctness, completeness, termination, and optimality with respect to generic parameters. There are no restrictions on the specialized program, and the partial evaluator diverges only when the specialized program diverges for all values of the dynamic variables. The evaluators use online binding-time analysis and are expressed as systems of rewriting rules implemented in the algebraic programming system APS. All partial evaluators may be expressed in the metalevel extension of the λ-calculus and used for self-application and specialization.

Current Developments in Partial Evaluation for Equational Programs

Alain Miniussi and David Sherman

Equational programs, which use term-rewriting as their basic implementation method, can be greatly improved by partial evaluation of intermediate code programs written in EM code. Such a transformation removes various kinds of overhead associated with rewriting, such as intermediate rewriting steps, boxing and unboxing of arithmetic values, and intermediate constructions resulting from recursive definitions. Unfolding strategies for equational programs must be both terminating and good enough with respect to pragmatic criteria. In this paper we give a general introduction to the particular problems associated with partial evaluation of equational programs, and propose some criteria for deciding whether an unfolding strategy is good enough from a practical standpoint. We then present a new algorithm for driving unfolding of EM code programs, and show that, in addition to terminating, it produces a good result. As we have already implemented the new strategy, this final point is demonstrated with a number of concrete examples.

Evolution of Partial Evaluators: Removing Inherited Limits

Torben Æ. Mogensen

We show the evolution of partial evaluators over the past ten years from a particular perspective: the attempt to remove limits on the structure of residual programs that are inherited from structural bounds in the original programs. It will often be the case that a language allows an unbounded number or size of a particular feature, but each program (being finite) will only have a finite number or size of these features. If the residual programs cannot overcome the bounds given in the original program, that can be seen as a weakness in the partial evaluator, as it potentially limits the effectiveness of residual programs. The inherited limits are best observed by specializing a self-interpreter and examining the object programs produced by this specialisation. We show how historical developments in partial evaluators gradually remove inherited limits, and suggest how this principle can be used as a guideline for further development.

The Functional Approach to Analyses and Transformations of Imperative Programs

Barbara Moura and Charles Consel

Since the late eighties, the imperative programming language community has been studying intermediate program representations where imperative features are represented functionally [6, 22]. These intermediate representations, such as Static Single Assignment [6], are widely used in advanced optimizing compilation. However, although imperative features are dealt with in a functional fashion, they are still based upon an imperative framework. We propose to go a step further by transforming an imperative program into a fully executable functional form.

We present a set of transformations on the functional representation to put it into a suitable form for accurate program analyses and transformation, as accurate as the result of analyzing the imperative program directly. The main contributions of this work are:

• to propose a set of transformations that makes imperative programs amenable to functional programming technology,

• to show that new program analyses and transformations can be developed in a functional framework and applied to imperative languages.

We apply our transformation scheme to programs written in a higher-order, imperative language and show its practicality and accuracy in the context of a partial evaluator for higher-order, pure, functional programs.
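
To make the idea concrete, here is a minimal sketch of our own (in Haskell, not the authors' transformation system) of how a mutable loop can be rendered as a fully executable functional form: every assigned variable becomes a parameter of a tail-recursive function.

    -- Hypothetical illustration: the imperative loop
    --   s := 0; i := 0;
    --   while (i < n) { s := s + a[i]; i := i + 1; }
    -- written as a tail-recursive function whose parameters carry the state.
    sumLoop :: [Int] -> Int
    sumLoop a = go 0 0
      where
        n = length a
        go s i
          | i < n     = go (s + a !! i) (i + 1)  -- loop body: new state from old
          | otherwise = s                        -- loop exit: return the result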

A Self-Applicable Supercompiler

Andrei P. Nemytykh, Victoria A. Pinchuk, and Valentin F. Turchin

A supercompiler is a program which can perform a deep transformation of programs using a principle which is similar to partial evaluation, and can be referred to as metacomputation. Supercompilers that have been in existence up to now (see [21], [20]) were not self-applicable: this is a more difficult problem than self-application of a partial evaluator, because of the more intricate logic of supercompilation. In the present paper we describe the first self-applicable model of a supercompiler and present some tests. Three features distinguish it from the previous models and make self-application possible: (1) The input language is a subset of Refal which we refer to as flat Refal. (2) The process of driving is performed as a transformation of pattern-matching graphs. (3) Metasystem jumps are implemented, which allows the supercompiler to avoid interpretation whenever direct computation is possible.


Lockstep Transformations

Kristian Nielsen

We view program transformation as consisting of three distinct parts: external interface, choice of internal specialization technique, and control of transformation. The aim of this work is to achieve a better understanding of the relation between the internal specialization techniques used in existing partial evaluation and related transformation techniques.

A class of simple functional languages is proposed for which a class of lockstep transformations is defined and proven correct w.r.t. the operational semantics. A simple functional language represents a "typical" language in the field of partial evaluation, and lockstep transformation captures some common elements of partial evaluation-like transformations. The framework is general enough to describe faithfully a wide range of transformations, including for example the partial evaluator Similix and Wadler's deforestation.

By recasting different techniques as instances of a single, more general framework, their similarities are exposed, thus facilitating the exchange of techniques and ideas among different parts of the field. As a concrete example of this we give an efficient implementation of Wadler's deforestation based on the implementation techniques used in Similix.
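
As a reminder of the kind of transformation the framework covers, the following Haskell sketch shows a standard deforestation example; it is illustrative only and is not taken from the paper.

    -- The composition below builds an intermediate list ...
    sumSquares :: Int -> Int
    sumSquares n = sum (map (^ 2) [1 .. n])

    -- ... which deforestation fuses into a single loop with no intermediate list.
    sumSquares' :: Int -> Int
    sumSquares' n = go 1 0
      where
        go i acc
          | i > n     = acc
          | otherwise = go (i + 1) (acc + i ^ 2)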

Multi-Level Lambda-Calculi: an Algebraic Description

Flemming Nielson and Hanne Riis Nielson

Two-level lambda-calculi have been heavily utilised for applications such as partial evaluation, abstract interpretation, and code generation. Each of these applications poses different demands on the exact details of the two-level structure and the corresponding inference rules. We therefore formulate a number of existing systems in a common framework. This is done in such a way as to conceal those differences between the systems that are not essential for the multi-level ideas (like whether or not one restricts the domain of the type environment to the free identifiers of the expression) and thereby to reveal the deeper similarities and differences. In their most general guise the multi-level lambda-calculi allow multi-level structures that are not restricted to (possibly finite) linear orders and thereby generalise previous treatments in the literature.
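
A rough impression of what a two-level syntax looks like can be given by the following Haskell sketch; the constructor names and the restriction to exactly two levels are our simplification, not the authors' formulation, which generalises the level structure beyond linear orders.

    -- A minimal two-level lambda-calculus syntax: each construct carries a
    -- binding-time annotation saying whether it is evaluated at the static
    -- (early) level or residualized at the dynamic (late) level.
    data Level = Static | Dynamic
      deriving (Eq, Show)

    data Expr
      = Var String
      | Lam Level String Expr   -- abstraction, annotated with its level
      | App Level Expr Expr     -- application, annotated with its level
      | Lift Expr               -- coerce a static value into a dynamic one
      deriving Show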


A Comparative Revisitation of Some Program TransformationTechniques

Alberto Pettorossi and Maurizio Proietti

We revisit the main techniques of program transformation which are used in partial evaluation, mixed computation, supercompilation, generalized partial computation, rule-based program derivation, program specialization, compiling control, and the like. We present a methodology which underlies these techniques as a common pattern of reasoning and explains the various correspondences which can be established among them. This methodology consists of three steps: i) symbolic computation, ii) search for regularities, and iii) program extraction. We also discuss some control issues which occur in performing these steps.
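
The flavour of the three steps can be illustrated with a standard unfold/fold derivation (our own example, not one from the paper): symbolic computation on the naive Fibonacci definition exposes the regularity that two consecutive values suffice, and program extraction turns that regularity into a linear-time program.

    fib :: Int -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)

    -- Extracted program: the pair (fib k, fib (k+1)) is computed incrementally.
    fib' :: Int -> Integer
    fib' n = fst (go n)
      where
        go 0 = (0, 1)                    -- (fib 0, fib 1)
        go k = let (a, b) = go (k - 1)   -- (fib (k-1), fib k)
               in (b, a + b)             -- (fib k, fib (k+1))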

A Theory of Logic Program Specialization and Generalization forDealing with Input Data Properties

Alberto Pettorossi and Maurizio Proietti

We address the problem of specializing logic programs w.r.t. the contexts where they are used. We assume that these contexts are specified by means of computable properties of the input data. We describe a general method by which, given a program P, we can derive a specialized program P' such that P and P' are equivalent w.r.t. all input data satisfying a given property. Our method extends the techniques for partial evaluation of logic programs based on Lloyd and Shepherdson's approach, where a context can only be specified by means of a finite set of bindings for the variables of the input goal. In contrast to most program specialization techniques based on partial evaluation, our method may achieve superlinear speedups, and it does so by using a novel generalization technique.

Program Slicing, Differencing, and Merging

Thomas Reps

Not available.

Program Specialization via Program Slicing

Thomas Reps and Todd Turnidge

I will describe the use of program slicing to perform a certain kind of program-specialization operation. The specialization operation that slicing performs is different from the specialization operations performed by algorithms for partial evaluation, supercompilation, bifurcation, and deforestation. In particular, I present an example in which the specialized program that is created via slicing could not be created as the result of applying partial evaluation, supercompilation, bifurcation, or deforestation to the original unspecialized program. Specialization via slicing also possesses an interesting property that partial evaluation, supercompilation, and bifurcation do not possess: the latter operations are somewhat limited in the sense that they support tailoring of existing software only according to the ways in which parameters of functions and procedures are used in a program. Because parameters to functions and procedures represent the range of usage patterns that the designer of a piece of software has anticipated, partial evaluation, supercompilation, and bifurcation support specialization only in ways that have already been "foreseen" by the software's author. In contrast, the specialization operation that slicing supports permits programs to be specialized in ways that do not have to be anticipated by the author of the original program.
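
The following Haskell sketch (our own illustration, not the talk's example) shows the kind of specialization slicing enables: slicing a two-output program with respect to one output removes the computation of the other, even though no parameter has been fixed to a constant.

    -- A program that returns both the sum and the product of a list.
    sumAndProduct :: [Int] -> (Int, Int)
    sumAndProduct []       = (0, 1)
    sumAndProduct (x : xs) = let (s, p) = sumAndProduct xs
                             in (x + s, x * p)

    -- Slicing with respect to the first component keeps only what the sum
    -- depends on; the product computation disappears entirely.
    sumOnly :: [Int] -> Int
    sumOnly []       = 0
    sumOnly (x : xs) = x + sumOnly xs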

Data Specialization

Erik Ruf and Todd Knoblock

Program staging improves the performance of a program whose inputs vary at different rates by separating it into two phases: an early phase which performs computations dependent only on slowly-changing inputs, and a late phase which performs the remainder of the work given the remaining inputs and the results of the early computations. Staging techniques based on partial evaluation achieve power and generality by reifying these results as code in the form of a dynamically generated residual program. In this talk, we introduce an alternative approach in which the results of early computations are reified as a data structure, allowing both the early and late phases to be generated statically. By avoiding dynamic code manipulation, we trade some power and generality in return for simplicity of implementation and rapid payback. We describe an implementation based on memoization and its use in staging computations in an interactive graphics rendering system.
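
A minimal sketch of the staging idea, under hypothetical names of our own choosing (the real system stages shading computations), might look as follows in Haskell: the early phase reifies its results as a data structure, and the statically written late phase consumes that structure together with the fast-changing input.

    -- Results of the early computations, reified as data rather than code.
    newtype Cache = Cache [Double]

    -- Depends only on the slowly-changing input; a placeholder computation.
    early :: [Double] -> Cache
    early coeffs = Cache (map (* 2) coeffs)

    -- Uses the cached data plus the late, rapidly-varying input.
    late :: Cache -> Double -> Double
    late (Cache cs) x = sum (zipWith term cs [0 ..])
      where
        term c i = c * x ^ (i :: Int)

    -- Staged use: run 'early' once, then 'late' many times as x varies.
    example :: [Double]
    example = let c = early [1, 2, 3] in map (late c) [0.0, 0.5, 1.0]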

Data Specialization of Graphics Shading Code

Erik Ruf and Todd Knoblock

Demonstration: Data Specialization for Interactive Graphics. We will demonstrate our system for interactive manipulation of shading parameters for three-dimensional rendering. The system takes as input user-defined shading procedures, written in a subset of C, which are then automatically restaged for interactive use. Users of the system typically experiment with multiple values for a single shader parameter while leaving the others constant. Thus, we benefit by generating a late stage that performs only those computations depending on the parameter being varied; all other values needed by the shader are precomputed and cached by the early stage. The resulting improvement in speed makes it possible to interactively view parameter changes for relatively complex shading models such as procedural solid texturing.

Specialization of Imperative Programs through Analysis of RelationalExpressions

Alexander Sakharov

An original analysis method for specialization of imperative programs is described in this paper. This analysis is an inter-procedural data flow method operating on control flow graphs and collecting information about program expressions. It applies to various procedural languages. The set of analyzed formulas includes equivalences between program expressions and constants, linear-ordering inequalities between program expressions and constants, equalities originating from some program assignments, and atomic constituents of controlling expressions of program branches. The analysis is executed by a worklist-based fixpoint algorithm which interprets conditional branches and ignores some impossible paths. This analysis algorithm incorporates a simple inference procedure that utilizes both positive and negative information. The analysis algorithm is shown to be conservative; its asymptotic time complexity is cubic. A polyvariant specialization of imperative programs, which is based on the information collected by the analysis, is also defined at the level of nodes and edges of control flow graphs. The specialization incorporates a further refinement of analysis information through local propagation. Multiple variants are produced by replicating subgraphs whose in-links are limited to one node.
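
To indicate the general shape of such an analysis, here is a much-simplified worklist fixpoint sketch in Haskell; the fact domain is reduced to a plain set of facts joined by intersection, the entry node is fixed to 0, and none of the paper's relational expressions, branch interpretation, or inference procedure is modelled.

    import qualified Data.Map.Strict as M
    import qualified Data.Set as S

    type Node  = Int
    type Facts = S.Set String   -- stand-in for the paper's analyzed formulas

    worklist
      :: M.Map Node [Node]            -- successor edges of the control flow graph
      -> (Node -> Facts -> Facts)     -- transfer function of each node
      -> Facts                        -- facts assumed at the entry node 0
      -> M.Map Node Facts             -- facts holding on entry to each node
    worklist succs transfer entryFacts = go [0] (M.singleton 0 entryFacts)
      where
        go []       env = env
        go (n : ns) env =
          let inF  = M.findWithDefault S.empty n env
              outF = transfer n inF
              push m (e, pending) =
                let old = M.lookup m e
                    new = maybe outF (S.intersection outF) old  -- join = intersection
                in if old == Just new
                     then (e, pending)                 -- no change at successor m
                     else (M.insert m new e, m : pending)
              (env', changed) = foldr push (env, []) (M.findWithDefault [] n succs)
          in go (changed ++ ns) env'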

Program Development by Proof Transformation

Helmut Schwichtenberg

Goad's technique of program development by 'pruning' proof trees is explained. As an example we discuss the maximal segment problem of Bates/Constable. The existence proof corresponding to the obvious quadratic algorithm is transformed into a proof yielding a linear algorithm, using additional knowledge about the data (in this case, monotonicity of the measure function).
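
For orientation, the two algorithms behind the example can be written out directly (our own sketch in Haskell; the point of the talk is that the linear version is obtained by transforming the proof, not by rewriting the code by hand).

    import Data.List (tails)

    -- The obvious quadratic solution: examine the sum of every segment.
    maxSegQuadratic :: [Int] -> Int
    maxSegQuadratic xs = maximum (0 : [s | suffix <- tails xs, s <- scanl1 (+) suffix])

    -- The linear solution: track the best segment ending at the current element.
    maxSegLinear :: [Int] -> Int
    maxSegLinear = snd . foldl step (0, 0)
      where
        step (endingHere, best) x =
          let endingHere' = max 0 (endingHere + x)
          in (endingHere', max best endingHere')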


ML Pattern Match Compilation and Partial Evaluation

Peter Sestoft

We derive a compiler for ML-style pattern matches. It is conceptually simple and produces reasonably good compiled matches. The derivation is inspired by the instrumentation and partial evaluation of naive string matchers (Consel and Danvy 1989; Futamura and Nogi 1988). Following that paradigm, we first present a general and naive ML pattern matcher, instrument it to collect and exploit extra information, and show that partial evaluation of the instrumented general matcher with respect to a given match produces an efficient specialized matcher.

We then discard the partial evaluator and show that a match compiler can be obtained just by slightly modifying the instrumented general matcher. The resulting match compiler is interesting in its own right, and naturally detects inexhaustive matches and redundant match rules.
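
The underlying idea can be hinted at with a toy pattern language in Haskell (our simplification, far smaller than ML patterns): a naive general matcher re-tests the value for every rule, whereas its specialization with respect to a fixed match collapses into a single constructor dispatch.

    data Val = VNil | VCons Val Val | VInt Int        deriving (Eq, Show)
    data Pat = PAny | PInt Int | PNil | PCons Pat Pat deriving Show

    -- Naive general matcher: does this pattern match this value?
    match :: Pat -> Val -> Bool
    match PAny        _           = True
    match (PInt n)    (VInt m)    = n == m
    match PNil        VNil        = True
    match (PCons p q) (VCons v w) = match p v && match q w
    match _           _           = False

    -- Specializing the matcher w.r.t. the fixed match [PNil, PCons PAny PAny]
    -- yields, in effect, one case dispatch: once the first rule has tested for
    -- VNil, the second rule needs no further tests.
    firstMatching :: Val -> Maybe Int    -- index of the first matching rule
    firstMatching VNil        = Just 0
    firstMatching (VCons _ _) = Just 1
    firstMatching _           = Nothing  -- the match is inexhaustive for VInt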

Some Relationships between Partial Evaluation and Meta-Programming

Tim Sheard

Meta-programs are programs that manipulate object-programs. Language tools usually consist of an object-language in which the programs being manipulated are expressed, and a meta-language that is used to describe the manipulation. One use of meta-programming technology is to generate object-programs.

We describe a system of phased computation where meta-programming abilities are built into a language rather than added on as extra-language pre- or post-processors. We describe how such a system can be statically typed, and how it can provide many of the benefits usually associated with macro systems, partial evaluators, and high-level specification languages.
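
A sketch in this spirit, using a toy object-language datatype instead of typed quotation and antiquotation (the names are ours, not the system's), is a meta-program that generates an object-program from a static exponent.

    -- A tiny object language of arithmetic expressions.
    data Code
      = CVar String
      | CInt Int
      | CMul Code Code
      deriving Show

    -- A meta-program: given a static exponent, generate the object-program
    -- that multiplies its argument that many times.
    powerGen :: Int -> Code -> Code
    powerGen 0 _ = CInt 1
    powerGen n x = CMul x (powerGen (n - 1) x)

    -- powerGen 3 (CVar "x")
    --   ==> CMul (CVar "x") (CMul (CVar "x") (CMul (CVar "x") (CInt 1)))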

Meta-ML

Tim Sheard

Not available.

Self-Applicable Online Partial Evaluation

Michael Sperber

We propose a hybrid approach to partial evaluation to achieve self-application of realistic online partial evaluators. Whereas the offline approach to partial evaluation leads to efficient specializers and self-application, online partial evaluators perform better specialization at the price of efficiency. Moreover, no online partial evaluator for a realistic language has been successfully self-applied. We propose a binding-time analysis for an online partial evaluator. The analysis distinguishes between static, dynamic, and unknown binding times. Thus, it makes some reduce/residualize decisions offline while leaving others to the specializer. The analysis does not introduce unnecessary generalizations. After some standard binding-time improvements, our partial evaluator successfully self-applies.
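
The three-valued binding-time domain might be rendered as follows in Haskell; the ordering and the join shown here are our guess at one reasonable choice, not a description of the actual analysis.

    -- Binding times: Static and Dynamic are decided offline; Unknown defers
    -- the reduce/residualize decision to the online specializer.
    data BT = Static | Unknown | Dynamic
      deriving (Eq, Ord, Show)

    -- Join of binding times where two values meet, e.g. at a conditional.
    -- Relies on the assumed ordering Static < Unknown < Dynamic.
    btJoin :: BT -> BT -> BT
    btJoin = max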

Partial Evaluation through Partial Parametrisation

S. Doaitse Swierstra

We have presented a way of defining combinator parsers in such a way that they achieve deterministic parsing and error recovery for LL(1) grammars. Only the basic combinators have to be redefined, whereas all higher-level combinators, which have been defined using the low-level ones, can be used without having to be adapted.

The program, which makes use of partial parametrisation, implicitly performs a grammar analysis and optimises itself (provided the implementation of the functional language is fully lazy) into the efficient, more conventional form that is normally generated by parser generators.

We have further shown how this method cannot easily be extended to parsers which have been formulated in the so-called monadic form. We indicated, however, how, by making use of techniques borrowed from the attribute grammar area, program transformations can be performed which enable the aforementioned optimising form of execution.
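
For contrast, here is the conventional list-of-successes formulation of the basic combinators in Haskell (an illustrative baseline of our own; the talk's point is that only such basic combinators need to be redefined to obtain deterministic LL(1) parsing with error recovery, while higher-level combinators remain unchanged).

    newtype Parser a = Parser { runParser :: String -> [(a, String)] }

    -- Recognise a single given character.
    pSym :: Char -> Parser Char
    pSym c = Parser $ \s -> case s of
      (x : xs) | x == c -> [(c, xs)]
      _                 -> []

    -- Succeed without consuming input.
    pSucceed :: a -> Parser a
    pSucceed a = Parser $ \s -> [(a, s)]

    -- Choice: try both alternatives.
    (<|>) :: Parser a -> Parser a -> Parser a
    Parser p <|> Parser q = Parser $ \s -> p s ++ q s

    -- Sequencing: apply the result of the first parser to that of the second.
    (<.>) :: Parser (a -> b) -> Parser a -> Parser b
    Parser p <.> Parser q =
      Parser $ \s -> [ (f a, s'') | (f, s') <- p s, (a, s'') <- q s' ]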

Hylomorphism in Triplet Form: An Ideal Internal Form for FunctionalProgram Optimization

Akihiko Takano (joint work with Erik Meijer)

A hylomorphism is originally defined as the composition of a catamorphism (a generalized fold) and an anamorphism (a generalized unfold). In our previous work, we introduced a triplet-form representation of hylos and showed how the essence of shortcut deforestation could be captured in a more general setting. In this talk, we provide a brief and intuitive view of the three components of a hylo, and demonstrate how it facilitates automatic program optimization.
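
The list-based special case of a hylomorphism conveys the idea (our sketch; the triplet form of the paper works over an arbitrary functor, with a natural transformation between the unfold and the fold): production and consumption are interleaved, so no intermediate structure is ever built.

    -- Unfold a seed with 'coalg' and fold the produced elements with 'alg',
    -- consuming each element as soon as it is produced.
    hylo :: (b -> Maybe (a, b)) -> (a -> c -> c) -> c -> b -> c
    hylo coalg alg nil = go
      where
        go seed = case coalg seed of
          Nothing        -> nil                 -- unfold finished
          Just (a, rest) -> alg a (go rest)     -- fuse production and consumption

    -- Example: factorial as an unfold of n, n-1, ..., 1 fused with (*),
    -- without building the intermediate list.
    factorial :: Integer -> Integer
    factorial = hylo step (*) 1
      where
        step 0 = Nothing
        step k = Just (k, k - 1)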


Metacomputation: Metasystem Transitions plus Supercompilation

Valentin F. Turchin

Metacomputation is a computation which involves metasystem transitions (MST for short) from a computing machine M to a metamachine M' which controls, analyzes, and imitates the work of M. Semantics-based program transformation, such as partial evaluation and supercompilation (SCP), is metacomputation. Metasystem transitions may be repeated, as when a program transformer gets transformed itself. In this manner MST hierarchies of any height can be formed.

The paper reviews one strain of research which was started in Russia in the late 1960s and early 1970s and became known for the development of supercompilation as a distinct method of program transformation. After a brief description of the history of this research line, the paper concentrates on those results and problems where supercompilation is combined with repeated metasystem transitions.

References

[1] L. O. Andersen. Program Analysis and Specialization for the C Programming Language. PhD thesis, DIKU, Dept. of Computer Science, University of Copenhagen, Universitetsparken 1, DK-2100 København Ø, May 1994.

[2] D. Bjørner, A. P. Ershov, and N. D. Jones, editors. Partial Evaluation and Mixed Computation, Amsterdam, 1988. North-Holland.

[3] W.-N. Chin. Towards an automated tupling strategy. In Schmidt [17], pages 119–132.

[4] C. Consel, editor. Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation PEPM '92, San Francisco, CA, June 1992. Yale University. Report YALEU/DCS/RR-909.

[5] C. Consel and O. Danvy. Tutorial notes on partial evaluation. In Proc. 20th Annual ACM Symposium on Principles of Programming Languages, pages 493–501, Charleston, South Carolina, Jan. 1993. ACM Press.

[6] R. Cytron, J. Ferrante, B. K. Rosen, M. N. Wegman, and F. K. Zadeck. Efficiently computing static single assignment form and the control dependence graph. ACM Trans. Prog. Lang. Syst., 13(4):451–490, Oct. 1991.


[7] O. Danvy, R. Glück, and P. Thiemann, editors. Partial Evaluation, volume 1110 of Lecture Notes in Computer Science, Dagstuhl, Germany, Feb. 1996. Springer-Verlag, Heidelberg.

[8] A. P. Ershov, D. Bjørner, Y. Futamura, K. Furukawa, A. Haraldsson, and W. Scherlis, editors. Special Issue: Selected Papers from the Workshop on Partial Evaluation and Mixed Computation, 1987 (New Generation Computing, vol. 6, nos. 2,3). Ohmsha Ltd. and Springer-Verlag, 1988.

[9] J. Gallagher. Specialization of logic programs. In Schmidt [17], pages 88–98.

[10] P. Hudak and N. D. Jones, editors. Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation PEPM '91, New Haven, CT, June 1991. ACM. SIGPLAN Notices 26(9).

[11] Journal of Functional Programming 3(3), special issue on partial evaluation, July 1993. Neil D. Jones, editor.

[12] Journal of Logic Programming 16 (1,2), special issue on partial deduction, 1993. Jan Komorowski, editor.

[13] N. D. Jones, C. K. Gomard, and P. Sestoft. Partial Evaluation and Automatic Program Generation. Prentice Hall, 1993.

[14] Lisp and Symbolic Computation 8 (3), special issue on partial evaluation, 1995. Peter Sestoft and Harald Søndergaard, editors.

[15] A. A. Letichevsky, J. V. Kapitonova, and S. V. Konozenko. Computations in APS. Theoretical Comput. Sci., 119:145–171, 1993.

[16] W. Scherlis, editor. Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation PEPM '95, La Jolla, CA, June 1995. ACM Press.

[17] D. Schmidt, editor. Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation PEPM '93, Copenhagen, Denmark, June 1993. ACM Press.

[18] P. Sestoft. Bibliography on partial evaluation. Available through URL ftp://ftp.diku.dk/pub/diku/dists/jones-book/partial-eval.bib.Z.

[19] P. Sestoft and H. Søndergaard, editors. Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation PEPM '94, Orlando, Fla., June 1994. ACM.


[20] V. F. Turchin. The concept of a supercompiler. ACM Trans. Prog. Lang. Syst., 8(3):292–325, July 1986.

[21] V. F. Turchin, R. M. Nirenberg, and D. V. Turchin. Experiments with a supercompiler. In 1982 ACM Symposium on Lisp and Functional Programming, Pittsburgh, Pennsylvania, pages 47–55. ACM, 1982.

[22] D. Weise, R. F. Crew, M. Ernst, and B. Steensgaard. Value dependence graphs: Representation without taxation. In Proc. 21st Annual ACM Symposium on Principles of Programming Languages, pages 297–310, Portland, OR, Jan. 1994. ACM Press.
